WO2016073018A1 - Storing excess data in a raid 60 array - Google Patents


Info

Publication number
WO2016073018A1
Authority
WO
WIPO (PCT)
Prior art keywords
raid
array
data
parity
storing
Prior art date
Application number
PCT/US2015/010155
Other languages
French (fr)
Inventor
Gururaj S MORABAD
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development Lp filed Critical Hewlett Packard Enterprise Development Lp
Publication of WO2016073018A1 publication Critical patent/WO2016073018A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1076Parity data used in redundant arrays of independent storages, e.g. in RAID systems

Definitions

  • striping may be performed by storing bytes or blocks of incoming or received data across eight disks, i.e. S1, S2, S3, S4, S5, S6, S7, and S8, in a sequential rotating manner.
  • data D1, D2, P (parity data), and Q (parity data) may form a data stripe 114 across disks S1, S2, S3, and S4.
  • data D6, P (parity data), Q (parity data), D7, and D8 may form another data stripe across disks S1, S2, S3, and S4.
  • data D3, D4, P (parity data), and Q (parity data) may form another data stripe 116 across disks S5, S6, S7, and S8.
  • parity data P across disks S1, S2, S3, and S4 may constitute a parity arm or parity disk.
  • parity data Q across disks S1, S2, S3, and S4 may constitute another parity arm or parity disk.
  • Each span of RAID 6 disks may include at least two parity arms. If data in excess of available storage capacity is received by RAID 60 array 300, the storage module may use one of the two parity disks to store the excess data. In an instance, the storage module may store the excess data in parity disk 302 by overwriting or replacing existing parity data in the parity disk 302 of the RAID 60 array.
  • computing device 200 may include an alarm module.
  • An alarm module may generate an alarm or a system event once the storage module stores the excess data in a parity arm of the online RAID 60 array (for example, 104).
  • the alarm may alert a user (for example, a system administrator) that a RAID 60 level has been broken with the addition of excess data in one of the parity arms.
  • the user may insert one or more additional disks in the RAID 60 array (for example, 104) to reconstruct the state of the RAID 60 array that existed prior to storing the excess data in one of its parity disks.
  • An additional storage disk (or disks), for example disks S3 and S8, may be included in a RAID 60 array (for example, 104) to reconstruct an earlier state of the storage volume of the RAID 60 array that existed prior to storing excess data in one of its parity disks.
  • an additional disk may be inserted in each span of the RAID 60 array to create a storage volume according to an earlier RAID 60 level.
  • FIG. 5 is a flow chart of an example method 500 for storing excess data in a RAID 60 array.
  • the method 500 which is described below, may be executed on a computing device such as computing device 102 of FIG. 1 and computing device 200 of FIG. 2. However, other computing systems may be used as well.
  • a determination may be made, for example by data module 106, whether data received for storing in a RAID 60 array exceeds available data storage capacity of the RAID 60 array (for example, 104).
  • the excess data may be stored, for example by storage module 108, in a parity disk of the RAID 60 array (for example, 104).
  • FIG. 6 is a block diagram of an example system 600 for storing excess data in a RAID 60 array.
  • System 600 includes a processor 602 and a machine-readable storage medium 604 communicatively coupled through a system bus.
  • Processor 602 may be any type of Central Processing Unit (CPU), microprocessor, or processing logic that interprets and executes machine-readable instructions stored in machine-readable storage medium 604.
  • Machine-readable storage medium 604 may be a random access memory (RAM) or another type of dynamic storage device that may store information and machine-readable instructions that may be executed by processor 602.
  • machine-readable storage medium 604 may be Synchronous DRAM (SDRAM), Double Data Rate (DDR), Rambus DRAM (RDRAM), Rambus RAM, etc.
  • machine-readable storage medium 604 may be a non-transitory machine-readable medium.
  • machine-readable storage medium 604 may be remote but accessible to system 600.
  • Machine-readable storage medium 604 may store instructions 606, and 608.
  • instructions 606 may be executed by processor 602 to determine whether data received for storing in a RAID 60 array exceeds available data storage capacity of the RAID 60 array.
  • Instructions 608 may be executed by processor 602 to write excess data to one of the two parity arms present in the RAID 60 array, if the data received for storing in the RAID 60 array exceeds the available data storage capacity of the RAID 60 array.
  • FIG. 5 is shown as executing serially; however, it is to be understood and appreciated that the present and other examples are not limited to the illustrated order.
  • the example systems of FIGS. 1 , 2 and 6, and method of FIG. 5 may be implemented in the form of a computer program product including computer-executable instructions, such as program code, which may be run on any suitable computing device in conjunction with a suitable operating system (for example, Microsoft Windows, Linux, UNIX, and the like).
  • Embodiments within the scope of the present solution may also include program products comprising non-transitory computer-readable media for carrying or having computer-executable instructions or data structures stored thereon.
  • Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer.
  • such computer-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM, magnetic disk storage or other storage devices, or any other medium which can be used to carry or store desired program code in the form of computer-executable instructions and which can be accessed by a general purpose or special purpose computer.
  • the computer readable instructions can also be accessed from memory and executed by a processor.
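
The reconstruction step described above, inserting an additional disk to restore the earlier RAID 60 level, implies regenerating the parity arm that was overwritten by excess data. The sketch below is an illustrative assumption, not part of the disclosure: it regenerates one parity block per stripe as the bytewise XOR of the stripe's data blocks (a full RAID 6 rebuild would also regenerate the Q arm with its own erasure code).

```python
# Hypothetical rebuild sketch: `stripes` is a list of stripes, each a list of
# equal-length data blocks; the result is the regenerated parity arm.
def regenerate_arm(stripes: list[list[bytes]]) -> list[bytes]:
    """Recompute one parity block per stripe by XOR-ing its data blocks."""
    arm = []
    for blocks in stripes:
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                parity[i] ^= b  # bytewise XOR accumulates across the stripe
        arm.append(bytes(parity))
    return arm

# Two one-byte-block stripes: 0x01 ^ 0x02 = 0x03, 0x0f ^ 0xf0 = 0xff.
rebuilt = regenerate_arm([[b"\x01", b"\x02"], [b"\x0f", b"\xf0"]])
```

Once each stripe's parity is regenerated onto the added disk, dual redundancy is restored for that span.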


Abstract

Some examples described herein relate to storing excess data in a RAID 60 array. In an example, a determination may be made whether data received for storing in a RAID 60 array exceeds available data storage capacity of the RAID 60 array. If the data received for storing in the RAID 60 array exceeds the available data storage capacity of the RAID 60 array, excess data may be stored in a parity disk of the RAID 60 array.

Description

STORING EXCESS DATA IN A RAID 60 ARRAY
Background
[001] Redundant Array of Inexpensive Disks (RAID) is a method of combining several hard drives into one logical unit. The physical disks combine to form a virtual disk, which appears to a host system as a single logical unit or drive. A RAID array thus offers a mechanism of storing data on multiple independent physical disks for the purposes of data redundancy, performance improvement, and fault tolerance. RAID can provide higher throughput levels than a single hard drive or group of independent hard drives.
Brief Description of the Drawings
[002] For a better understanding of the solution, embodiments will now be described, purely by way of example, with reference to the accompanying drawings, in which:
[003] FIG. 1 is a block diagram of an example system for storing excess data in a RAID 60 array;
[004] FIG. 2 is a block diagram of an example computer system for storing excess data in a RAID 60 array;
[005] FIG. 3 illustrates an example RAID 60 array for storing excess data;
[006] FIG. 4 illustrates an example RAID 60 array with an additional disk;
[007] FIG. 5 is a flow chart of an example method for storing excess data in a RAID 60 array; and
[008] FIG. 6 is a block diagram of an example system for storing excess data in a RAID 60 array.
Detailed Description
[009] RAID is an architecture designed to improve data availability by using an array of disks. There are different RAID types, or levels, which determine how data is distributed across the drives. Each RAID level has specific data protection and system performance characteristics. Different RAID types are named by the word RAID followed by a number (e.g. RAID 0, RAID 1). RAID levels 0, 1, 3, 5, and 6 are the most commonly implemented RAID levels. Each level tries to provide a different balance between goals such as reliability, performance, and capacity.
[0010] Standard RAID levels such as 0, 1, 3, 5, and 6 may be combined to gain performance, additional redundancy, or both. These combined levels may be referred to as nested RAID levels or hybrid RAID. Some examples of nested RAID levels include RAID 01, RAID 10, RAID 50, and RAID 60.
[0011] RAID 60, also referred to as level 6+0, combines multiple RAID 6 sets with RAID 0. RAID 60 combines features of both RAID 6 and RAID 0. For instance, RAID 60 uses the data striping feature of RAID 0. In other words, RAID 60 may use the striping technique of RAID 0 to gain performance. In this technique, data is broken down into byte- or block-level units ("stripes"), and each byte (or block) is written to a separate disk in an array. The size of each stripe may be user defined. RAID 60 also uses the double distributed parity feature of RAID 6. RAID 6 provides data redundancy by using data striping in combination with parity information. Dual parity allows the failure of two disks in each RAID 6 array. In other words, RAID 6 uses two physical disks to maintain parity such that each stripe in the disk group maintains two disk blocks with parity information. The additional parity provides data protection in case two disks fail. RAID 60 may be used in mission-critical operations, medium-sized transactions, or data-intensive computing. This spanned RAID level provides better data protection with dual parity and better performance.
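The striping technique described in the paragraph above can be sketched in a few lines of Python. This is an illustrative model only (the function name and the list-of-lists disk representation are assumptions, not anything specified by the disclosure):

```python
def stripe(data: bytes, num_disks: int, stripe_size: int) -> list[list[bytes]]:
    """Split data into stripe units and assign them to disks round-robin, RAID 0 style."""
    disks: list[list[bytes]] = [[] for _ in range(num_disks)]
    units = [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]
    for idx, unit in enumerate(units):
        disks[idx % num_disks].append(unit)  # sequential rotating placement
    return disks

# Eight bytes striped in 2-byte units across four disks:
layout = stripe(b"ABCDEFGH", num_disks=4, stripe_size=2)
# disk 0 holds b"AB", disk 1 b"CD", disk 2 b"EF", disk 3 b"GH"
```

Because successive units land on different disks, reads and writes of large data can proceed on all disks in parallel, which is where the performance gain comes from.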
[0012] If the data capacity of a RAID 60 array, for example in an online environment, is less than the incoming data meant to be stored on the RAID 60 array, the RAID 60 array may reject the data that exceeds its available storage capacity. Needless to say, this scenario is not desirable from an organization's perspective, since it may lead to crucial data loss.
[0013] To address this issue, the present disclosure describes various examples for storing excess data in a RAID 60 array. In an example, a determination may be made whether data received for storing in a RAID 60 array exceeds the available data storage capacity of the RAID 60 array. If it is determined that the data received for storing in the RAID 60 array exceeds the available data storage capacity of the RAID 60 array, the excess data may be stored in a parity disk of the RAID 60 array. The proposed examples maintain data integrity and data reliability, i.e. excess data may be stored in a spanned RAID 60 array, for instance during run time.
[0014] FIG. 1 is a block diagram of an example system 100 for storing excess data in a RAID 60 array. System 100 may include a computing device 102 and a RAID 60 array 104. Computing device 102 may be communicatively coupled to RAID 60 array 104 via a suitable interface that may use one or more communication protocols (mentioned below). In an example, computing device 102 may be included in RAID 60 array 104.
[0015] In an example, computing device 102 may include a processor and a memory. The processor may be any type of Central Processing Unit (CPU), microprocessor, or processing logic that interprets and executes machine-readable instructions stored in memory. In an example, the processor may handle movement of data as well as other data-related functions related to RAID 60 array 104.
[0016] The memory may be a random access memory (RAM), Synchronous DRAM (SDRAM), Double Data Rate (DDR), Rambus DRAM (RDRAM), Rambus RAM, and the like. In an example, the memory may be a non-transitory memory.
[0017] Computing device 102 may include an interface to communicate with a host computer (not shown) via one or more protocols such as AT attachment (ATA), Serial ATA, Small Computer System Interface (SCSI), Fibre Channel (FC), and the like. Computing device may include another interface to communicate with a storage system such as RAID 60 array 104 via one or more protocols such as Fibre Connection (FICON), Internet Small Computer System Interface (iSCSI), HyperSCSI, ATA over Ethernet, and the like.
[0018] Computing device 102 may be a device that manages the physical disk drives of RAID 60 array 104 and presents them to a host as logical units. In other words, computing device 102 may manage the physical storage units in RAID array 104 and share them with a host as logical units (for example, although multiple physical disks may be used in RAID array 104, a host may see only one logical drive). Computing device 102 may perform a wide range of functions related to RAID 60 array 104. In an example, computing device 102 may be a storage array controller or host bus adaptor (HBA).
[0019] Some non-limiting examples of functions performed by computing device 102 may include configuring disks per a desired RAID level in RAID array 104, performing RAID disk read and/or write operations, writing parity data in RAID array 104, and writing data in disk stripes depending on the RAID level.
[0020] In the example of FIG. 1, computing device 102 may include a data module 106 and a storage module 108. These modules are described in detail below.
[0021] RAID 60 array 104 may include a plurality of storage disks (for example, S1, S2, S3, S4, S5, S6, S7, and S8) that may combine multiple RAID 6 sets with RAID 0. Some non-limiting examples of the storage disks may include Serial Advanced Technology Attachment (SATA) disk drives, Fibre Channel (FC) disk drives, Serial Attached SCSI (SAS) disk drives, optical disks, solid state disks, magnetic tape drives, DVD disks, and CD-ROM disks. In an example, to construct a RAID 60 array 104, one or more RAID 6 sets may be combined, and these sets may then be aggregated at a higher level into a RAID 0. A RAID 60 array 104 includes redundancy and may withstand the loss of up to two disks in each parity set.
[0022] In the example of FIG. 1, RAID 60 array 104 may include two RAID 6 sets 110 and 112. RAID 60 array 104 may be created by striping data over more than one span of physical disks that form RAID 6. FIG. 1 shows two spans of physical disks (i.e. two RAID 6 sets). Each RAID 6 subset 110 and 112 requires at least four disks: two disks are used for data and two disks are used for parity. In other examples, RAID 60 array 104 may include additional disks. RAID 60 array 104 may stripe data across each RAID 6 subset.
[0023] RAID 60 array 104 may be a striped disk array. In other words, RAID 60 array 104 may use the striping technique of RAID 0 to gain performance. In this technique, data is broken down into byte- or block-level units ("stripes"), and each byte (or block) is written to a separate disk in an array. The size of each stripe may be user defined. In the example of FIG. 1, striping may be performed by storing bytes or blocks of incoming or received data across a disk span, i.e. S1, S2, S3, and S4, in a sequential rotating manner. For instance, data D1, D2, P (parity data), and Q (parity data) may form a data stripe 114 across disks S1, S2, S3, and S4. Likewise, data D6, P (parity data), Q (parity data), D7, and D8 may form another data stripe across disks S1, S2, S3, and S4. Similarly, data D3, D4, P (parity data), and Q (parity data) may form another data stripe 116 across disks S5, S6, S7, and S8.
[0024] RAID 60 array 104 may use the double distributed parity feature of RAID 6. RAID 6 provides data redundancy by using data striping in combination with parity information. Dual parity allows the failure of two disks in each RAID 6 array. In other words, RAID 6 uses two physical disks to maintain parity such that each stripe in the disk group maintains two disk blocks with parity information. The additional parity provides data protection in case two disks fail. In the example of FIG. 1, "P" and "Q" may represent parity data. A sequence of parity data across a span of disks may be referred to as a "parity arm" or "parity disk". For example, parity data P across disks S1, S2, S3, and S4 may constitute a parity arm or parity disk. Likewise, parity data Q across disks S1, S2, S3, and S4 may constitute another parity arm or parity disk. Each span of RAID 6 disks may include at least two parity arms.
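To make the parity arms concrete, the following sketch computes a P parity block as the bytewise XOR of a stripe's data blocks. This is an illustration under stated assumptions: RAID 6's second parity, Q, uses a different code (typically Reed-Solomon over GF(2^8)) and is omitted here, and the function name is hypothetical.

```python
from functools import reduce

def xor_parity(blocks: list[bytes]) -> bytes:
    """Compute a P parity block as the bytewise XOR of equal-length data blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

d1, d2 = b"\x0f\xf0", b"\x33\x55"
p = xor_parity([d1, d2])
# A lost data block is recoverable by XOR-ing the parity with the survivors:
recovered_d1 = xor_parity([p, d2])
```

The same XOR relation is what lets a RAID 6 span tolerate disk failures: any one missing block in a stripe can be reconstructed from the remaining blocks and P, and the Q code covers a second simultaneous failure.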
[0025] FIG. 2 is a block diagram of an example computing device 200 for storing excess data in a RAID 60 array (for example, 104). In an example, computing device 200 may be similar to computing device 102 described above. Accordingly, components of computing device 200 that are similarly named and illustrated in computing device 102 may be considered similar. In the example of FIG. 2, computing device 200 may include a data module and a storage module. In an example, the aforesaid components of computing device 200 may be implemented as machine-readable instructions stored on a machine-readable storage medium. The machine-readable storage medium storing such instructions may be integrated with computing device 200, or it may be an external medium accessible to computing device 200.
[0026] The term "module" may refer to a software component (machine-executable instructions), a hardware component, or a combination thereof. A module may include, by way of example, components such as software components, processes, tasks, co-routines, functions, attributes, procedures, drivers, firmware, data, databases, data structures, Application Specific Integrated Circuits (ASICs), and other computing devices. A module may reside on a volatile or non-volatile storage medium and be configured to interact with a processor of a computing device.
[0027] Data module 106 may determine whether data received for storing in a RAID 60 array exceeds the available data storage capacity of the RAID 60 array. In an example, the RAID 60 array may be present online or available in real time. In such a case, data module 106 may determine whether incoming data may exceed the available storage capacity of the RAID 60 array. In other words, data module 106 may determine whether the data capacity of the RAID 60 array is less than the amount of incoming data, which may occur, for instance, during an inflow of high data traffic in a real-time scenario.
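The capacity check the data module performs can be expressed as a small helper. A minimal sketch under assumed names (the patent does not specify an interface); it reproduces the 50 GB / 40 GB example given in the next paragraph.

```python
GB = 1024 ** 3  # bytes per gigabyte

def excess_bytes(incoming_size, capacity, used):
    """Return how many bytes of incoming data exceed the free capacity
    of the array (0 if the write fits entirely)."""
    free = capacity - used
    return max(0, incoming_size - free)

# A 50 GB write against 40 GB of free space leaves 10 GB of excess data.
print(excess_bytes(50 * GB, 40 * GB, 0) // GB)  # → 10
```

Only when this function returns a nonzero value would the storage module be invoked to place the overflow into a parity arm.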
[0028] If the data received for storing in a RAID 60 array exceeds the available data storage capacity of the RAID 60 array, storage module 108 may store the excess data in a parity arm (or parity disk) of the RAID 60 array. In other words, if the data capacity of a RAID 60 array is less than the amount of incoming data, storage module 108 may store the excess data in a parity arm (or parity disk) of the RAID 60 array. As mentioned earlier, a RAID 60 array may include two parity disks for each RAID 6 subset in the RAID 60 array. Storage module 108 may use one of the two parity disks to store the excess data. In an instance, storage module 108 may store the excess data in a parity disk by overwriting or replacing existing parity data in the parity disk of the RAID 60 array. For example, if the data received for storing in a RAID 60 array is 50 GB but the existing capacity of the RAID 60 array is 40 GB, storage module 108 may store the excess 10 GB in one of the parity arms of the RAID 60 array.
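Overwriting one parity arm with the excess blocks, stripe by stripe, might look like the following sketch. All names are hypothetical: a real storage module would operate on disk blocks through a RAID controller, not on Python lists, and the patent does not prescribe this interface.

```python
def store_excess_in_parity_arm(stripes, excess_blocks, arm="Q"):
    """Overwrite the slots of one parity arm ('P' or 'Q') with excess
    data blocks, one block per stripe; the other arm keeps its parity."""
    remaining = list(excess_blocks)
    for stripe in stripes:
        if not remaining:
            break
        slot = stripe.index(arm)          # the arm's rotated position here
        stripe[slot] = remaining.pop(0)   # replace parity with excess data
    if remaining:
        raise IOError("parity arm cannot hold all of the excess data")
    return stripes

# One span laid out as in FIG. 1, with two excess blocks E1 and E2.
span = [["D1", "D2", "P", "Q"],
        ["Q", "D5", "D6", "P"]]
print(store_excess_in_parity_arm(span, ["E1", "E2"]))
```

After the call, every Q slot holds excess data while the P arm still carries parity, so the span degrades from dual parity (RAID 6) to single parity rather than losing protection entirely.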
[0029] In an example, storage module 108 may store sequential blocks of incoming excess data at the current positions of parity data, in one of the parity arms, across all the disks in a span (or spans) of RAID 6 arrays in a RAID 60 array. In another instance, storage module 108 may store the excess data in one of the parity arms by writing the incoming data diagonally onto the storage disks of a span (or spans) of the RAID 60 array. The other parity arm of the RAID 60 array may be used for storing the parity data. FIG. 3 illustrates an example RAID 60 array 300 for storing excess data in one of the parity arms. In an example, the RAID 60 array of FIG. 3 may be similar to the RAID 60 array of FIG. 1. RAID 60 array 300 includes two spans of RAID 6. Each span includes four storage disks with the double distributed parity feature of RAID 6. Each RAID 6 span may provide data redundancy by using data striping in combination with parity information. Each RAID 6 span uses two physical disks to maintain parity such that each stripe in the disk group maintains two disk blocks with parity information.
[0030] In the example of FIG. 3, striping may be performed by storing bytes or blocks of incoming or received data across eight disks, i.e. S1, S2, S3, S4, S5, S6, S7, and S8, in a sequential rotating manner. For instance, data D1, D2, P (parity data), and Q (parity data) may form a data stripe 114 across disks S1, S2, S3, and S4. Likewise, data D6, P (parity data), Q (parity data), D7, and D8 may form another data stripe across disks S1, S2, S3, and S4. Similarly, data D3, D4, P (parity data), and Q (parity data) may form another data stripe 116 across disks S5, S6, S7, and S8.
[0031] In the example of FIG. 3, "P" and "Q" may represent parity data. Parity data P across disks S1, S2, S3, and S4 may constitute a parity arm or parity disk. Likewise, parity data Q across disks S1, S2, S3, and S4 may constitute another parity arm or parity disk. Each span of RAID 6 disks may include at least two parity arms. If data in excess of the available storage capacity is received by RAID 60 array 300, the storage module may use one of the two parity disks to store the excess data. In an instance, the storage module may store the excess data in parity disk 302 by overwriting or replacing existing parity data in parity disk 302 of the RAID 60 array.
[0032] In an example, computing device 200 may include an alarm module. The alarm module may generate an alarm or a system event once the storage module stores the excess data in a parity arm of the online RAID 60 array (for example, 104). The alarm may alert a user (for example, a system administrator) that a RAID 60 level has been broken by the addition of excess data to one of the parity arms. In an example, once a user becomes aware that the excess data has been written onto one of the parity arms of a RAID 60 array (for example, 104), the user may insert an additional disk (or disks) into the RAID 60 array (for example, 104) to reconstruct a state of the RAID 60 array that existed prior to storing the excess data in one of the parity disks of the RAID 60 array. This is illustrated in FIG. 4. An additional storage disk (or disks), for example, disks S3 and S8, may be included in a RAID 60 array (for example, 104), for instance, to reconstruct an earlier state of the storage volume of the RAID 60 array that existed prior to storing excess data in one of the parity disks of the RAID 60 array. In another instance, an additional disk may be inserted in each span of the RAID 60 array to create a storage volume according to an earlier RAID 60 level.
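A minimal sketch of what the alarm module's event might look like. The message text and function names are assumptions for illustration; the patent specifies only that an alarm or system event alerts the administrator.

```python
import logging

def degraded_alarm_message(array_id, arm):
    """Build the administrator-facing message for a broken RAID 60 level."""
    return (f"RAID 60 array {array_id}: parity arm {arm} overwritten with "
            f"excess data; insert additional disk(s) to restore the earlier "
            f"RAID 60 level.")

def raise_degraded_alarm(array_id, arm):
    """Emit the alarm as a warning-level system event."""
    logging.warning(degraded_alarm_message(array_id, arm))

raise_degraded_alarm("104", "Q")
```

The event marks the point at which the administrator should add the replacement disk(s) described above, so the controller can rebuild the overwritten parity arm and return the array to its prior RAID 60 level.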
[0033] FIG. 5 is a flow chart of an example method 500 for storing excess data in a RAID 60 array. Method 500, described below, may be executed on a computing device such as computing device 102 of FIG. 1 or computing device 200 of FIG. 2; however, other computing systems may be used as well. At block 502, a determination may be made, for example by data module 106, whether data received for storing in a RAID 60 array exceeds the available data storage capacity of the RAID 60 array (for example, 104). At block 504, if the data received for storing in the RAID 60 array exceeds the available data storage capacity of the RAID 60 array, the excess data may be stored, for example by storage module 108, in a parity disk of the RAID 60 array (for example, 104).
[0034] FIG. 6 is a block diagram of an example system 600 for storing excess data in a RAID 60 array. System 600 includes a processor 602 and a machine-readable storage medium 604 communicatively coupled through a system bus. Processor 602 may be any type of Central Processing Unit (CPU), microprocessor, or processing logic that interprets and executes machine-readable instructions stored in machine-readable storage medium 604. Machine-readable storage medium 604 may be a random access memory (RAM) or another type of dynamic storage device that may store information and machine-readable instructions that may be executed by processor 602. For example, machine-readable storage medium 604 may be Synchronous DRAM (SDRAM), Double Data Rate (DDR) RAM, Rambus DRAM (RDRAM), Rambus RAM, etc., or storage memory media such as a floppy disk, a hard disk, a CD-ROM, a DVD, a pen drive, and the like. In an example, machine-readable storage medium 604 may be a non-transitory machine-readable medium. In an example, machine-readable storage medium 604 may be remote but accessible to system 600.
[0035] Machine-readable storage medium 604 may store instructions 606 and 608. In an example, instructions 606 may be executed by processor 602 to determine whether data received for storing in a RAID 60 array exceeds the available data storage capacity of the RAID 60 array. Instructions 608 may be executed by processor 602 to write excess data to one of the two parity arms present in the RAID 60 array, if the data received for storing in the RAID 60 array exceeds the available data storage capacity of the RAID 60 array.
[0036] For the purpose of simplicity of explanation, the example method of FIG. 5 is shown as executing serially; however, it is to be understood and appreciated that the present and other examples are not limited by the illustrated order. The example systems of FIGS. 1, 2 and 6, and the method of FIG. 5, may be implemented in the form of a computer program product including computer-executable instructions, such as program code, which may be run on any suitable computing device in conjunction with a suitable operating system (for example, Microsoft Windows, Linux, UNIX, and the like). Embodiments within the scope of the present solution may also include program products comprising non-transitory computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, such computer-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM, magnetic disk storage or other storage devices, or any other medium which can be used to carry or store desired program code in the form of computer-executable instructions and which can be accessed by a general purpose or special purpose computer. The computer-readable instructions can also be accessed from memory and executed by a processor.

[0037] It may be noted that the above-described examples of the present solution are for the purpose of illustration only. Although the solution has been described in conjunction with a specific embodiment thereof, numerous modifications may be possible without materially departing from the teachings and advantages of the subject matter described herein. Other substitutions, modifications and changes may be made without departing from the spirit of the present solution.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.

Claims

1. A method of storing excess data in a RAID 60 array, comprising:
determining whether data received for storing in a RAID 60 array exceeds available data storage capacity of the RAID 60 array; and
if the data received for storing in the RAID 60 array exceeds the available data storage capacity of the RAID 60 array, storing excess data in a parity disk of the RAID 60 array.
2. The method of claim 1, wherein storing the excess data in the parity disk comprises overwriting parity data in the parity disk of the RAID 60 array.
3. The method of claim 1, wherein storing the excess data in the parity disk comprises writing the excess data across disks in each span of the RAID 60 array.
4. The method of claim 1, further comprising including an additional disk in the RAID 60 array to reconstruct a state of the RAID 60 array that existed prior to storing the excess data in the parity disk of the RAID 60 array.
5. The method of claim 1, further comprising including an additional disk in each span of the RAID 60 array to create a storage volume according to an earlier RAID 60 level.
6. A system for storing excess data in an online RAID 60 array, comprising: a data module to determine whether incoming data for storing in an online RAID 60 array exceeds available data storage capacity of the online RAID 60 array; and
a storage module to store excess data in a parity arm of the online RAID 60 array, if the incoming data for storing in the online RAID 60 array exceeds the available data storage capacity of the online RAID 60 array.
7. The system of claim 6, wherein the storage module is to store the excess data in the parity arm by writing the incoming data diagonally onto storage disks of the online RAID 60 array.
8. The system of claim 6, further comprising an alarm module to generate an alarm upon storing the excess data in the parity arm of the online RAID 60 array.
9. The system of claim 6, wherein to store the excess data in the parity arm comprises replacing parity data with the excess data in the parity arm of the RAID 60 array.
10. The system of claim 6, wherein to store the excess data in the parity arm comprises writing the excess data at existing locations of parity data across all disks of the RAID 60 array.
11. A non-transitory machine-readable storage medium comprising instructions for storing excess data in a RAID 60 array, the instructions executable by a processor to:
determine whether data received for storing in a RAID 60 array exceeds available data storage capacity of the RAID 60 array; and
if the data received for storing in the RAID 60 array exceeds the available data storage capacity of the RAID 60 array, write excess data to one of two parity arms present in the RAID 60 array.
12. The storage medium of claim 11, further comprising instructions to generate an alert event upon writing the excess data to one of two parity arms present in the RAID 60 array.
13. The storage medium of claim 11, wherein the RAID 60 array is present online.
14. The storage medium of claim 11, further comprising instructions to store parity data in the other parity arm of the RAID 60 array.
15. The storage medium of claim 11, wherein to store the excess data in the parity arm comprises to overwrite parity data in the parity arm of the RAID 60 array.
PCT/US2015/010155 2014-11-04 2015-01-05 Storing excess data in a raid 60 array WO2016073018A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN5529CH2014 2014-11-04
IN5529/CHE/2014 2014-11-04

Publications (1)

Publication Number Publication Date
WO2016073018A1 (en) 2016-05-12

Family

ID=55909571

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/010155 WO2016073018A1 (en) 2014-11-04 2015-01-05 Storing excess data in a raid 60 array

Country Status (1)

Country Link
WO (1) WO2016073018A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070276999A1 (en) * 2003-11-26 2007-11-29 Golding Richard A Adaptive grouping in object raid
US20100312961A1 (en) * 2009-06-05 2010-12-09 Sunny Koul Method and system for storing excess data in a redundant array of independent disk level 6
US20110283064A1 (en) * 2007-10-19 2011-11-17 Hitachi, Ltd. Storage apparatus and data storage method using the same
WO2013142646A1 (en) * 2012-03-23 2013-09-26 DSSD, Inc. Method and system for multi-dimensional raid
US20140108617A1 (en) * 2012-07-12 2014-04-17 Unisys Corporation Data storage in cloud computing



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15857257; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15857257; Country of ref document: EP; Kind code of ref document: A1)