US20060206665A1 - Accelerated RAID with rewind capability - Google Patents

Accelerated RAID with rewind capability

Info

Publication number: US20060206665A1
Authority: US
Grant status: Application
Prior art keywords: data, cache, log, area, controller
Legal status: Abandoned (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Application number: US11433152
Inventor: Tim Orsley
Current Assignee: Quantum Corp (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Quantum Corp

Classifications

    • G06F11/1076: Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G06F11/1441: Resetting or repowering (saving, restoring, recovering or retrying at system level)
    • G06F11/2066: Optimisation of the communication load (persistent mass storage redundancy by mirroring)
    • G06F11/2069: Management of state, configuration or failover (persistent mass storage redundancy by mirroring)
    • G06F2211/1004: Adaptive RAID, i.e. RAID system adapts to changing circumstances, e.g. RAID1 becomes RAID5 as disks fill up
    • G06F2211/103: Hybrid, i.e. RAID systems with parity comprising a mix of RAID types
    • Y10S707/99953: Recoverability (file or database maintenance; coherency)
    • Y10S707/99955: Archiving or backup (file or database maintenance; coherency)

Abstract

A method for storing data in a fault-tolerant storage subsystem having an array of failure independent data storage units, by dividing the storage area on the storage units into a logical mirror area and a logical stripe area, such that when storing data in the mirror area, duplicating the data by keeping a duplicate copy of the data on a pair of storage units, and when storing data in the stripe area, storing data as stripes of blocks, including data blocks and associated error-correction blocks.

Description

    FIELD OF THE INVENTION
  • The present invention relates to data protection in data storage devices, and in particular to data protection in disk arrays.
  • BACKGROUND OF THE INVENTION
  • Storage devices of various types are utilized for storing information such as in computer systems. Conventional computer systems include storage devices such as disk drives for storing information managed by an operating system file system. With decreasing costs of storage space, an increasing amount of data is stored on individual disk drives. However, in case of disk drive failure, important data can be lost. To alleviate this problem, some fault-tolerant storage devices utilize an array of redundant disk drives (RAID).
  • In typical data storage systems including storage devices such as primary disk drives, the data stored on the primary storage devices is backed up from time to time to secondary storage devices such as tape. However, any change to the data on the primary storage devices before the next back-up can be lost if one or more of the primary storage devices fail.
  • True data protection can be achieved by keeping a log of all writes to a storage device, on a data block level. In one example, a user data set and a write log are maintained, wherein the data set has been completely backed up and thereafter a log of all writes is maintained. The backed-up data set and the write log allow returning to any state of the data set prior to the current state, by restoring the backed-up (baseline) data set and then executing all writes from the log up until the desired time.
  • To protect the log file itself, RAID configured disk arrays provide protection against data loss by protecting against a single disk drive failure. Protecting the log file stream using RAID has been achieved by either a RAID mirror (known as RAID-1) shown by example in FIG. 1, or a RAID stripe (known as RAID-5) shown by example in FIG. 2. In the RAID mirror 10 including several disk drives 12, two disk drives store the data of one independent disk drive. In the RAID stripe 14, n+1 disk drives 12 are required to store the data of n independent disk drives (e.g., in FIG. 2, a stripe of five disk drives stores the data of four independent disk drives). The example RAID mirror 10 in FIG. 1 includes an array of eight disk drives 12 (e.g., drive0-drive7), wherein each disk drive 12 has e.g. 100 GB capacity. In each disk drive 12, half the capacity is used for user data, and the other half for mirror data. As such, the user data capacity of the disk array 10 is 400 GB and the other 400 GB is used for mirror data. In this example mirror configuration, drive1 protects drive0 data (M0), drive2 protects drive1 data (M1), etc. If drive0 fails, then the data M0 in drive1 can be used to recreate data M0 in drive0, and the data M7 in drive7 can be used to recreate data M7 of drive0. As such, no data is lost in case of a single disk drive failure.
  • Referring back to FIG. 2, a RAID stripe configuration effectively groups capacity from all but one of the disk drives in the disk array 14 and writes the parity (XOR) of that capacity on the remaining disk drive (or across multiple drives as shown). In the example of FIG. 2, the disk array 14 includes five disk drives 12 (e.g., drive0-drive4), each disk drive 12 having e.g. 100 GB capacity, divided into 5 sections. The blocks S0-S3 in the top portions of drive0-drive3 are for user data, and a block of drive4 is for parity data (i.e., XOR of S0-S3). In this example, the RAID stripe capacity is 400 GB for user data and 100 GB for parity data. The parity area is distributed among the disk drives 12 as shown. Spreading the parity data across the disk drives 12 allows spreading the task of reading the parity data over several disk drives as opposed to just one disk drive. Writing on a disk drive in a stripe configuration requires that the disk drive holding parity be read, a new parity be calculated, and the new parity written over the old parity. This requires a disk revolution and increases the write latency. The increased write latency decreases the throughput of the storage device 14.
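  • For illustration only (this sketch is not part of the patent text, and the block contents and drive assignments are made up), the parity relationship of FIG. 2 can be expressed in a few lines of Python: the parity block is the byte-wise XOR of the data blocks, and any single lost block is rebuilt by XOR-ing the surviving blocks with the parity.

        def xor_blocks(blocks):
            # Byte-wise XOR of equal-sized blocks.
            out = bytearray(len(blocks[0]))
            for blk in blocks:
                for i, b in enumerate(blk):
                    out[i] ^= b
            return bytes(out)

        data = [b"S0__", b"S1__", b"S2__", b"S3__"]      # four data blocks (drive0-drive3)
        parity = xor_blocks(data)                        # parity block (drive4)

        survivors = [data[0], data[2], data[3], parity]  # suppose drive1 fails
        assert xor_blocks(survivors) == data[1]          # its block is recovered by XOR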
  • On the other hand, the RAID mirror configuration (“mirror”) allows writing the log file stream to disk faster than the RAID stripe configuration (“stripe”). A mirror is faster than a stripe since in the mirror, each write activity is independent of other write activities, in that the same block can be written to the mirroring disk drives at the same time. However, a mirror configuration requires that the capacity to be protected be matched on another disk drive. This is costly as the capacity to be protected must be duplicated, requiring double the number of disk drives. A stripe reduces such capacity to 1/n where n is the number of disk drives in the disk drive array. As such, protecting data with parity across multiple disk drives makes a stripe slower than a mirror, but more cost effective.
  • There is, therefore, a need for a method and system of providing cost effective data protection with better data read/write performance than a conventional RAID system. There is also a need for such a system to provide the capability of returning to a desired previous data state.
  • BRIEF SUMMARY OF THE INVENTION
  • The present invention satisfies these needs. In one embodiment, the present invention provides a method for storing data in a fault-tolerant storage subsystem having an array of failure independent data storage units, by dividing the storage area on the storage units into a hybrid of a logical mirror area (i.e., RAID mirror) and a logical stripe area (i.e., RAID stripe). When storing data in the mirror area, the data is duplicated by keeping a duplicate copy of the data on a pair of storage units, and when storing data in the stripe area, the data is stored as stripes of blocks, including data blocks and associated error-correction blocks.
  • In one version of the present invention, a log file stream is maintained as a log cache in the RAID mirror area for writing data from a host to the storage subsystem, and then data is transferred from the log file in the RAID mirror area to the final address in the RAID stripe area, preferably as a background task. In doing so, the aforementioned write latency performance penalty associated with writes to a RAID stripe can be masked from the host.
  • To further enhance performance, according to the present invention, a memory cache (RAM cache) is added in front of the log cache, wherein incoming host blocks are first written to RAM cache quickly and the host is acknowledged. The host perceives a faster write cycle than is possible if the data were written to a data storage unit while the host waited for an acknowledgement. This further enhances the performance of the above hybrid RAID subsystem.
  • While the data is en-route to a data storage unit through the RAM cache, power failure can result in data loss. As such, according to another aspect of the present invention, a flashback module (backup module) is added to the subsystem to protect the RAM cache data. The flashback module includes a non-volatile memory, such as flash memory, and a battery. During normal operations, the battery is trickle charged. Should any power failure then occur, the battery provides power to transfer the contents of the RAM cache to the flash memory. Upon restoration of power, the flash memory contents are transferred back to the RAM cache, and normal operations resume.
  • Read performance is further enhanced by pressing a data storage unit (e.g., disk drive) normally used as a spare data storage unit (“hot spare”) in the array, into temporary service in the hybrid RAID system. In a conventional RAID subsystem, any hot spare lies dormant but ready to take over if one of the data storage units in the array should fail. According to the present invention, rather than lying dormant, the hot spare can be used to replicate the data in the mirrored area of the hybrid RAID subsystem. Should any data storage unit in the array fail, this hot spare could immediately be delivered to take the place of that failed data storage unit without increasing exposure to data loss from a single data storage unit failure. However, while all the data storage units of the array are working properly, the replication of the mirror area would make the array more responsive to read requests by allowing the hot spare to supplement the mirror area.
  • The mirror area acts as a temporary store for the log, prior to storing the write data in its final location in the stripe area. In another version of the present invention, prior to purging the data from the mirror area, the log can be written sequentially to an archival storage medium such as tape. If a baseline backup of the entire RAID subsystem stripe is created just before the log files are archived, each successive state of the RAID subsystem can be recreated by re-executing the write requests within the archived log files. This would allow any earlier state of the stripe of the RAID subsystem to be recreated (i.e., infinite roll-back or rewind). This is beneficial in allowing recovery from e.g. user error such as accidentally erasing a file, from a virus infection, etc.
  • As such, the present invention provides a method and system of providing cost effective data protection with better data read/write performance than a conventional RAID system, and also provides the capability of returning to a desired previous data state.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These and other features, aspects and advantages of the present invention will become understood with reference to the following description, appended claims and accompanying figures where:
  • FIG. 1 shows a block diagram of an example disk array configured as a RAID mirror;
  • FIG. 2 shows a block diagram of an example disk array configured as a RAID stripe;
  • FIG. 3A shows a block diagram of an example hybrid RAID data organization in a disk array according to an embodiment of the present invention;
  • FIG. 3B shows an example flowchart of an embodiment of the steps of data storage according to the present invention;
  • FIG. 3C shows a block diagram of an example RAID subsystem logically configured as a hybrid RAID stripe and mirror, according to the hybrid RAID data organization of FIG. 3A;
  • FIG. 4A shows an example data set and a log of updates to the data set after a back-up;
  • FIG. 4B shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
  • FIG. 4C shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
  • FIG. 5A shows another block diagram of the disk array of FIGS. 3A and 3B, further including a flashback module according to the present invention;
  • FIG. 5B shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
  • FIG. 5C shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
  • FIG. 6A shows a block diagram of another example hybrid RAID data organization in a disk array including a hot spare used as a temporary RAID mirror according to the present invention;
  • FIG. 6B shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
  • FIG. 6C shows a block diagram of an example RAID subsystem logically configured as the hybrid RAID data organization of FIG. 6A that further includes a hot spare used as a temporary RAID mirror;
  • FIG. 7A shows a block diagram of another disk array including a hybrid RAID data organization using stripe and mirror configurations, and further including a hot spare as a redundant mirror and a flashback module, according to the present invention;
  • FIG. 7B shows a block diagram of another disk array including hybrid RAID data organization using stripe and mirror configurations, and further including a hot spare as a redundant mirror and a flashback module, according to the present invention;
  • FIG. 8A shows an example of utilizing a hybrid RAID subsystem in a storage area network (SAN), according to the present invention;
  • FIG. 8B shows an example of utilizing a hybrid RAID as a network attached storage (NAS), according to the present invention; and
  • FIG. 8C shows an example flowchart of another embodiment of the steps of data storage according to the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Referring to FIG. 3A, an example fault-tolerant storage subsystem 16 having an array of failure independent data storage units 18, such as disk drives, using a hybrid RAID data organization according to an embodiment of the present invention is shown. The data storage units 18 can be other storage devices, such as e.g. optical storage devices, DVD-RAM, etc. As discussed, protecting data with parity across multiple disk drives makes a RAID stripe slow but cost effective. A RAID mirror provides better data transfer performance because the target sector is simultaneously written on two disk drives, but requires that the capacity to be protected be matched on another disk drive. A RAID stripe reduces such capacity to 1/n, where n is the number of drives in the disk array, but in a RAID stripe both the target and the parity sectors must be read and then written, causing write latency.
  • In the example of FIG. 3A, an array 17 of six disk drives 18 (e.g., drive0-drive5) is utilized for storing data from, and reading data back to, a host system, and is configured to include both a RAID mirror data organization and a RAID stripe data organization according to the present invention. In the disk array 17, the RAID mirror (“mirror”) configuration provides a performance advantage when transferring data to disk drives 18 using e.g. a log file stream approach, and the RAID stripe (“stripe”) configuration provides cost effectiveness by using the stripe organization for general purpose storage of user data sets.
  • Referring to the example steps in the flowchart of FIG. 3B, according to an embodiment of the present invention, this is achieved by dividing the capacity of the disk array 17 of FIG. 3A into at least two areas (segments), including a mirror area 20 and a stripe area 22 (step 100). A data set 24 is maintained in the stripe area 22 (step 102), and an associated log file/stream 26 is maintained in the mirror area 20 (step 104). The log file 26 is maintained as a write log cache in the mirror area 20, such that upon receiving a write request from a host, the host data is written to the log file 26 (step 106), and then data is transferred from the log file 26 in the mirror area 20 to a final address in the data set 24 in the stripe area 22 (preferably, performed as a background task) (step 108). In doing so, the aforementioned write latency performance penalty associated with writes to a RAID stripe can be masked from the host. Preferably, the log is backed-up to tape continually or on a regular basis (step 110). The above steps are repeated as write requests arrive from the host. The disk array 17 can include additional hybrid RAID mirror and RAID stripe configured areas according to the present invention.
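  • The write path of steps 100-108 can be sketched as follows (illustrative only; the class name HybridArray and the in-memory structures are hypothetical stand-ins for the mirror-area log and the stripe-area data set):

        import time

        class HybridArray:
            def __init__(self):
                self.log = []      # write log cache kept in the mirror area (steps 104, 106)
                self.stripe = {}   # data set kept in the stripe area, keyed by block address (step 102)

            def host_write(self, addr, data):
                # Step 106: append the host data to the mirrored log and satisfy the host.
                self.log.append({"ts": time.time(), "addr": addr, "data": data})
                return "ack"

            def background_flush(self):
                # Step 108: later, copy logged blocks to their final stripe addresses.
                while self.log:
                    entry = self.log.pop(0)
                    self.stripe[entry["addr"]] = entry["data"]

        array = HybridArray()
        array.host_write(0, b"block 0")
        array.host_write(7, b"block 7")
        array.background_flush()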
  • Referring to FIG. 3C, the example hybrid RAID subsystem 16 according to the present invention further includes a data organization manager 28 having a RAID controller 30 that implements the hybrid data organization of FIG. 3A on the disk array 17 (e.g., an array of N disk drives 18). In the example of FIG. 3C, an array 17 of N=6 disk drives (drive0-drive5, e.g. 100 GB each) is configured such that portions of the capacity of the disk drives 18 are used as a RAID mirror for the write log cache 26 and write log cache mirror data 27 (i.e., M0-M5). The remaining portions of the capacity of the disk drives 18 are used as a RAID stripe for user data (e.g., S0-S29) and parity data (e.g., XOR0-XOR29). In this example, 400 GB of user data is stored in the hybrid RAID subsystem 16, compared to the same capacity in the RAID mirror 10 of FIG. 1 and the RAID stripe 14 of FIG. 2. The subsystem 16 communicates with a host 29 via a host interface 31. Other numbers of disk drives, with different storage capacities, can also be used in the RAID subsystem 16 of FIG. 3C, according to the present invention.
  • FIG. 4A shows an example user data set 24 and a write log 26, wherein the data set 24 has been completely backed up at e.g. midnight and thereafter a log 26 of all writes has been maintained (e.g., at times t1-t6). In this example, each write log entry 26 a includes updated data (udata) and the address (addr) in the data set where the updated data is to be stored, and a corresponding time stamp (ts). The data set at each time t1-t6 is also shown in FIG. 4A. The backed-up data set 24 and the write log 26 allows returning to the state of the data set 24 at any time before the current state of the data set (e.g., at time t6), by restoring the backed-up (baseline) data set 24 and then executing all writes from that log 26 up until that time. For example, if data for address addr=0 (e.g., logical block address 0) were updated at time t2, but then corrupted at time t5, then the data from addr=0 from time t2 can be retrieved by restoring the baseline backup and running the write log through time t2. The log file 26 is first written in the RAID mirror area 20 and then data is transferred from the log file 26 in the RAID mirror area 20 to the final address in the RAID stripe area 22 (preferably as a background task), according to the present invention.
  • As the write log 26 may grow large, it is preferably offloaded to secondary storage devices such as tape drives, to free up disk space to log more changes to the data set 24. As such, the disk array 17 (FIG. 3C) is used as a write log cache in a three-step process: (1) when the host needs to write data to a disk, rather than writing to the final destination in a disk drive, that data is first written to the log 26, satisfying the host; (2) then, when the disk drive is not busy, the data from the log 26 is transferred to the final destination data set on the disk drive, transparent to the host; and (3) the log data is backed up to e.g. tape to free up storage space to log new data from the host. The log and the final destination data are maintained in a hybrid RAID configuration as described.
  • Referring to the example steps in the flowchart of FIG. 4B, upon receiving a host read request (step 120), a determination is made if the requested data is in the write log 26, maintained as a cache in the mirror area 20 (i.e., a cache hit) (step 122), and if so, the requested data is transferred to the host 29 from the log 26 (step 124). Statistically, since recently written data is more likely to be read back than previously written data, there is a tradeoff such that the larger the log area, the higher the probability that the requested data is in the log 26 (in the mirror area 20). When reading multiple blocks from the mirror area 20, different blocks can be read from different disk drives simultaneously, increasing read performance. In step 122, if there is no log cache hit, then the stripe area 22 is accessed to retrieve the requested data to provide to the host (step 126). Stripe read performance is inferior to that of a mirror, but not as dramatically as write performance is.
  • As such, the stripe area 22 is used for flushing the write log data, thereby permanently storing the data set in the stripe area 22, and is also used to read data blocks that are not in the write log cache 26 in the mirror area 20. The hybrid RAID system 16 is an improvement over a conventional RAID stripe without a RAID mirror, since according to the present invention the most recently written data is likely in the log 26 stored in the mirror area 20, which provides a faster read than a stripe. The hybrid RAID system provides the equivalent of RAID mirror performance for all writes and for most reads, since the most recently written data is the most likely to be read back. As such, the RAID stripe 22 is only accessed to retrieve data not found in the log cache 26 stored in the RAID mirror 20, whereby the hybrid RAID system 16 essentially provides the performance of a RAID mirror at the cost effectiveness of a RAID stripe.
  • Therefore, if the stripe 22 is written to as a foreground process (e.g., in real-time), then there is a write performance penalty (i.e., the host is waiting for an acknowledgement that the write is complete). The log cache 26 permits avoidance of such real-time writes to the stripe 22. Because the disk array 17 is divided into two logical data areas (i.e., a mirrored log write area 20 and a striped read area 22), using a mirror configuration for log writes avoids the write performance penalty of a stripe. Provided the mirror area 20 is sufficiently large to hold all log writes that occur during periods of peak activity, updates to the stripe area 22 can be performed in the background. The mirror area 20 is essentially a write cache, and writing the log 26 to the mirror area 20 with background writes to the stripe area 22 allows the hybrid subsystem 16 to match mirror performance at stripe-like cost.
  • Referring to the example steps in the flowchart of FIG. 4C, to further enhance performance, according to the present invention, a cache memory (e.g., RAM write cache 32, FIG. 5A) is added in front of the log cache 26 in the disk array 17 (step 130), and as above the data set 24 and the log file 26 are maintained in the stripe area 22 and the mirror area 20, respectively (steps 132, 134). Upon receiving host write requests (step 136), incoming host blocks are first written to the RAM write cache 32 quickly and the host is acknowledged (step 138). The host perceives a faster write cycle than is possible if the data were written to disk while the host waited for an acknowledgement. This enhances the performance of a conventional RAID system and further enhances the performance of the above hybrid RAID subsystem 16. The host data in the RAM write cache 32 is copied sequentially to the log 26 in the mirror area 20 (i.e., disk mirror write cache) (step 140), and the log data is later copied to the data set 24 in the stripe area 22 (i.e., disk stripe data set) e.g. as a background process (step 142). Sequential writes to the disk mirror write cache 26 and random writes to the disk stripe data set 24 provide fast sequential writes.
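  • A rough sketch of the three-stage write path of steps 130-142 follows (illustrative only; the names and structures are hypothetical, and a real controller would destage asynchronously):

        import time

        class CachedHybridWrite:
            def __init__(self):
                self.ram_cache = []   # volatile RAM write cache; host is acknowledged from here
                self.log = []         # write log cache in the mirror area
                self.stripe = {}      # data set in the stripe area

            def host_write(self, addr, data):
                # Step 138: stage the block in RAM and acknowledge immediately.
                self.ram_cache.append({"ts": time.time(), "addr": addr, "data": data})
                return "ack"

            def destage_ram_to_log(self):
                # Step 140: copy RAM-cached blocks sequentially to the mirrored log.
                self.log.extend(self.ram_cache)
                self.ram_cache.clear()

            def flush_log_to_stripe(self):
                # Step 142: background copy from the log to the final stripe addresses.
                for entry in self.log:
                    self.stripe[entry["addr"]] = entry["data"]
                self.log.clear()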
  • However, power failure while the data is en-route to disk (e.g., to the write log cache on disk) through the RAM write cache 32 can result in data loss because RAM is volatile. Therefore, as shown in the example block diagram of another embodiment of a hybrid RAID subsystem 16 in FIG. 5A, a flashback module 34 (backup module) can be added to the disk array 17 to protect RAM cache data according to the present invention. Without the module 34, write data would not be secure until stored at its destination address on disk.
  • The module 34 includes a non-volatile memory 36 such as Flash memory, and a battery 38. Referring to the example steps in the flowchart of FIG. 5B, during normal operations, the battery 38 is trickle charged from an external power source 40 (step 150). Should any power failure then occur, the battery 38 provides the RAID controller 30 with sufficient power (step 152) to transfer the contents of the RAM write cache 32 to the flash memory 36 (step 154). Upon restoration of power, the contents of the flash memory 36 are transferred back to the RAM write cache 32, and normal operations resume (step 156). This allows acknowledging the host write request (command) once the data is written in the RAM cache 32 (which is faster than writing it to the mirror disks). Should a failure of an element of the RAID subsystem 16 preclude resumption of normal operations, the flashback module 34 can be moved to another hybrid subsystem 16 to restore data from the flash memory 36. With the flashback module 34 protecting the RAM write cache 32 against power loss, writes can be accumulated in the RAM cache 32 and written to the mirrored disk log file 26 sequentially (e.g., in the background).
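  • The power-fail behavior of steps 150-156 amounts to dumping and restoring the RAM write cache; a minimal sketch (illustrative only, with hypothetical names) is:

        class FlashbackModule:
            def __init__(self):
                self.flash = None                 # non-volatile copy of the RAM cache contents

            def on_power_fail(self, ram_cache):
                # Step 154: running on battery power, copy the RAM write cache into flash.
                self.flash = list(ram_cache)

            def on_power_restore(self, ram_cache):
                # Step 156: copy the saved contents back to RAM and resume normal operation.
                if self.flash is not None:
                    ram_cache.extend(self.flash)
                    self.flash = None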
  • To minimize the size (and the cost) of the RAM write cache 32 (and thus the corresponding size and cost of flash memory 36 in the flashback module 34), write data should be transferred to disk as quickly as possible. Since sequential throughput of a hard disk drive is substantially better than random performance, the fastest way to transfer data from the RAM write cache 32 to disk is via the log file 26 (i.e., a sequence of address/data pairs above) in the mirror area 20. This is because when writing a data block to the mirror area 20, the data block is written to two different disk drives. Depending on the physical disk address of the incoming blocks from the host to be written, the disk drives of the mirror 20 may be accessed randomly. However, as a log file is written sequentially based on entries in time, the blocks are written to the log file in a sequential manner, regardless of their actual physical location in the data set 24 on the disk drives.
  • In the above hybrid RAID system architecture according to the present invention, data requested by the host 29 from the RAID subsystem 16 can be in the RAM write cache 32, in the log cache 26 in the mirror area 20, or in the general purpose stripe area 22. Referring to the example steps in the flowchart of FIG. 5C, upon receiving a host read request (step 160), a determination is made if the requested data is in the RAM cache 32 (step 162), and if so, the requested data is transferred to the host 29 from the RAM cache 32 (step 164). If the requested data is not in the RAM cache 32, then a determination is made if the requested data is in the write log file 26 in the mirror area 20 (step 166), and if so, the requested data is transferred to the host from the log 26 (step 168). If the requested data is not in the log 26, then the data set 24 in the stripe area 22 is accessed to retrieve the requested data to provide to the host (step 169).
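  • The read lookup order of steps 160-169 (RAM cache, then mirrored log, then stripe) can be sketched as below; this is illustrative only, and the newest matching entry is taken so that the most recent write wins:

        def host_read(addr, ram_cache, log, stripe):
            # Step 162/164: check the RAM write cache first; the newest entry wins.
            for entry in reversed(ram_cache):
                if entry["addr"] == addr:
                    return entry["data"]
            # Step 166/168: next, check the write log cache in the mirror area.
            for entry in reversed(log):
                if entry["addr"] == addr:
                    return entry["data"]
            # Step 169: otherwise read from the data set in the stripe area.
            return stripe[addr]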
  • Since data in the mirror area 20 is replicated, twice the number of actuators are available to pursue read data requests, effectively doubling responsiveness. While this mirror benefit is generally recognized, the benefit may be enhanced because the mirror does not contain random data but rather data that has recently been written. As discussed, because the likelihood that data will be read is probably inversely proportional to the time since the data was written, the mirror area 20 may be more likely to contain the desired data. A further acceleration can be realized if the data is read back in the same order it was written, regardless of the potential randomness of the final data addresses, since the mirror area 20 stores data in the written order and a read in that order creates a sequential stream.
  • According to another aspect of the present invention, read performance of the subsystem 16 can further be enhanced. In a conventional RAID system, one of the disk drives in the array can be reserved as a spare disk drive (“hot spare”), wherein if one of the other disk drives in the array should fail, the hot spare is used to take the place of that failed drive. According to the present invention, read performance can be further enhanced by pressing a disk drive normally used as a hot spare in the disk array 17, into temporary service in the hybrid RAID subsystem 16. FIG. 6A shows the hybrid RAID subsystem 16 of FIG. 3A, further including a hot spare disk drive 18 a (i.e., drive6) according to the present invention.
  • Referring to the example steps in the flowchart of FIG. 6B, according to the present invention, the status of the hot spare 18 a is determined (step 170) and upon detecting the hot spare 18 a is lying dormant (i.e., not being used as a failed device replacement) (step 172), the hot spare 18 a is used to replicate the data in the mirrored area 20 of the hybrid RAID subsystem 16 (step 174). Then, upon receiving a read request from the host (step 176), it is determined if the requested data is in the hot spare 18 a and the mirror area 20 (step 178). If so, a copy of the requested data is provided to the host from the hot spare 18 a with minimum latency or from the mirror area 20, if faster (step 180). Otherwise, a copy of the requested data is provided to the host from the mirror area 20 or the stripe area 22 (step 182). Thereafter, it is determined if the hot spare 18 a is required to replace a failed disk drive (step 184). If not, the process goes back to step 176, otherwise the hot spare 18 a is used to replace the failed disk drive (step 186).
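  • As a sketch of the read dispatch in steps 176-182 (illustrative only; a real controller would compare estimated seek and rotational latencies rather than simply preferring the spare):

        def choose_read_source(addr, spare_replica, mirror_log, stripe, spare_is_replacement):
            # Step 180: serve from the replicated hot spare when it is not in use as a replacement.
            if not spare_is_replacement and addr in spare_replica:
                return ("hot spare", spare_replica[addr])
            # Step 180/182: otherwise serve from the mirror area copy, if present.
            if addr in mirror_log:
                return ("mirror", mirror_log[addr])
            # Step 182: fall back to the final location in the stripe area.
            return ("stripe", stripe[addr])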
  • As such, in FIG. 6A, should any disk drive 18 in the array 17 fail, the hot spare 18 a can immediately be delivered to take the place of that failed disk drive without increasing exposure to data loss from a single disk drive failure. For example, if drive1 fails, drive0 and drive2-drive5 can start using the spare drive6 and rebuild drive6 to contain the data of drive1 prior to the failure. However, while all the disk drives 18 of the array 17 are working properly, the replication of the mirror area 20 would make the subsystem 16 more responsive to read requests by allowing the hot spare 18 a to supplement the mirror area 20.
  • Depending upon the size of the mirrored area 20, the hot spare 18 a may be able to provide multiple redundant data copies for a further performance boost. For example, if the hot spare 18 a matches the capacity of the mirrored area 20 of the array 17, the mirrored area data can be replicated twice on the hot spare 18 a. For example, data in the hot spare 18 a can be arranged such that the data is replicated on each concentric disk track (i.e., one half of a track contains a copy of that which is on the other half of that track). In that case, the rotational latency of the hot spare 18 a in response to random requests is effectively halved (i.e., smaller read latency).
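  • A back-of-the-envelope check of the halved rotational latency (assuming a hypothetical 7,200 r.p.m. drive; the patent does not give a spindle speed):

        rpm = 7200
        ms_per_revolution = 60_000 / rpm                 # about 8.33 ms per revolution
        avg_latency_one_copy = ms_per_revolution / 2     # about 4.17 ms with one copy per track
        avg_latency_two_copies = ms_per_revolution / 4   # about 2.08 ms with a copy on each half track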
  • As such, the hot spare 18 a is used to make the mirror area 20 of the hybrid RAID subsystem 16 faster. FIG. 6C shows an example block diagram of a hybrid RAID subsystem 16 including a RAID controller 30 that implements the hybrid RAID data organization of FIG. 6A, for seven disk drives (drive0-drive6), wherein drive6 is the hot spare 18 a. Considering drive0-drive1 in FIG. 6C, for example, M0 data is in drive0 and is duplicated in drive1, whereby drive1 protects drive0. In addition, M0 data is written to the spare drive6 using replication, such that if requested M0 data is in the write log 26 in the mirror area 20, it can be read back from drive0, drive1, or the spare drive6. Since M0 data is replicated twice in drive6, drive6 appears to have a higher r.p.m. because, as described, replication lowers read latency. The spare drive6 can be configured to store all the mirrored blocks in a replicated fashion, similar to that for M0 data, to improve the read performance of the hybrid subsystem 16.
  • Because a hot spare disk drive should match the capacity of the other disk drives in the disk array (primary array), and since in this example the mirror area data (M0-M5) is half the capacity of a disk drive 18, the hot spare 18 a can replicate the mirror area 20 twice. If the hot spare 18 a includes a replication of the mirror area, the hot spare 18 a can be removed from the subsystem 16 and backed up. The backup can be performed off-line, without using network bandwidth. A new baseline could be created from the hot spare 18 a.
  • If, for example, a full backup of the disk array has previously been made to tape, and the hot spare 18 a contains all writes since that backup, then the backup can be restored from tape to a secondary disk array and all writes from the log file 26 written to the stripe 22 of the secondary disk array. To speed this process, only the most recent update to a given block need be written. The writes need not take place in temporal order but can be optimized to minimize the time between reads of the hot spare and/or writes to the secondary array. The stripe of the secondary array is then in the same state as that of the primary array, as of the time the hot spare was removed from the primary array. Backing up the secondary array to tape at this point creates a new baseline that can then be updated with newer hot spares over time to create newer baselines, facilitating fast emergency restores. Such new baseline creation can be done without a host, but rather with an appliance including a disk array and a tape drive. If the new baseline tape backup fails, the process can revert to the previous baseline and a tape backup of the hot spare.
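  • The optimization of writing only the most recent update per block when rebuilding the secondary array can be sketched as follows (illustrative only; log entries are the address/data/time-stamp records described above):

        def latest_updates(log_entries):
            # Keep only the newest write for each block address.
            newest = {}
            for entry in sorted(log_entries, key=lambda e: e["ts"]):
                newest[entry["addr"]] = entry["data"]   # later time stamps overwrite earlier ones
            return newest

        def apply_to_secondary_stripe(stripe, log_entries):
            # Apply the reduced set of writes to the restored secondary array.
            for addr, data in latest_updates(log_entries).items():
                stripe[addr] = data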
  • FIG. 7A shows a block diagram of an embodiment of a hybrid RAID subsystem 16 implementing said hybrid RAID data organization, and further including a hot spare 18 a as a redundant mirror and a flashback module 34, according to the present invention. Writing to the log 26 in the mirror area 20 and the flashback module 34 removes the write performance penalty normally associated with replication on a mirror. Replication on a mirror involves adding a quarter rotation to all writes: when the target track is acquired, the average latency to one of the replicated sectors is one quarter rotation, but half a rotation is needed to write the other sector. Since the average latency on a standard mirror is half a rotation, an additional quarter rotation is required for writes. With the flashback module 34, acknowledgment of write non-volatility to the host can occur upon receipt of the write in the RAM write cache 32 in the RAID controller 30. Writes from the RAM write cache 32 to the disk log file write cache 26 occur in the background during periods of non-peak activity. By writing sequentially to the log file 26, the likelihood of such non-peak activity is greatly increased. FIG. 7B shows a block diagram of another embodiment of the hybrid RAID subsystem 16 of FIG. 7A, wherein the flashback module 34 is part of the data organization manager 28 that includes the RAID controller 30.
  • Another embodiment of a hybrid RAID subsystem 16 according to the present invention provides data block service and can be used as any block device (e.g., single disk drive, RAID, etc.). Such a hybrid RAID subsystem can be used in any system wherein a device operating at a data block level can be used. FIG. 8A shows an example of utilizing an embodiment of a hybrid RAID subsystem 16 according to the present invention in an example block device environment such as a storage area network (SAN) 42. In a SAN, connected devices exchange data blocks.
  • FIG. 8B shows an example of utilizing an embodiment of a hybrid RAID subsystem 16 according to the present invention as a network attached storage (NAS) in a network 44. In NAS, connected devices exchange files; as such, a file server 46 is positioned in front of the hybrid RAID subsystem 16. The file server portion of a NAS device can be simplified with a focus solely on file service, and data integrity is provided by the hybrid RAID subsystem 16.
  • The present invention provides further example enhancements to the hybrid RAID subsystem, described herein below. As mentioned, the mirror area 20 (FIG. 3A) acts as a temporary store for the log cache 26, prior to storing the write data in its final location in the stripe 22. Before purging the data from the temporary mirror 20, the log 26 can be written sequentially to an archival storage medium such as tape. Then, to return to a prior state of the data set, if a baseline backup of the entire RAID subsystem stripe 22 is created just before the log files are archived, each successive state of the RAID subsystem 16 can be recreated by re-executing the write requests within the archived log files. This would allow any earlier state of the stripe 22 of the RAID subsystem 16 to be recreated (i.e., infinite roll-back or rewind). This is beneficial e.g. in allowing recovery from user error such as accidentally erasing a file, in allowing recovery from a virus infection, etc. Referring to the example steps in the flowchart of FIG. 8C, to recreate a state of the data set 24 in the stripe 22 at a selected time, a copy of the data set 24 created at a back-up time prior to the selected time is obtained (step 190), and a copy of the cache log 26 associated with said data set copy is obtained (step 192). Said associated cache log 26 includes entries 26 a (FIG. 4A) created time-sequentially immediately subsequent to said back-up time. Each data block in each entry of said associated cache log 26 is time-sequentially transferred to the corresponding block address in the data set copy, until a time stamp indicating said selected time is reached in an entry 26 a of the associated cache log (step 194).
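  • The roll-back of steps 190-194 is a bounded replay of the archived log against a restored baseline; a minimal sketch (illustrative only, assuming time-ordered entries of the form shown in FIG. 4A):

        def rewind(baseline, archived_log, selected_time):
            # Step 190: start from a copy of the baseline data set.
            data_set = dict(baseline)
            # Steps 192-194: replay entries in time order, stopping at the selected time.
            for entry in archived_log:
                if entry["ts"] > selected_time:
                    break
                data_set[entry["addr"]] = entry["data"]
            return data_set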
  • The present invention further provides compressing the data in the log 26 stored in the mirror area 20 of the hybrid RAID system 16 for cost effectiveness. Compression is not employed in a conventional RAID subsystem because of variability in data redundancy. For example, a given data block is to be read, modified and rewritten. If the read data consumes the entire data block and the modified data does not contain as much redundancy as did the original data, then the compressed modified data cannot fit in the data block on disk.
  • However, a read/modify/write operation is not a valid operation in the mirror area 20 in the present invention, because the mirror area 20 contains a sequential log file of writes. While a given data block may be read from the mirror area 20, after any modification the data block would be appended to the existing log file stream 26, not overwritten in place. Because of this, variability in compression is not an issue in the mirror area 20. Modern compression techniques can, for example, halve the size of typical data, whereby use of compression in the mirror area 20 effectively doubles its capacity. This allows doubling the effective mirror area size, or cutting the actual mirror area size in half without reducing capacity relative to a mirror area without compression. The compression technique can similarly be applied to the RAM write cache 32.
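  • Because the log is append-only, each entry can be compressed independently on the way in, with no fixed-size block to overflow on rewrite; a minimal sketch using zlib (illustrative only):

        import zlib

        def append_compressed(log, ts, addr, data):
            # Compress the block as it is appended to the mirror-area log.
            log.append({"ts": ts, "addr": addr, "data": zlib.compress(data)})

        def read_block(entry):
            return zlib.decompress(entry["data"])

        log = []
        append_compressed(log, 1.0, 0, b"A" * 4096)   # a highly redundant block compresses well
        assert read_block(log[-1]) == b"A" * 4096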
  • For additional data protection, in another version of the present invention, the data in the RAID subsystem 16 may be replicated to a system 16 a (FIG. 7B) at a remote location. The remote system 16 a may not be called upon except in the event of an emergency in which the primary RAID subsystem 16 is shut down. However, the remote system 16 a can provide further added value in the case of the present invention. In particular, the primary RAID subsystem 16 sends data in the log file 26 in mirror area 20 to the remote subsystem 16 a wherein in this example the remote subsystem 16 a comprises a hybrid RAID subsystem according to the present invention. If the log file data is compressed the transmission time to the remote system 16 a can be reduced. Since the load on the remote subsystem 16 a is less than that on the primary subsystem 16 (i.e., the primary subsystem 16 responds to both read and write requests whereas the remote subsystem 16 a need only respond to writes), the remote subsystem 16 a can be the source of parity information for the primary subsystem 16. As such, within the remote subsystem 16 a, in the process of writing data from the mirror area to its final address on the stripe in the subsystem 16 a, the associated parity data is generated. The remote subsystem 16 a can then send the parity data (preferably compressed) to the primary subsystem 16 which can then avoid generating parity data itself, accelerating the transfer process for a given data block between the mirror and the stripe areas in the primary subsystem 16.
  • The present invention goes beyond standard RAID by protecting data integrity, not just providing device reliability. Infinite roll-back provides protection during the window of vulnerability between backups. A hybrid mirror/stripe data organization results in improved performance. With the addition of the flashback module 34, a conventional RAID mirror is outperformed at a cost which approaches that of a stripe. Further performance enhancement is attained with replication on an otherwise dormant hot spare and that hot spare can be used by a host-less appliance to generate a new baseline backup.
  • The present invention can be implemented in various data processing systems such as Enterprise systems, networks, SAN, NAS, medium and small systems (e.g., in a personal computer a write log is used, and data transferred to the user data set in background). As such in the description herein, the “host” and “host system” refer to any source of information that is in communication with the hybrid RAID system for transferring data to, and from, the hybrid RAID subsystem.
  • The present invention has been described in considerable detail with reference to certain preferred versions thereof; however, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein.

Claims (34)

  1-55. (canceled)
  56. A method for storing data in a fault-tolerant storage subsystem having an array of failure independent data storage units, comprising the steps of:
    dividing the data storage area on the data storage units into a logical mirror area and a logical stripe area, such that when storing data in the mirror area, duplicating the data by keeping a duplicate copy of the data on a pair of storage units, and when storing data in the stripe area, storing data as stripes of blocks, including data blocks and associated error-correction blocks;
    storing a data set in the stripe area, and storing an associated log cache in the mirror area;
    in response to a request from a host to write data to the storage subsystem: storing the host data in the log cache in the mirror area, and acknowledging completion of the write to the host;
    copying said host data from the log cache in the mirror area to the data set in the stripe area.
  57. The method of claim 56, wherein:
    the log cache comprises a write log having multiple time-sequential entries, each entry including a data block, the data block address in the data set, and a data block time stamp.
  58. The method of claim 57, wherein:
    said request from the host includes said host data and a block address in the data set for storing the host data;
    the step of storing the host data in the log cache in response to said host request further includes the steps of entering the host data, said block address and a time stamp in an entry in the log cache.
  59. The method of claim 57, wherein:
    the step of copying said host data from the log cache in the mirror area to the data set in the stripe area, further comprises the steps of: copying the host data in said log cache entry in the mirror area to said block address in the data set in the stripe area.
  60. The method of claim 57, further comprising the steps of:
    archiving said log cache entry in an archive; and
    purging said entry from the cache log.
  61. The method of claim 58 further comprising the steps of:
    in response to a request to recreate a state of the data set at a selected time:
    obtaining a copy of the data set created at a back-up time prior to the selected time;
    obtaining a cache log associated with said data set copy, the associated cache log including entries created time-sequentially immediately subsequent to said back-up time; and
    time-sequentially transferring each data block in each entry of said associated cache log, to the corresponding block address in the data set copy, until said selected time stamp is reached in an entry of the associated cache log.
  62. The method of claim 57, wherein the storage subsystem further includes a cache memory, the method further comprising the steps of:
    in response to a request to write data to the storage subsystem: storing the data in the cache memory, acknowledging completion of the write, and copying the data from the cache memory to the log cache in the mirror area.
  63. The method of claim 62, further comprising the steps of:
    copying said data from the log cache in the mirror area to the data set in the stripe area.
  64. The method of claim 63, further comprising the steps of:
    in response to a request to read data from the storage subsystem:
    determining if the requested data is in the cache memory, and if so, providing the requested data from the cache memory,
    otherwise, determining if the requested data is in the log cache in the mirror area, and if so, providing the requested data from the log cache,
    otherwise, determining if the requested data is in the data set in the stripe area, and if so, providing the requested data from the data set.
  65. The method of claim 57, further comprising the steps of compressing the data stored in the mirror area.
  66. The method of claim 57, wherein the data storage units comprise data disk drives.
  67. A fault-tolerant storage subsystem comprising:
    an array of failure independent data storage units;
    a controller that logically divides the data storage area on the data storage units into a logical mirror area and a logical stripe area, wherein the controller stores data in the mirror area by duplicating the data and keeping a duplicate copy of the data on a pair of storage units, and the controller stores data in the stripe area as stripes of blocks, including data blocks and associated error-correction blocks;
    the controller further maintains a data set in the stripe area, and an associated log cache in the mirror area; and
    in response to a request to write incoming data to the storage subsystem, the controller stores the incoming data in the log cache in the mirror area, and acknowledges completion of the write, and the controller copies said incoming data from the log cache in the mirror area to the data set in the stripe area.
  68. The storage subsystem of claim 67, wherein:
    the log cache comprises a write log having multiple time sequential entries, each entry including a data block, the data block address in the data set, and time stamp.
  69. The storage subsystem of claim 68, wherein:
    said request includes said incoming data and a block address in the data set for storing the incoming data; and
    the controller enters the incoming data, said block address and a time stamp in an entry in the log cache.
  70. The storage subsystem of claim 69, wherein in response to a request to read data from the data set, the controller further:
    determines if the requested data is in the log cache in the mirror area, and if so, provides the requested data from the log cache,
    otherwise, the controller determines if the requested data is in the data set in the stripe area, and if so, provides the requested data from the data set.
  71. The storage subsystem of claim 69, wherein:
    the controller copies said incoming data from the log cache in the mirror area to the data set in the stripe area, by copying the incoming data in said log cache entry in the mirror area to said block address in the data set in the stripe area.
  72. The storage subsystem of claim 69, further comprising a cache memory, wherein:
    in response to a request to write data to the data set, the controller stores the data in the cache memory, and acknowledges completion of the write; and
    the controller further copies the data from the cache memory to the log cache in the mirror area.
  73. The storage subsystem of claim 72, wherein the controller further copies said data from the log cache in the mirror area to the data set in the stripe area.
  74. The storage subsystem of claim 73, wherein in response to a request to read data from the data set, the controller further:
    determines if the requested data is in the cache memory, and if so, provides the requested data from the cache memory,
    otherwise, the controller determines if the requested data is in the log cache in the mirror area, and if so, provides the requested data from the log cache,
    otherwise, the controller determines if the requested data is in the data set in the stripe area, and if so, provides the requested data from the data set.
  75. The storage subsystem of claim 68, wherein the controller further compresses the data stored in the mirror area.
  76. A data organization manager for a fault-tolerant storage subsystem having an array of failure independent data storage units, the data organization manager comprising:
    a controller that logically divides the data storage area on the data storage units into a hybrid of a logical mirror area and a logical stripe area, wherein the controller stores data in the mirror area by duplicating the data and keeping a duplicate copy of the data on a pair of storage units, and the controller stores data in the stripe area as stripes of blocks, including data blocks and associated error-correction blocks;
    the controller maintains a data set in the stripe area, and an associated log cache in the mirror area, and in response to a request to write data to the storage subsystem, the controller further: stores the data in the log cache in the mirror area, acknowledges completion of the write, and copies said data from the log cache in the mirror area to the data set in the stripe area.
  23. 77. The data organization manager of claim 76, wherein:
    the log cache comprises a write log having multiple time sequential entries, each entry including a data block, the data block address in the data set, and a time stamp;
    said request includes said data and a block address in the data set for storing the data; and
    the controller enters the data, said block address and a time stamp in an entry in the log cache.
  24. 78. The data organization manager of claim 76, wherein in response to a request to read data from the storage subsystem, the controller further:
    determines if the requested data is in the log cache in the mirror area, and if so, provides the requested data from the log cache;
    otherwise, the controller determines if the requested data is in the data set in the stripe area, and if so, provides the requested data from the data set.
  25. 79. The data organization manager of claim 77, wherein:
    the controller copies said data from the log cache in the mirror area to the data set in the stripe area, by copying the data in said log cache entry in the mirror area to said block address in the data set in the stripe area.
  26. 80. The data organization manager of claim 79, wherein in response to a request to recreate a state of the data set at a selected time, the controller further:
    obtains a copy of the data set created at a back-up time prior to the selected time;
    obtains a cache log associated with said data set copy, the associated cache log including entries created time sequentially immediately subsequent to said back-up time; and
    time sequentially transfers each data block in each entry of said associated cache log, to the corresponding block address in the data set copy, until said selected time stamp is reached in an entry of the associated cache log.
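The rewind operation of claim 80 starts from a backup copy of the data set and replays the associated cache-log entries in time order until the selected time is reached. A minimal sketch, reusing the hypothetical WriteLogEntry entries from the earlier examples:

    def rewind(backup_data_set: dict, cache_log: list, selected_time: float) -> dict:
        """Recreate the data set as it stood at selected_time (hypothetical helper).

        backup_data_set: block_address -> data_block copy taken at a back-up time
                         prior to selected_time.
        cache_log:       WriteLogEntry items recorded immediately subsequent to that
                         back-up time.
        """
        recreated = dict(backup_data_set)
        for entry in sorted(cache_log, key=lambda e: e.timestamp):
            if entry.timestamp > selected_time:
                break                    # stop once the selected time stamp is reached
            recreated[entry.block_address] = entry.data_block
        return recreated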
  27. 81. The data organization manager of claim 77, further comprising a cache memory, wherein:
    in response to a request to write data to the data set, the controller stores the data in the cache memory, and acknowledges completion of the write; and
    the controller further copies the data from the cache memory to the log cache in the mirror area.
  28. 82. The data organization manager of claim 81, wherein the controller further copies said data from the log cache in the mirror area to the data set in the stripe area.
  29. 83. The data organization manager of claim 76, wherein in response to a request to read data from the data set, the controller further:
    determines if the requested data is in the cache memory, and if so, provides the requested data from the cache memory,
    otherwise, the controller determines if the requested data is in the log cache in the mirror area, and if so, provides the requested data from the log cache,
    otherwise, the controller determines if the requested data is in the data set in the stripe area, and if so, provides the requested data from the data set.
  30. 84. The data organization manager of claim 81, further comprising a memory backup module including non-volatile memory and a battery, wherein the storage subsystem is normally powered from a power supply;
    wherein, upon detecting power failure from the power supply, the controller powers the cache memory and the non-volatile memory from the battery instead, and copies the data content of the cache memory to the non-volatile memory, and upon detecting restoration of power from the power supply, the controller copies back said data content from the non-volatile memory to the cache memory.
  31. 85. The data organization manager of claim 84, wherein said cache memory comprises random access memory (RAM), and said non-volatile memory comprises flash memory (FLASH).
  32. 86. The data organization manager of claim 84, wherein said battery comprises a rechargeable battery that is normally trickle charged by the power supply.
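Claims 84 through 86 describe dumping the volatile cache memory into non-volatile (flash) memory under battery power when the supply fails, and copying it back once power returns. A minimal sketch with hypothetical event hooks (on_power_failure, on_power_restored):

    class MemoryBackupModule:
        """Hypothetical model of the battery-backed RAM-to-FLASH dump and restore."""

        def __init__(self, cache_memory: dict):
            self.cache_memory = cache_memory   # volatile controller cache (RAM)
            self.flash_image = None            # non-volatile copy (FLASH)

        def on_power_failure(self) -> None:
            # The battery keeps the RAM and flash powered just long enough to take this dump.
            self.flash_image = dict(self.cache_memory)

        def on_power_restored(self) -> None:
            # Copy the preserved content back into the cache memory.
            if self.flash_image is not None:
                self.cache_memory.clear()
                self.cache_memory.update(self.flash_image)
                self.flash_image = None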
  33. 87. The data organization manager of claim 76, wherein the controller further reserves one of the storage units as a spare for use in case one of the other storage units fails, such that while the spare storage unit is not in use, the controller further:
    replicates the log cache data stored in the mirror area into the spare storage unit, such that multiple copies of that data are stored in the spare storage unit; and
    upon receiving a request to read data from the data set, the controller determines if the requested data is in the spare storage unit, and if so, the controller selects a copy of the requested data in the spare storage unit that can be provided with minimum read latency relative to other copies of the selected data, and provides the selected copy of the requested data.
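Claim 87 keeps several copies of the log-cache data on the otherwise idle spare unit and answers reads from whichever copy is cheapest to reach. In the sketch below, head-to-block seek distance stands in for read latency; that proxy, like the function itself, is an assumption for illustration.

    def read_from_spare(copies, head_position: int) -> bytes:
        """Pick the replica on the spare unit that can be read with the least latency.

        copies:        (physical_block, data_block) pairs holding the same logical
                       block, replicated across the spare while it is not needed
                       as a rebuild target.
        head_position: current head location, used as a simple seek-distance proxy.
        """
        _, data_block = min(copies, key=lambda c: abs(c[0] - head_position))
        return data_block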
  34. 88. The data organization manager of claim 77, wherein the controller further compresses the data stored in the mirror area and the cache.
US11433152 2002-09-20 2006-05-13 Accelerated RAID with rewind capability Abandoned US20060206665A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10247859 US7076606B2 (en) 2002-09-20 2002-09-20 Accelerated RAID with rewind capability
US11433152 US20060206665A1 (en) 2002-09-20 2006-05-13 Accelerated RAID with rewind capability

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11433152 US20060206665A1 (en) 2002-09-20 2006-05-13 Accelerated RAID with rewind capability

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10247859 Division US7076606B2 (en) 2002-09-20 2002-09-20 Accelerated RAID with rewind capability

Publications (1)

Publication Number Publication Date
US20060206665A1 (en) 2006-09-14

Family

ID=31946445

Family Applications (2)

Application Number Title Priority Date Filing Date
US10247859 Expired - Fee Related US7076606B2 (en) 2002-09-20 2002-09-20 Accelerated RAID with rewind capability
US11433152 Abandoned US20060206665A1 (en) 2002-09-20 2006-05-13 Accelerated RAID with rewind capability

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10247859 Expired - Fee Related US7076606B2 (en) 2002-09-20 2002-09-20 Accelerated RAID with rewind capability

Country Status (3)

Country Link
US (2) US7076606B2 (en)
EP (1) EP1400899A3 (en)
JP (1) JP2004118837A (en)

Families Citing this family (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7418620B1 (en) * 2001-02-16 2008-08-26 Swsoft Holdings, Ltd. Fault tolerant distributed storage method and controller using (N,K) algorithms
JP4186602B2 (en) 2002-12-04 2008-11-26 株式会社日立製作所 Update data write method using the journal log
JP2004213435A (en) * 2003-01-07 2004-07-29 Hitachi Ltd Storage device system
US6965979B2 (en) * 2003-01-29 2005-11-15 Pillar Data Systems, Inc. Methods and systems of host caching
JP4165747B2 (en) 2003-03-20 2008-10-15 株式会社日立製作所 Storage system, a program of the control device and a control device
US7668876B1 (en) * 2003-04-25 2010-02-23 Symantec Operating Corporation Snapshot-based replication infrastructure for efficient logging with minimal performance effect
US20040254962A1 (en) * 2003-06-12 2004-12-16 Shoji Kodama Data replication for enterprise applications
US7149858B1 (en) 2003-10-31 2006-12-12 Veritas Operating Corporation Synchronous replication for system and data security
JP2005166016A (en) * 2003-11-11 2005-06-23 Nec Corp Disk array device
US7234074B2 (en) * 2003-12-17 2007-06-19 International Business Machines Corporation Multiple disk data storage system for reducing power consumption
JP4634049B2 (en) * 2004-02-04 2011-02-23 株式会社日立製作所 Abnormality notification control in the disk array system
JP4112520B2 (en) * 2004-03-25 2008-07-02 株式会社東芝 Correcting code generating apparatus, correcting code generation method, an error correction device, and an error correction method
US20050235336A1 (en) 2004-04-15 2005-10-20 Kenneth Ma Data storage system and method that supports personal video recorder functionality
JP4519563B2 (en) * 2004-08-04 2010-08-04 株式会社日立製作所 Storage systems and data processing systems
US7519629B2 (en) * 2004-09-30 2009-04-14 International Business Machines Corporation System and method for tolerating multiple storage device failures in a storage system with constrained parity in-degree
JP4428202B2 (en) * 2004-11-02 2010-03-10 日本電気株式会社 Disk array subsystem, distributed arrangement method in a disk array subsystem, a control method, program
US7702864B2 (en) * 2004-11-18 2010-04-20 International Business Machines Corporation Apparatus, system, and method for writing stripes in parallel to unique persistent storage devices
US7644046B1 (en) * 2005-06-23 2010-01-05 Hewlett-Packard Development Company, L.P. Method of estimating storage system cost
US7529968B2 (en) * 2005-11-07 2009-05-05 Lsi Logic Corporation Storing RAID configuration data within a BIOS image
US7761426B2 (en) * 2005-12-07 2010-07-20 International Business Machines Corporation Apparatus, system, and method for continuously protecting data
JP2007264894A (en) * 2006-03-28 2007-10-11 Kyocera Mita Corp Data storage system
US7617361B2 (en) * 2006-03-29 2009-11-10 International Business Machines Corporation Configureable redundant array of independent disks
KR100771521B1 (en) * 2006-10-30 2007-10-30 삼성전자주식회사 Flash memory device having a multi-leveled cell and programming method thereof
US7904647B2 (en) 2006-11-27 2011-03-08 Lsi Corporation System for optimizing the performance and reliability of a storage controller cache offload circuit
US20080168224A1 (en) * 2007-01-09 2008-07-10 Ibm Corporation Data protection via software configuration of multiple disk drives
US8370715B2 (en) * 2007-04-12 2013-02-05 International Business Machines Corporation Error checking addressable blocks in storage
US8032702B2 (en) 2007-05-24 2011-10-04 International Business Machines Corporation Disk storage management of a tape library with data backup and recovery
US7853751B2 (en) * 2008-03-12 2010-12-14 Lsi Corporation Stripe caching and data read ahead
JP2009252114A (en) * 2008-04-09 2009-10-29 Hitachi Ltd Storage system and data saving method
US20090282194A1 (en) * 2008-05-07 2009-11-12 Masashi Nagashima Removable storage accelerator device
JP5220185B2 (en) 2008-05-16 2013-06-26 フュージョン−アイオー・インコーポレーテッド Detecting a failed data storage mechanism, a device for replacing, the system and method
CN102037628A (en) * 2008-05-22 2011-04-27 Lsi公司 Battery backup system with sleep mode
CN101325610B (en) * 2008-07-30 2011-12-28 杭州华三通信技术有限公司 Virtual tape libraries and disk backup systems power control method
JP5353887B2 (en) * 2008-08-06 2013-11-27 富士通株式会社 The control unit of the disk array device, the data transfer device and the power recovery processing method
US20110258362A1 (en) * 2008-12-19 2011-10-20 Mclaren Moray Redundant data storage for uniform read latency
US20100287407A1 (en) * 2009-05-05 2010-11-11 Siemens Medical Solutions Usa, Inc. Computer Storage Synchronization and Backup System
US8281227B2 (en) 2009-05-18 2012-10-02 Fusion-10, Inc. Apparatus, system, and method to increase data integrity in a redundant storage system
US8307258B2 (en) 2009-05-18 2012-11-06 Fusion-10, Inc Apparatus, system, and method for reconfiguring an array to operate with less storage elements
US8732396B2 (en) * 2009-06-08 2014-05-20 Lsi Corporation Method and apparatus for protecting the integrity of cached data in a direct-attached storage (DAS) system
US8930622B2 (en) 2009-08-11 2015-01-06 International Business Machines Corporation Multi-level data protection for flash memory system
US8176284B2 (en) 2009-08-11 2012-05-08 Texas Memory Systems, Inc. FLASH-based memory system with variable length page stripes including data protection information
US7941696B2 (en) * 2009-08-11 2011-05-10 Texas Memory Systems, Inc. Flash-based memory system with static or variable length page stripes including data protection information and auxiliary protection stripes
WO2011073940A1 (en) * 2009-12-17 2011-06-23 International Business Machines Corporation Data management in solid state storage systems
US9785561B2 (en) * 2010-02-17 2017-10-10 International Business Machines Corporation Integrating a flash cache into large storage systems
US9311184B2 (en) * 2010-02-27 2016-04-12 Cleversafe, Inc. Storing raid data as encoded data slices in a dispersed storage network
US8112663B2 (en) * 2010-03-26 2012-02-07 Lsi Corporation Method to establish redundancy and fault tolerance better than RAID level 6 without using parity
US8181062B2 (en) * 2010-03-26 2012-05-15 Lsi Corporation Method to establish high level of redundancy, fault tolerance and performance in a raid system without using parity and mirroring
US20110296105A1 (en) * 2010-06-01 2011-12-01 Hsieh-Huan Yen System and method for realizing raid-1 on a portable storage medium
US8554741B1 (en) * 2010-06-16 2013-10-08 Western Digital Technologies, Inc. Timeline application for log structured storage devices
US8738962B2 (en) 2010-11-17 2014-05-27 International Business Machines Corporation Memory mirroring with memory compression
US8966184B2 (en) 2011-01-31 2015-02-24 Intelligent Intellectual Property Holdings 2, LLC. Apparatus, system, and method for managing eviction of data
JP5505329B2 (en) * 2011-02-22 2014-05-28 日本電気株式会社 The disk array device and a control method thereof
CN102682012A (en) * 2011-03-14 2012-09-19 成都市华为赛门铁克科技有限公司 Method and device for reading and writing data in file system
US9396067B1 (en) 2011-04-18 2016-07-19 American Megatrends, Inc. I/O accelerator for striped disk arrays using parity
US9300590B2 (en) 2011-06-24 2016-03-29 Dell Products, Lp System and method for dynamic rate control in Ethernet fabrics
US9798615B2 (en) 2011-07-05 2017-10-24 Dell Products, Lp System and method for providing a RAID plus copy model for a storage network
US8799557B1 (en) * 2011-10-13 2014-08-05 Netapp, Inc. System and method for non-volatile random access memory emulation
US9235524B1 (en) 2011-12-30 2016-01-12 Emc Corporation System and method for improving cache performance
US8627012B1 (en) * 2011-12-30 2014-01-07 Emc Corporation System and method for improving cache performance
US9104529B1 (en) 2011-12-30 2015-08-11 Emc Corporation System and method for copying a cache system
US8930947B1 (en) 2011-12-30 2015-01-06 Emc Corporation System and method for live migration of a virtual machine with dedicated cache
US9009416B1 (en) 2011-12-30 2015-04-14 Emc Corporation System and method for managing cache system content directories
US9158578B1 (en) 2011-12-30 2015-10-13 Emc Corporation System and method for migrating virtual machines
US9053033B1 (en) 2011-12-30 2015-06-09 Emc Corporation System and method for cache content sharing
US9767032B2 (en) 2012-01-12 2017-09-19 Sandisk Technologies Llc Systems and methods for cache endurance
US10073656B2 (en) 2012-01-27 2018-09-11 Sandisk Technologies Llc Systems and methods for storage virtualization
US8856619B1 (en) * 2012-03-09 2014-10-07 Google Inc. Storing data across groups of storage nodes
GB201211041D0 (en) * 2012-06-22 2012-08-01 Ibm An apparatus for restoring redundancy
US9059868B2 (en) 2012-06-28 2015-06-16 Dell Products, Lp System and method for associating VLANs with virtual switch ports
US20140068183A1 (en) * 2012-08-31 2014-03-06 Fusion-Io, Inc. Systems, methods, and interfaces for adaptive persistence
WO2014132373A1 (en) * 2013-02-28 2014-09-04 株式会社 日立製作所 Storage system and memory device fault recovery method
JP6248435B2 (en) * 2013-07-04 2017-12-20 富士通株式会社 Control method for a storage apparatus, and a storage device
US10019352B2 (en) 2013-10-18 2018-07-10 Sandisk Technologies Llc Systems and methods for adaptive reserve storage
JP6244974B2 (en) * 2014-02-24 2017-12-13 富士通株式会社 Control method for a storage apparatus, and a storage device
US8850108B1 (en) 2014-06-04 2014-09-30 Pure Storage, Inc. Storage cluster
US9612952B2 (en) * 2014-06-04 2017-04-04 Pure Storage, Inc. Automatically reconfiguring a storage memory topology
US9367243B1 (en) 2014-06-04 2016-06-14 Pure Storage, Inc. Scalable non-uniform storage sizes
US9836234B2 (en) 2014-06-04 2017-12-05 Pure Storage, Inc. Storage cluster
US9218244B1 (en) 2014-06-04 2015-12-22 Pure Storage, Inc. Rebuilding data across storage nodes
US9213485B1 (en) 2014-06-04 2015-12-15 Pure Storage, Inc. Storage system architecture
US9946894B2 (en) * 2014-06-27 2018-04-17 Panasonic Intellectual Property Management Co., Ltd. Data processing method and data processing device
US9747229B1 (en) 2014-07-03 2017-08-29 Pure Storage, Inc. Self-describing data format for DMA in a non-volatile solid-state storage
US9495255B2 (en) 2014-08-07 2016-11-15 Pure Storage, Inc. Error recovery in a storage cluster
US9483346B2 (en) 2014-08-07 2016-11-01 Pure Storage, Inc. Data rebuild on feedback from a queue in a non-volatile solid-state storage
US9563524B2 (en) 2014-12-11 2017-02-07 International Business Machines Corporation Multi level data recovery in storage disk arrays
US9747177B2 (en) * 2014-12-30 2017-08-29 International Business Machines Corporation Data storage system employing a hot spare to store and service accesses to data having lower associated wear
US20160202924A1 (en) * 2015-01-13 2016-07-14 Telefonaktiebolaget L M Ericsson (Publ) Diagonal organization of memory blocks in a circular organization of memories
US9948615B1 (en) 2015-03-16 2018-04-17 Pure Storage, Inc. Increased storage unit encryption based on loss of trust
US9940234B2 (en) 2015-03-26 2018-04-10 Pure Storage, Inc. Aggressive data deduplication using lazy garbage collection
US10082985B2 (en) 2015-03-27 2018-09-25 Pure Storage, Inc. Data striping across storage nodes that are assigned to multiple logical arrays
US9672125B2 (en) 2015-04-10 2017-06-06 Pure Storage, Inc. Ability to partition an array into two or more logical arrays with independently running software
US9817576B2 (en) 2015-05-27 2017-11-14 Pure Storage, Inc. Parallel update to NVRAM
US10108355B2 (en) 2015-09-01 2018-10-23 Pure Storage, Inc. Erase block state detection
US9768953B2 (en) 2015-09-30 2017-09-19 Pure Storage, Inc. Resharing of a split secret
US9727244B2 (en) 2015-10-05 2017-08-08 International Business Machines Corporation Expanding effective storage capacity of a data storage system while providing support for address mapping recovery
US9843453B2 (en) 2015-10-23 2017-12-12 Pure Storage, Inc. Authorizing I/O commands with I/O tokens
US10007457B2 (en) 2015-12-22 2018-06-26 Pure Storage, Inc. Distributed transactions with token-associated execution

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030200473A1 (en) * 1990-06-01 2003-10-23 Amphus, Inc. System and method for activity or event based dynamic energy conserving server reconfiguration
US5297258A (en) * 1991-11-21 1994-03-22 Ast Research, Inc. Data logging for hard disk data storage systems
US5504883A (en) * 1993-02-01 1996-04-02 Lsc, Inc. Method and apparatus for insuring recovery of file control information for secondary storage systems
US5392244A (en) * 1993-08-19 1995-02-21 Hewlett-Packard Company Memory systems with data storage redundancy management
US5649152A (en) * 1994-10-13 1997-07-15 Vinca Corporation Method and system for providing a static snapshot of data stored on a mass storage system
US5835953A (en) * 1994-10-13 1998-11-10 Vinca Corporation Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating
US6073222A (en) * 1994-10-13 2000-06-06 Vinca Corporation Using a virtual device to access data as it previously existed in a mass data storage system
US6085298A (en) * 1994-10-13 2000-07-04 Vinca Corporation Comparing mass storage devices through digests that are representative of stored data in order to minimize data transfer
US6098128A (en) * 1995-09-18 2000-08-01 Cyberstorage Systems Corporation Universal storage management system
US6148368A (en) * 1997-07-31 2000-11-14 Lsi Logic Corporation Method for accelerating disk array write operations using segmented cache memory and data logging
US5960451A (en) * 1997-09-16 1999-09-28 Hewlett-Packard Company System and method for reporting available capacity in a data storage system with variable consumption characteristics
US6704838B2 (en) * 1997-10-08 2004-03-09 Seagate Technology Llc Hybrid data storage and reconstruction system and method for a data storage device
US6247149B1 (en) * 1997-10-28 2001-06-12 Novell, Inc. Distributed diagnostic logging system
US6567889B1 (en) * 1997-12-19 2003-05-20 Lsi Logic Corporation Apparatus and method to provide virtual solid state disk in cache memory in a storage controller
US6170063B1 (en) * 1998-03-07 2001-01-02 Hewlett-Packard Company Method for performing atomic, concurrent read and write operations on multiple storage devices
US6223252B1 (en) * 1998-05-04 2001-04-24 International Business Machines Corporation Hot spare light weight mirror for raid system
US6674447B1 (en) * 1999-12-06 2004-01-06 Oridus, Inc. Method and apparatus for automatically recording snapshots of a computer screen during a computer session for later playback
US20020156971A1 (en) * 2001-04-19 2002-10-24 International Business Machines Corporation Method, apparatus, and program for providing hybrid disk mirroring and striping
US6718434B2 (en) * 2001-05-31 2004-04-06 Hewlett-Packard Development Company, L.P. Method and apparatus for assigning raid levels
US20040139128A1 (en) * 2002-07-15 2004-07-15 Becker Gregory A. System and method for backing up a computer system

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8555108B2 (en) 2003-08-14 2013-10-08 Compellent Technologies Virtual disk drive system and method
US9489150B2 (en) 2003-08-14 2016-11-08 Dell International L.L.C. System and method for transferring data between different raid data storage types for current data and replay data
US9436390B2 (en) 2003-08-14 2016-09-06 Dell International L.L.C. Virtual disk drive system and method
US9047216B2 (en) 2003-08-14 2015-06-02 Compellent Technologies Virtual disk drive system and method
US9021295B2 (en) 2003-08-14 2015-04-28 Compellent Technologies Virtual disk drive system and method
US8560880B2 (en) 2003-08-14 2013-10-15 Compellent Technologies Virtual disk drive system and method
US10067712B2 (en) 2003-08-14 2018-09-04 Dell International L.L.C. Virtual disk drive system and method
US7886111B2 (en) 2006-05-24 2011-02-08 Compellent Technologies System and method for raid management, reallocation, and restriping
US8230193B2 (en) 2006-05-24 2012-07-24 Compellent Technologies System and method for raid management, reallocation, and restriping
US9244625B2 (en) 2006-05-24 2016-01-26 Compellent Technologies System and method for raid management, reallocation, and restriping
US20080276124A1 (en) * 2007-05-04 2008-11-06 Hetzler Steven R Incomplete write protection for disk array
US8214684B2 (en) 2007-05-04 2012-07-03 International Business Machines Corporation Incomplete write protection for disk array
US20090204758A1 (en) * 2008-02-13 2009-08-13 Dell Products, Lp Systems and methods for asymmetric raid devices
US20090303630A1 (en) * 2008-06-10 2009-12-10 H3C Technologies Co., Ltd. Method and apparatus for hard disk power failure protection
US8819478B1 (en) * 2008-06-30 2014-08-26 Emc Corporation Auto-adapting multi-tier cache
US9619178B2 (en) * 2008-08-08 2017-04-11 Seagate Technology International Hybrid storage apparatus and logical block address assigning method
US20100037017A1 (en) * 2008-08-08 2010-02-11 Samsung Electronics Co., Ltd Hybrid storage apparatus and logical block address assigning method
US20110225353A1 (en) * 2008-10-30 2011-09-15 Robert C Elliott Redundant array of independent disks (raid) write cache sub-assembly
US20100161883A1 (en) * 2008-12-24 2010-06-24 Kabushiki Kaisha Toshiba Nonvolatile Semiconductor Memory Drive and Data Management Method of Nonvolatile Semiconductor Memory Drive
US8949524B2 (en) 2010-12-13 2015-02-03 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US20120272005A1 (en) * 2010-12-13 2012-10-25 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US9286000B2 (en) 2010-12-13 2016-03-15 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US8543760B2 (en) * 2010-12-13 2013-09-24 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US8458397B2 (en) * 2010-12-13 2013-06-04 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US9547452B2 (en) 2010-12-13 2017-01-17 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US20120151133A1 (en) * 2010-12-13 2012-06-14 International Business Machines Corporation Saving log data using a disk system as primary cache and a tape library as secondary cache
US8856427B2 (en) 2011-06-08 2014-10-07 Panasonic Corporation Memory controller and non-volatile storage device
CN105068760A (en) * 2013-10-18 2015-11-18 华为技术有限公司 Data storage method, data storage apparatus and storage device
US9996421B2 (en) 2013-10-18 2018-06-12 Huawei Technologies Co., Ltd. Data storage method, data storage apparatus, and storage device

Also Published As

Publication number Publication date Type
JP2004118837A (en) 2004-04-15 application
US20040059869A1 (en) 2004-03-25 application
US7076606B2 (en) 2006-07-11 grant
EP1400899A3 (en) 2011-04-06 application
EP1400899A2 (en) 2004-03-24 application

Similar Documents

Publication Publication Date Title
US5572660A (en) System and method for selective write-back caching within a disk array subsystem
US5574882A (en) System and method for identifying inconsistent parity in an array of storage
US5596708A (en) Method and apparatus for the protection of write data in a disk array
US6857057B2 (en) Virtual storage systems and virtual storage system operational methods
US6397229B1 (en) Storage-controller-managed outboard incremental backup/restore of data
US6480970B1 (en) Method of verifying data consistency between local and remote mirrored data storage systems
US7010645B2 (en) System and method for sequentially staging received data to a write cache in advance of storing the received data
US5089958A (en) Fault tolerant computer backup system
US6041423A (en) Method and apparatus for using undo/redo logging to perform asynchronous updates of parity and data pages in a redundant array data storage environment
US5315602A (en) Optimized stripe detection for redundant arrays of disk drives
US6862609B2 (en) Redundant storage for multiple processors in a ring network
US5490248A (en) Disk array system having special parity groups for data blocks with high update activity
US5954822A (en) Disk array apparatus that only calculates new parity after a predetermined number of write requests
US5596709A (en) Method and apparatus for recovering parity protected data
US6785783B2 (en) NUMA system with redundant main memory architecture
US5526482A (en) Storage device array architecture with copyback cache
US9003138B1 (en) Read signature command
US5790773A (en) Method and apparatus for generating snapshot copies for data backup in a raid subsystem
US6385706B1 (en) Apparatus and methods for copying a logical object to a primary storage device using a map of storage locations
US6397308B1 (en) Apparatus and method for differential backup and restoration of data in a computer storage system
US20030158999A1 (en) Method and apparatus for maintaining cache coherency in a storage system
US8214612B1 (en) Ensuring consistency of replicated volumes
US5410667A (en) Data record copy system for a disk drive array data storage subsystem
US5959860A (en) Method and apparatus for operating an array of storage devices
US6983396B2 (en) Apparatus for reducing the overhead of cache coherency processing on each primary controller and increasing the overall throughput of the system

Legal Events

Date Code Title Description
AS Assignment

Owner name: QUANATUM CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ORSELY, TIM;REEL/FRAME:017895/0995

Effective date: 20020831

AS Assignment

Owner name: CREDIT SUISSE, NEW YORK

Free format text: SECURITY AGREEMENT;ASSIGNORS:QUANTUM CORPORATION;ADVANCED DIGITAL INFORMATION CORPORATION;CERTANCE HOLDINGS CORPORATION;AND OTHERS;REEL/FRAME:019605/0159

Effective date: 20070712

Owner name: CREDIT SUISSE,NEW YORK

Free format text: SECURITY AGREEMENT;ASSIGNORS:QUANTUM CORPORATION;ADVANCED DIGITAL INFORMATION CORPORATION;CERTANCE HOLDINGS CORPORATION;AND OTHERS;REEL/FRAME:019605/0159

Effective date: 20070712

AS Assignment

Owner name: QUANTUM INTERNATIONAL, INC., WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007

Effective date: 20120329

Owner name: CERTANCE (US) HOLDINGS, INC., WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007

Effective date: 20120329

Owner name: ADVANCED DIGITAL INFORMATION CORPORATION, WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007

Effective date: 20120329

Owner name: QUANTUM CORPORATION, WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007

Effective date: 20120329

Owner name: CERTANCE, LLC, WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007

Effective date: 20120329

Owner name: CERTANCE HOLDINGS CORPORATION, WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007

Effective date: 20120329