GB2402803A - Arrangement and method for detection of write errors in a storage system - Google Patents
- Publication number
- GB2402803A GB0313419A
- Authority
- GB
- United Kingdom
- Prior art keywords
- arrangement
- combination
- group
- storage system
- check block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1076—Parity data used in redundant arrays of independent storages, e.g. in RAID systems
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/18—Error detection or correction; Testing, e.g. of drop-outs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2211/00—Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
- G06F2211/10—Indexing scheme relating to G06F11/10
- G06F2211/1002—Indexing scheme relating to G06F11/1076
- G06F2211/1007—Addressing errors, i.e. silent errors in RAID, e.g. sector slipping and addressing errors
Abstract
An arrangement for detection of write errors in a disk storage system 100 uses phase fields F to improve data integrity with a low impact on disk performance. User data blocks D are divided into groups 120 and a check block P is inserted after each group. The check block P contains the phase field F and is updated each time the group is written. The phase field F may be a single-bit field which is inverted after each write, or a multi-bit counter which is incremented after each write. The check block P may also contain an XOR combination of the data blocks D, or an XOR combination of the logical block address (LBA) 220 of the group.
Description
ARRANGEMENT AND METHOD FOR
DETECTION OF WRITE ERRORS IN A STORAGE SYSTEM
Field of the Invention
This invention relates to storage systems and particularly to disk storage systems for electronic data storage.
Background of the Invention
Due to advances in recording technology the capacity of hard drives is doubling annually. In 2003 the areal density is expected to reach 100 Gbits per square inch and a 3.5-inch drive will be capable of storing 300 GB. The reliability of a hard drive is specified in terms of its MTBF and the unrecoverable error rate. Typical specifications for current server-class drives are 1,000,000 hours and 1 unrecoverable error in 10¹⁵ bits read. However, increases in areal density make it harder to maintain reliability due to lower flying heights, media defects, etc. RAID (Redundant Array of Independent Disks) arrays (e.g., RAID-1 or RAID-5) are often used to further improve the reliability of storage systems. However, with high-capacity drives a single level of redundancy is no longer sufficient to reduce the probability of data loss to a negligible level.
It is also possible for a disk drive to occasionally return stale data on a read command because a previous write command has not written to the correct location on the recording medium, or has failed to record on the medium. This may be due to an intermittent hardware failure or a latent design defect. For example, the drive might write the data to the wrong LBA (Logical Block Address) due to a firmware bug, or it may write off track, or it may fail to write at all because a drop of lubricant (commonly referred to as 'lube') lifts the head off the disk surface.
There is increasing interest in using commodity drives such as Advanced Technology Attachment (ATA) drives in server applications because they are about 3 times cheaper in terms of cents/MB. However, these drives were originally intended for intermittent use in PCs and so they may be less reliable than server-class drives. Also, ATA drives only support 512-byte blocks, so block-level LRC (Longitudinal Redundancy Check) fields cannot be used to detect data corruption.
For a single disk drive the controller could read back each block and verify it just after it has been written.
Any type of redundant RAID (Redundant Array of Independent Disks) array could be implemented in a way that allows the read data to be checked. For example, with a RAID-5 array the controller could check that the read data is consistent with the other data drives and the parity drive.
However, both of these approaches drastically reduce the overall throughput in terms of I/O (Input/Output) commands per second, since the first method requires an extra disk revolution and the second method requires several drives to be accessed for each read command.
A need therefore exists for detection of write errors in a storage system wherein the above mentioned disadvantage(s) may be alleviated.
Statement of Invention
In accordance with a first aspect of the present invention there is provided an arrangement for detection of write errors in a storage system, the arrangement comprising: means for storing data blocks in groups, each group comprising a plurality of data blocks and a check block, wherein the check block is updated each time the group is written to storage; and means for detecting write errors by checking the check block.
Preferably, the check block is a combination of data blocks of the group.
Preferably, the combination is a logical Exclusive-OR combination.
Preferably, the check block is a combination of a logical block address associated with the group.
Preferably, the combination is a logical Exclusive-OR combination.
Preferably, the check block is a combination of a phase field which is updated each time the group is written.
Preferably, the combination is a logical Exclusive-OR combination.
Preferably, the phase field comprises a single bit value which is inverted each time the group is written.
Preferably, the phase field comprises a multi-bit value which is updated each time the group is written.
The arrangement preferably further comprises a non-volatile table for phase field values.
Preferably, the non-volatile table comprises a reserved disk drive area, a working copy of the table being cached in a controller of the system.
The arrangement preferably further comprises a non-volatile log arranged to record an entry before a write operation, the entry being arranged for one of A-B: A invalidation, and B deletion on completion of the write operation.
Preferably, the log is arranged to retain updates to the working copy of the table in the controller which have not yet been stored in the non-volatile table.
Preferably, the log is stored in memory for also holding code for a controller of the system.
Preferably, the storage system comprises a disk storage system.
Preferably, the disk storage system comprises an ATA disk drive.
Preferably, the disk storage system comprises a RAID system.
In a second aspect, the present invention provides a method for detection of write errors in a storage system, the method comprising: storing data blocks in groups, each group comprising a plurality of data blocks and a check block; updating the check block each time the group is written; and detecting possible write errors by checking the check block.
Preferably, the check block is a combination of data blocks of the group.
Preferably, the combination is a logical Exclusive-OR combination.
Preferably, the check block is a combination of a logical block address associated with the group.
Preferably, the combination is a logical Exclusive-OR combination.
Preferably, the check block is a combination of a phase field which is updated each time the group is written.
Preferably, the combination is a logical Exclusive-OR combination.
Preferably, the phase field comprises a single bit value which is inverted each time the group is written.
Preferably, the phase field comprises a multi-bit value which is updated each time the group is written.
Preferably, phase field values are stored in a non-volatile table.
Preferably, the non-volatile table comprises a reserved disk drive area, a working copy of the table being cached in a controller of the system.
The method preferably further comprises recording an entry in a non-volatile log before a write operation, and performing one of operations A-B: A invalidating the entry, and B deleting the entry on completion of the write operation.
The method preferably further comprises retaining in the log updates to the working copy of the table in the controller which have not yet been stored in the non-volatile table.
Preferably, the log is stored in memory also holding code for a controller of the system.
Preferably, the storage system comprises a disk storage system.
Preferably, the disk storage system comprises an ATA disk drive.
Preferably, the disk storage system comprises a RAID system.
In a third aspect, the present invention provides a computer program element comprising computer program means for performing substantially the method of the second aspect.
Brief Description of the Drawings
One arrangement and method for detection of write errors in a storage system by using a phase field incorporating the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
FIG. 1 shows a block schematic diagram of a disk drive storage system incorporating the invention; and
FIG. 2 shows a block schematic diagram of a method for computing a parity block using the system of FIG. 1.
Description of Preferred Embodiment
Briefly stated, in its preferred embodiment this invention uses interleaved parity blocks containing a phase field (e.g., a single bit flag) to detect nearly all instances of data corruption by a disk drive.
The parity blocks also provide an additional level of error correction.
These features are particularly useful for ATA drives since they tend to have a higher uncorrectable error rate than server drives. (ATA drives typically specify a hard error rate of 1 in 10¹⁴ bits, and so the chance of a 100 GB drive containing a block with a hard read error is 0.8%. If these drives are then used to build a 10+P RAID-5 array, the chance of a rebuild failing after replacing a drive is 8%.)

Referring now to FIG. 1, a magnetic disk storage system 100 includes a disk 110, in which information is stored in blocks D and P of, typically, 512 bytes. When storing data on disk, one parity block P is inserted following every N data blocks, e.g., as shown, every eight 512-byte blocks or 4 KB.
These N+1 blocks are considered a group 120. Consequently the effective data capacity of the drive is reduced by a factor of N/(N+1).
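As an illustration of the capacity overhead, the fraction of on-disk blocks left for user data can be computed directly (a minimal sketch; the function name is ours, not the patent's):

```python
# Effective user-data capacity when one 512-byte check block follows
# every N data blocks; N = 8 as in the figure.
def effective_fraction(n: int) -> float:
    """Fraction of on-disk blocks that hold user data: N/(N+1)."""
    return n / (n + 1)

overhead_pct = (1 - effective_fraction(8)) * 100  # about 11.1% given up to parity
```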
As illustrated in FIG. 2, the parity block P contains the group parity which is computed as follows: Step 210 - XOR corresponding bytes from each of the data blocks in that group.
Step 220 - XOR the physical LBA of the first block in the group into the first few bytes of the result of step 210. This LBA seed allows detection of addressing errors on nearly all reads and some writes.
Step 230 - XOR a phase field F into the last few bits of the result of step 220. The phase field F may be a single bit value which is inverted each time the group is written.
Alternatively it may be a multi-bit counter which is updated (e.g., incremented) each time the group is written. The phase field detects most of the remaining addressing errors on writes.
Except when the drive encounters a hard read error, the disk controller (not shown) reads and writes the drive in complete groups. It performs the computation above for each group. For a write, the result is written to the parity block. For a read, the result is XOR'ed with the contents of the read parity block and if the result is non-zero then there is an error in that group.
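The three computation steps and the read-side check can be sketched as follows. This is an illustrative Python model, not the controller firmware; the 4-byte LBA seed and 1-byte phase field widths are assumptions, since the description only says "the first few bytes" and "the last few bits":

```python
from functools import reduce

BLOCK = 512  # bytes per block, as in the description

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def check_block(data_blocks, first_lba: int, phase: int) -> bytes:
    """Steps 210-230: XOR the data blocks together, XOR the group's
    first LBA into the leading bytes, and XOR the phase field into
    the trailing byte (field widths are illustrative assumptions)."""
    p = bytearray(reduce(_xor, data_blocks))           # step 210
    for i, b in enumerate(first_lba.to_bytes(4, "little")):
        p[i] ^= b                                      # step 220: LBA seed
    p[-1] ^= phase & 0xFF                              # step 230: phase field
    return bytes(p)

def group_ok(data_blocks, stored_parity: bytes, first_lba: int, phase: int) -> bool:
    """Read path: recompute and compare; any difference (i.e. a
    non-zero XOR with the stored parity) flags an error in the group."""
    return check_block(data_blocks, first_lba, phase) == stored_parity
```

A stale-data read shows up because the recomputed parity folds in the *expected* phase, which no longer matches the phase recorded on disk.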
The parity blocks P allow the controller to handle the following drive errors:
- If the drive encounters an unrecoverable medium error in one data block of a group, the controller restarts the read at the next block.
It then reconstructs the missing block by using the group parity, assuming that the LBA and phase are correct. Finally it reassigns the bad LBA and rewrites the block.
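This reconstruction can be modelled as follows: strip the LBA seed and phase field from the stored parity, then XOR out the surviving data blocks (the 4-byte LBA seed and 1-byte phase widths are our assumptions, not the patent's):

```python
def reconstruct_block(good_blocks, stored_parity: bytes,
                      first_lba: int, phase: int) -> bytes:
    """Rebuild the one unreadable data block of a group, assuming the
    LBA seed and phase recorded in the parity are correct."""
    p = bytearray(stored_parity)
    for i, b in enumerate(first_lba.to_bytes(4, "little")):
        p[i] ^= b                        # remove the LBA seed
    p[-1] ^= phase & 0xFF                # remove the phase field
    for blk in good_blocks:              # XOR out the surviving blocks
        for i, b in enumerate(blk):
            p[i] ^= b
    return bytes(p)                      # what remains is the missing block
```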
- If the drive reads the wrong LBA, the group parity check will be non-zero because of the LBA seed. The controller then retries the read once and returns a medium error if the parity fails again.
- If the drive has previously written the wrong LBA, or the medium was not written at all, and the host then submits a request to read the correct LBA, the group parity check will be non-zero because of the phase field F. The controller then retries the read once and returns a medium error if the parity fails again.
- If the drive has previously written the wrong LBA and the host then submits a request to read the incorrect LBA, the group parity check will be incorrect because of the LBA seed. The controller retries the read once and returns a medium error.
When the controller returns a medium error the data can still be recovered if the drive is a component of a redundant array (not shown).
Since the controller always reads and writes a complete group on disk, short or unaligned writes require a read-modify-write. However RAID-5 has a similar penalty and so there is no additional overhead in this case.
The disk controller must store the current phase of each group in a non-volatile store 130. For example, when using a single-bit phase flag the resulting bit map occupies about 2.6 MB for a 100 GB drive with 4 KB groups. The controller initializes all of the phase flags to zero when the drive is formatted. The phase flag bit map 130 may be implemented in various ways. Flash memory is not directly suitable because it would wear out rapidly if the same group is written repeatedly. Battery-backed SRAMs (Static Random Access Memories) would be bulky and expensive. A preferred solution is to store the bit map in a reserved area of the disk drive and cache a working copy in SDRAM (Synchronous Dynamic Random Access Memory) in the controller. However, to avoid updating the reserved area for every write command, the changes must be batched up in some way and protected from power failure and drive resets.
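The quoted bit-map size follows from one bit per group, with each group occupying 9 × 512 bytes on disk; a quick check using only the figures in the text:

```python
# Size of the single-bit phase map for a 100 GB drive with 4 KB data
# groups: 8 x 512 B data + 1 x 512 B parity = 4608 B on disk per group.
DRIVE_BYTES = 100 * 10**9
GROUP_ON_DISK = 9 * 512

groups = DRIVE_BYTES // GROUP_ON_DISK     # ~21.7 million groups
bitmap_mb = groups / 8 / 2**20            # one bit each -> about 2.6 MB
```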
In addition, if a disk write is interrupted by a power failure or a reset then the state of the phase flag on disk is in doubt. This must not cause a subsequent read to fail with a medium error, since there is nothing wrong with the drive (however it is acceptable to return old data, new data or a mixture of the two since the controller has not completed the write to the host).
These two problems can be solved by making an entry in a non-volatile log just before issuing a disk write, and deleting (or invalidating) it when the write completes. The same log can also be used to retain updates to the bit map in SDRAM which have not yet been flushed to disk. A typical log entry requires 8 bytes as follows:
Bytes | Description |
---|---|
0:3 | Address of first Group to be written. |
4:5 | Number of consecutive Groups to be written. (Non-zero, which indicates a valid log entry.) |
6 | Initialised to FFh (the 'h' suffix denoting hexadecimal notation). Set to 00h after the disk write completes. |
7 | Initialised to FFh. Set to 00h after the bit map has been updated on disk. |
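The entry format can be modelled as a packed structure. Little-endian byte order is an assumption; the patent specifies only the byte offsets:

```python
import struct

def make_log_entry(first_group: int, count: int) -> bytes:
    """Pack the 8-byte log entry: 4-byte group address, 2-byte count,
    then the two FFh completion markers."""
    if count == 0:
        raise ValueError("count must be non-zero to mark a valid entry")
    return struct.pack("<IHBB", first_group, count, 0xFF, 0xFF)

def mark_write_complete(entry: bytes) -> bytes:
    """Clear byte 6 (FFh -> 00h) once the disk write completes."""
    return entry[:6] + b"\x00" + entry[7:]

def mark_bitmap_flushed(entry: bytes) -> bytes:
    """Clear byte 7 once the bit map update has reached disk."""
    return entry[:7] + b"\x00"
```

Because each byte transitions only from FFh to 00h, every byte of the entry is written at most once per disk write, which matters for the flash wear argument below.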
The log can be stored in a small battery-backed SRAM, i.e., NVRAM (Non-Volatile RAM).
In some implementations it may be convenient to store the log in additional sectors of the flash memory that contains the controller code.
When a log sector has been completely used it is erased to all FFh. A word write to flash typically takes about 500 µs and each disk write requires 3 flash writes. This allows nearly 700 disk writes per second. Wear on the flash memory is automatically evened out since the log is written sequentially. Also the log entries are formatted so that each byte is written only once per disk write. For example, 1 MB of flash with an endurance of 10⁵ cycles would last over 4 years at 100 disk writes per second.
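Both figures in this paragraph can be verified by plain arithmetic:

```python
# Throughput: 3 flash word-writes of ~500 us per disk write.
FLASH_WRITE_S = 500e-6
disk_writes_per_s = 1 / (3 * FLASH_WRITE_S)     # ~667, i.e. "nearly 700"

# Endurance: 1 MB log, 8-byte entries, 10^5 erase cycles.
entries_per_cycle = (1 * 2**20) // 8            # 131072 entries per pass
lifetime_s = entries_per_cycle * 10**5 / 100    # at 100 disk writes/s
years = lifetime_s / (365 * 24 * 3600)          # a little over 4 years
```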
To ensure high-availability, storage systems often employ dual (active-active) controllers. In this environment it is desirable to maintain mirror copies of the non-volatile log in each controller. This ensures that the protection provided by the phase fields will not be lost if a controller fails. The two logs must be kept in sync by exchanging messages between the controllers. Each controller must inform the other controller to update its log before it writes a group to disk and again when the write completes. However, in practice this will typically not be a big overhead because higher-level functions such as RAID-5 exchange similar messages anyway.
A means must also be provided to resynchronise the two controllers, e.g. if one of the controllers is replaced after a failure. This is most easily achieved by flushing the outstanding updates out to disk from the log in the other controller and clearing the log in the replacement controller.
It will be understood that the scheme for detection of write errors in a storage system by using phase flags described above provides the following advantages:
- Improved data integrity. The scheme is particularly useful when using low-cost desktop drives. These are normally limited to 512-byte blocks and so there is no room to store a check field in each block.
However, it could also be applied to server-class drives.
- Low performance impact, especially when used in conjunction with RAID-5 (no additional disk accesses are needed to check the read data).
In the simplest case the phase field is a single bit which is inverted on each write. However, for better protection it could be a multi-bit counter which is updated on each write, for example incremented or decremented by a fixed non-zero amount.
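A sketch of the two phase-update variants (the function signature is illustrative, not from the patent):

```python
def next_phase(phase: int, bits: int = 1, step: int = 1) -> int:
    """Single-bit flag: invert on every write.  Multi-bit counter:
    add a fixed non-zero step (positive or negative) modulo the
    field width."""
    if bits == 1:
        return phase ^ 1
    return (phase + step) % (1 << bits)
```

A wider counter lowers the chance that a group which missed several consecutive writes happens to land back on the expected phase value.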
It will be appreciated that the method described above for detection of write errors in a storage system will typically be carried out in software running on a processor (not shown) within the system, and that the software may be provided as a computer program element carried on any suitable data carrier (also not shown) such as a magnetic or optical computer disc.
It will also be understood that although the invention has been described above in the context of a magnetic disk storage system, the invention may alternatively be applied to other storage systems such as those based on optical disks or magnetic tape.
Claims (37)
1. An arrangement for detection of write errors in a storage system, the arrangement comprising: means for storing data blocks in groups, each group comprising a plurality of data blocks and a check block, wherein the check block is updated each time the group is written to storage; and means for detecting write errors by checking the check block.
2. The arrangement of claim 1, wherein the check block is a combination of data blocks of the group.
3. The arrangement of claim 2, wherein the combination is a logical Exclusive-OR combination.
4. The arrangement of any one of claims 1-3, wherein the check block is a combination of a logical block address associated with the group.
5. The arrangement of claim 4, wherein the combination is a logical Exclusive-OR combination.
6. The arrangement of any one of claims 1-5, wherein the check block is a combination of a phase field which is updated each time the group is written.
7. The arrangement of claim 6, wherein the combination is a logical Exclusive-OR combination.
8. The arrangement of claim 6 or 7, wherein the phase field comprises a single bit value which is inverted each time the group is written.
9. The arrangement of claim 6 or 7, wherein the phase field comprises a multi-bit value which is updated each time the group is written.
10. The arrangement of any one of claims 6-9, further comprising a non-volatile table for phase field values.
11. The arrangement of claim 10 wherein the non-volatile table comprises a reserved disk drive area, a working copy of the table being cached in a controller of the system.
12. The arrangement of any one of claims 1-11, further comprising a non-volatile log arranged to record an entry before a write operation, the entry being arranged for one of A-B: A invalidation, and B deletion on completion of the write operation.
13. The arrangement of claim 12 when dependent on claim 11, wherein the log is arranged to retain updates to the working copy of the table in the controller which have not yet been stored in the non-volatile table.
14. The arrangement of claim 12 or 13 wherein the log is stored in memory for also holding code for a controller of the system.
15. The arrangement of any one of claims 1-14, wherein the storage system comprises a disk storage system.
16. The arrangement of claim 15, wherein the disk storage system comprises an ATA disk drive.
17. The arrangement of claim 15 or 16, wherein the disk storage system comprises a RAID system.
18. A method for detection of write errors in a storage system, the method comprising: storing data blocks in groups, each group comprising a plurality of data blocks and a check block; updating the check block each time the group is written; and detecting possible write errors by checking the check block.
19. The method of claim 18, wherein the check block is a combination of data blocks of the group.
20. The method of claim 19, wherein the combination is a logical Exclusive-OR combination.
21. The method of any one of claims 18-20, wherein the check block is a combination of a logical block address associated with the group.
22. The method of claim 21, wherein the combination is a logical Exclusive-OR combination.
23. The method of any one of claims 18-22, wherein the check block is a combination of a phase field which is updated each time the group is written.
24. The method of claim 23, wherein the combination is a logical Exclusive-OR combination.
25. The method of claim 23 or 24, wherein the phase field comprises a single bit value which is inverted each time the group is written.
26. The method of claim 23 or 24, wherein the phase field comprises a multi-bit value which is updated each time the group is written.
27. The method of any one of claims 23-26, wherein phase field values are stored in a non-volatile table.
28. The method of claim 27 wherein the non-volatile table comprises a reserved disk drive area, a working copy of the table being cached in a controller of the system.
29. The method of any one of claims 18-28, further comprising recording an entry in a non-volatile log before a write operation, and performing one of operations A-B: A invalidating the entry, and B deleting the entry on completion of the write operation.
30. The method of claim 29, when dependent on claim 28, further comprising retaining in the log updates to the working copy of the table in the controller which have not yet been stored in the non-volatile table.
31. The method of claim 29 or 30 wherein the log is stored in memory also holding code for a controller of the system.
32. The method of any one of claims 18-31, wherein the storage system comprises a disk storage system.
33. The method of claim 32, wherein the disk storage system comprises an ATA disk drive.
34. The method of claim 32 or 33, wherein the disk storage system comprises a RAID system.
35. A computer program element comprising computer program means for performing substantially the method of any one of claims 18-34.
36. An arrangement, for detection of write errors in a storage system, substantially as hereinbefore described with reference to the accompanying drawings.
37. A method, for detection of write errors in a storage system, substantially as hereinbefore described with reference to the accompanying drawings.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0313419A GB2402803B (en) | 2003-06-11 | 2003-06-11 | Arrangement and method for detection of write errors in a storage system |
CNB2004100028291A CN1324474C (en) | 2003-06-11 | 2004-01-17 | System and method for detecting write errors in a storage device |
US10/839,106 US7380198B2 (en) | 2003-06-11 | 2004-05-05 | System and method for detecting write errors in a storage device |
JP2004141188A JP2005004733A (en) | 2003-06-11 | 2004-05-11 | Arrangement and method of disposition for detecting write error in storage system |
US12/047,368 US7464322B2 (en) | 2003-06-11 | 2008-03-13 | System and method for detecting write errors in a storage device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0313419A GB2402803B (en) | 2003-06-11 | 2003-06-11 | Arrangement and method for detection of write errors in a storage system |
Publications (3)
Publication Number | Publication Date |
---|---|
GB0313419D0 GB0313419D0 (en) | 2003-07-16 |
GB2402803A true GB2402803A (en) | 2004-12-15 |
GB2402803B GB2402803B (en) | 2006-06-28 |
Family
ID=27589831
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0313419A Expired - Lifetime GB2402803B (en) | 2003-06-11 | 2003-06-11 | Arrangement and method for detection of write errors in a storage system |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP2005004733A (en) |
CN (1) | CN1324474C (en) |
GB (1) | GB2402803B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7873878B2 (en) * | 2007-09-24 | 2011-01-18 | International Business Machines Corporation | Data integrity validation in storage systems |
CN102034516B (en) * | 2010-12-10 | 2013-07-24 | 创新科存储技术有限公司 | Method for detecting read-write error of storage medium |
CN102043685A (en) * | 2010-12-31 | 2011-05-04 | 成都市华为赛门铁克科技有限公司 | RAID (redundant array of independent disk) system and data recovery method thereof |
TWI522804B (en) * | 2014-04-23 | 2016-02-21 | 威盛電子股份有限公司 | Flash memory controller and data storage device and flash memory control method |
CN113391941B (en) * | 2021-06-18 | 2022-07-22 | 苏州浪潮智能科技有限公司 | RAID read-write timeout processing method, device, equipment and medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5335235A (en) * | 1992-07-07 | 1994-08-02 | Digital Equipment Corporation | FIFO based parity generator |
US5574736A (en) * | 1994-01-07 | 1996-11-12 | International Business Machines Corporation | Data storage device and method of operation |
US5602857A (en) * | 1993-09-21 | 1997-02-11 | Cirrus Logic, Inc. | Error correction method and apparatus |
US5623595A (en) * | 1994-09-26 | 1997-04-22 | Oracle Corporation | Method and apparatus for transparent, real time reconstruction of corrupted data in a redundant array data storage system |
EP0825534A2 (en) * | 1996-08-13 | 1998-02-25 | Hewlett-Packard Company | Method and apparatus for parity block generation |
US5805799A (en) * | 1995-12-01 | 1998-09-08 | Quantum Corporation | Data integrity and cross-check code with logical block address |
US6233648B1 (en) * | 1997-12-26 | 2001-05-15 | Kabushiki Kaisha Toshiba | Disk storage system and data update method used therefor |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5533190A (en) * | 1994-12-21 | 1996-07-02 | At&T Global Information Solutions Company | Method for maintaining parity-data consistency in a disk array |
2003
- 2003-06-11 GB GB0313419A patent/GB2402803B/en not_active Expired - Lifetime

2004
- 2004-01-17 CN CNB2004100028291A patent/CN1324474C/en not_active Expired - Fee Related
- 2004-05-11 JP JP2004141188A patent/JP2005004733A/en active Pending
Non-Patent Citations (1)
Title |
---|
P Massiglia, "The RAID book", 1997, peer-to-peer.com, pages 102-103 * |
Also Published As
Publication number | Publication date |
---|---|
CN1573703A (en) | 2005-02-02 |
CN1324474C (en) | 2007-07-04 |
GB0313419D0 (en) | 2003-07-16 |
JP2005004733A (en) | 2005-01-06 |
GB2402803B (en) | 2006-06-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7464322B2 (en) | System and method for detecting write errors in a storage device | |
US10761766B2 (en) | Memory management system and method | |
US11941257B2 (en) | Method and apparatus for flexible RAID in SSD | |
JP3164499B2 (en) | A method for maintaining consistency of parity data in a disk array. | |
JP3129732B2 (en) | Storage array with copy-back cache | |
US7206991B2 (en) | Method, apparatus and program for migrating between striped storage and parity striped storage | |
US7984328B1 (en) | System and method for reducing unrecoverable media errors | |
US6898668B2 (en) | System and method for reorganizing data in a raid storage system | |
US11531590B2 (en) | Method and system for host-assisted data recovery assurance for data center storage device architectures | |
US7234024B1 (en) | Application-assisted recovery from data corruption in parity RAID storage using successive re-reads | |
US10114699B2 (en) | RAID consistency initialization method | |
US7240237B2 (en) | Method and system for high bandwidth fault tolerance in a storage subsystem | |
US7577804B2 (en) | Detecting data integrity | |
JP2010026812A (en) | Magnetic disk device | |
GB2402803A (en) | Arrangement and method for detection of write errors in a storage system | |
US10922025B2 (en) | Nonvolatile memory bad row management | |
JP3699797B2 (en) | Disk array device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
746 | Register noted 'licences of right' (sect. 46/1977) |
Effective date: 20090520 |
|
PE20 | Patent expired after termination of 20 years |
Expiry date: 20230610 |