US20050066230A1 - Data reliabilty bit storage qualifier and logical unit metadata - Google Patents
- Publication number
- US20050066230A1 (application US 10/669,196)
- Authority
- US
- United States
- Prior art keywords
- data
- information
- block
- bit
- drq
- Prior art date
- 2003-09-23
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1076—Parity data used in redundant arrays of independent storages, e.g. in RAID systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/008—Reliability or availability analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2211/00—Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
- G06F2211/10—Indexing scheme relating to G06F11/10
- G06F2211/1002—Indexing scheme relating to G06F11/1076
- G06F2211/104—Metadata, i.e. metadata associated with RAID systems with parity
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Techniques For Improving Reliability Of Storages (AREA)
Abstract
Description
- The present invention relates generally to storage systems. In particular, the present invention relates to RAID storage systems.
- Disc drives are typically used in a stand-alone fashion, such as in a personal computer (PC) configuration where a single disc drive is utilized as the primary data storage peripheral. However, in applications requiring vast amounts of data storage capacity, data reliability or high input/output (I/O) bandwidth, a plurality of drives can be arranged into a multi-drive array, sometimes referred to as RAID (“Redundant Array of Inexpensive (or Independent) Discs”).
- One impetus behind the development of such multi-drive arrays is the disparity between central processing unit (CPU) speeds and disc drive I/O speeds. An array of smaller, inexpensive drives functioning as a single storage device will usually provide improved operational performance over a single, expensive drive.
- The present invention includes embodiments that minimize storage space and storage time in storage systems. In particular, the present invention stores data with additional information. The additional information qualifies the reliability of the stored data. Preferably, the additional information is not combined with another type of information so that it directly indicates the quality of the stored data. If the stored data is maintained with redundancy, the additional information qualifying the data is also maintained with identical redundancy.
- These and various other features as well as advantages which characterize the present invention will be apparent upon reading of the following detailed description and review of the associated drawings.
- FIG. 1 is a block diagram of a storage system.
- FIG. 2 is a block diagram of the controller shown in FIG. 1.
- FIGS. 3A and 3B show one method of storing block metadata.
- FIGS. 4A and 4B show an embodiment of the present invention.
- FIGS. 5A-5C are used to explain a method of storing block metadata according to the present invention.
- FIGS. 6A and 6B are used to explain another method of storing block metadata according to the present invention.
- FIG. 7 shows another use of the present invention.
- While this invention is susceptible of embodiment in many different forms, there are shown in the drawings and are described below in detail specific embodiments of the invention with the understanding that the present disclosure is to be considered as an exemplification of the principles of the invention and is not to be limited to the specific embodiments described and shown.
- FIG. 1 is a block diagram of a storage system 100 that can incorporate the present invention. Storage system 100 includes a disc array 110 that has a controller 120 and a disc array group 130. The controller 120 communicates via a bus 145 to an operating environment, such as a host computer or an area network (local, wide and storage). Controller 120 provides an array management function that includes presenting disc array group 130 as one or more virtual discs to the operating environment. Controller 120 can provide additional functions, such as capacity management, host communication, cache management and other functions common for such a storage system. Controller 120 communicates with each of discs 1, 2 and 3 by respective buses 125A, 125B and 125C.
- FIG. 2 shows a block diagram of controller 120. A bus 205 couples a processor 200, a non-volatile memory 210, a DRAM 220 and a path controller 230. Path controller 230 is coupled to a cache memory 240, a device interface 250 and an operating environment interface 260 by respective buses 245, 255 and 265. Device interface 250 is coupled to disc array group 130 by buses 125A, 125B and 125C. Operating environment interface 260 is coupled to the operating environment by bus 145. Controller 120 can be implemented as a single integrated circuit. One alternative is for path controller 230, device interface 250 and operating environment interface 260 to be a single integrated circuit, with the other blocks being separate integrated circuits. The present invention is not limited to the physical implementation of controller 120, nor is it limited to the blocks shown in FIG. 2 or the interconnection shown. Another controller 120, or portions of it, can be used in storage system 100 to, among other reasons, provide additional redundancy.
- As background, data are stored on a storage device such as a disc drive. The data may become corrupted because of physical defects on the media of the storage device. The data may also be corrupted for reasons other than physically defective media. One example is when data has been lost from a "write back" cache and it is known which data was lost. Another example is when data cannot be reconstructed for an inoperative disc drive because the redundant copy is on physically defective media. In these cases, the data are not good or reliable even though the media where the data resides is not defective. Therefore, Data Reliability Qualifier (DRQ) bits are used to signal storage system 100 that the data are not reliable. The storage system can then force an error message to the operating environment when the data is accessed. For purposes of the present invention, "data" is used in a general sense to include actual user, system and operating environment data; system and operating environment information not generally available to a user; programs; etc.
- FIG. 3A illustrates a disc array group 300 that is used in a RAID configuration. The disc array group 300 includes devices 310, 320, 330 and 340. The devices are preferably disc drives. Devices 310, 320, 330 and 340 store data blocks 1-12 in a RAID 5 configuration as shown. Four RAID slivers are illustrated, one of which comprises data blocks 10, 11 and 12 together with parity block P4. The data blocks are stored in respective portions 314, 324, 334 and 344 of devices 310, 320, 330 and 340. Portions 318, 328, 338 and 348 store metadata information about the data blocks. In this scheme, portions 318, 328, 338 and 348 store so-called "Forced Error" ("FE") bits. These FE-bits are used to signify whether the data in the associated data blocks on the respective drives are unreliable. For example, an FE-bit in portion 318 of drive 310 is associated with data block 1.
- FIG. 3B shows an FE-bit table 350 that can be stored in the controller 120, specifically in cache memory 240 of FIG. 2. In operation, controller 120 will access FE-bit table 350 when the operating environment requests access to the drive array group 300. In this way, controller 120 will know whether the data in the requested data block is unreliable. If an FE-bit is set for an accessed data block, controller 120 will send an error message to the operating environment. When writing new data to a block designated as having unreliable data, controller 120 clears the corresponding FE-bit in FE-bit table 350, writes the data to the device and also writes the associated FE-bit stored on the device. However, storing the FE-bits independently on each device perturbs the use of storage space, particularly the distribution of parity and data in a RAID system with redundancy. Also, writing the data blocks and the FE-bits independently requires extra I/Os to the devices. Likewise, the FE-bit table 350 ultimately uses storage space on media or requires a system where power may never fail, and updating it independently requires additional overhead.
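- For contrast, the separate FE-bit table scheme just described can be sketched as follows. This is a minimal illustration in Python, not an implementation from the patent; FEBitTable and the device methods (read, write, write_fe_bit) are hypothetical names.

```python
class FEBitTable:
    """Illustrative sketch of the separate FE-bit table of FIG. 3B.

    'device' stands for a hypothetical object with read/write/write_fe_bit
    methods; every access costs a table lookup, and every FE-bit clear
    costs an extra device I/O for the on-device FE-bit portion.
    """

    def __init__(self):
        self.fe_set = set()  # block numbers whose FE-bit is set

    def read(self, block_num, device):
        if block_num in self.fe_set:        # extra lookup on every read
            raise IOError(f"block {block_num} is marked unreliable")
        return device.read(block_num)

    def write(self, block_num, data, device):
        self.fe_set.discard(block_num)      # clear the table entry
        device.write(block_num, data)       # the data I/O
        device.write_fe_bit(block_num, 0)   # extra I/O for the FE-bit
```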
portions devices FIG. 3A . Referring toFIG. 4A , a storage scheme of the present invention is illustrated. Adata block 400 is shown that includes adata portion 410 and appendedinformation 420. An example ofappended information 420 is a “Data Integrity Field” (“DIF”) that includes a “Reference Tag” (“REF TAG”), usually a “Virtual Block Address” (“VBA)” or a “Logical Block Address” (“LBA”),portion 422, a “Metadata Tag” (“META TAG”), usually a “Logical Unit ID” with other possible “metadata” flags,portion 424, and a check sum (“CHECK SUM”)portion 426.Reference Tag portion 422 contains information that identifies the logical or virtual address fordata portion 410. Checksum portion 426 contains information that is used to detect errors indata portion 410.Metadata Tag portion 424 containsadditional portions Portion 424B can contain information about the device, such as a device identifier (“Logical Unit ID”).Portion 424A, according to the present invention, contains a “Data Reliability Qualifier” or DRQ-bit that qualifies not only the data indata portion 410 but all redundant copies of that data. The DRQ flag is logically appended to the contents of the data block and maintained with identical redundancy as the bits in the data portion. It should be viewed as a copy of “logical metadata” in the same sense as the data portion is considered a copy, with possible redundancy, of a “logical block” of a “logical unit” created using any of the techniques known as “virtualization”.Portion 424A can contain additional metadata bits that qualify the data. Some of these bits may also be “logical metadata” and maintained with identical redundancy to the data bits. Some of these bits may be “physical metadata” and apply only to the particular copy to which they are appended. For example,portion 424A can contain a “Parity” flag bit, set to “0” (or “FALSE”) for data blocks 400, that indicates that the block in question contains some form of parity for other user data blocks. -
- FIG. 4B shows a storage scheme for the parity data according to the present invention. A parity data block 450 includes a parity data portion 460 and appended information 470. Appended information 470 includes a "Reference Tag" ("REF TAG") portion 472, usually a "Parity Virtual Block Address" ("Parity VBA"); a "Metadata Tag" ("META TAG") portion 474, usually a "Logical Unit ID" with other possible "metadata" flags; and a checksum ("CHECK SUM") portion 476. Reference Tag portion 472, when qualified by a "Parity" flag in Metadata Tag portion 474, contains information that identifies the so-called "sliver" for which parity data portion 460 provides redundancy. In particular, the "Parity Virtual Block Address" in the DIF of the parity block may specify the "Virtual Block Address" ("VBA") of the data block with the lowest such address in the RAID "sliver" (where address in this context means address in the virtual unit). Checksum portion 476 contains information that is used to detect errors in parity data portion 460. Metadata Tag portion 474 contains additional portions 474A and 474B. Portion 474B can contain information about the device, such as a device identifier ("Logical Unit ID"). Portion 474A, according to the present invention, can contain a bit that is a function of the other DRQ-bits in portions 424A. The DRQ parity bit can be generated by an exclusive-OR of all the data blocks' DRQ-bits. To illustrate, the 1-bit portions 424A can be exclusive-ORed together to generate the single DRQ parity bit that will be saved in portion 474A. Generally, then, the DRQ parity bit is created as a function of the DRQ bits in portions 424A of the same sliver.
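- The DRQ parity computation described above reduces to a byte-wise and bit-wise exclusive-OR. The following is a minimal sketch assuming one DRQ bit per block and equal-length data portions; the function names are invented for the example.

```python
from functools import reduce

def xor_bytes(blocks):
    """Byte-wise XOR of equal-length byte strings, as in RAID 5 parity."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def sliver_parity(data_blocks, drq_bits):
    """Compute the parity data portion and the DRQ parity bit of a sliver.

    The DRQ parity bit (portion 474A) is the XOR of the data blocks'
    DRQ bits (portions 424A), so the reliability flag carries exactly
    the same redundancy as the data it qualifies.
    """
    return xor_bytes(data_blocks), reduce(lambda a, b: a ^ b, drq_bits)

# Example: blocks 10, 11, 12 with block 10 flagged unreliable (DRQ=1).
p4_data, p4_drq = sliver_parity(
    [b"\x10" * 4, b"\x11" * 4, b"\x12" * 4], [1, 0, 0])
assert p4_drq == 1
```

Because the same XOR is applied to the DRQ bits as to the data bytes, losing any single member of the sliver loses neither the data nor its reliability qualifier.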
- As is apparent, the present invention has several advantages over the scheme described in FIGS. 3A and 3B. First, the additional accessing of a device to write FE-bit information is not required, since the separate FE-bit portions 318, 328, 338 and 348 are eliminated. Second, the need to store the FE-bit table is eliminated. Since FE-bit table maintenance can consume a substantial amount of processing overhead, such elimination will save critical-path CPU cycles. Also, considering that the DRQ bit is automatically retrieved whenever the data is, there is no real performance degradation in checking whether it is set, which it usually is not.
- One use of the present invention will be described with reference to FIGS. 5A-5C. In FIG. 5A, data block 10 is illustrated as unreliable. According to the present invention, the associated DRQ bit is set, exclusive-ORed with the other DRQ bits stored in portions 424A, and the result stored in portion 474A of the parity block P4 stored on device 510. When data block 10 is subsequently read, the DRQ bit set in its portion 424A can be used to indicate its unreliability. Any attempt to reconstruct data block 10 will also reconstruct the DRQ bit in portion 424A, since the DRQ parity bit in portion 474A, together with the DRQ bits in the other data blocks' portions 424A, allows this reconstruction using the standard exclusive-OR mechanism.
- FIG. 5B shows the situation where device 520, which stores block 10, is "missing." In that case, a regeneration of data block 10 can be performed, but the fact that the data is unreliable will be retained via the regeneration of the associated DRQ bit, because the DRQ parity bit information in portion 474A of the parity block P4, when combined with the DRQ bits of the other data block portions 424A, indicates that the data of data block 10 is unreliable. That is, the regenerated DRQ bit for data block 10 will be "1" (or "TRUE").
- FIG. 5C shows another situation in which the present invention is particularly useful. In that figure, data block 10 is shown as unreliable as well as "missing" (as are all blocks on device 520), and data block 12 is also shown as unreliable. If an attempt to regenerate data block 10 is made, the regeneration will succeed, but the regenerated data will still be shown as unreliable: the parity DRQ bit in portion 474A of parity data block P4, when combined with the other DRQ bits in portions 424A, including the DRQ bit showing data block 12 as unreliable, will produce a DRQ bit for data block 10 that is "1" ("TRUE"). Like that of data block 10, the DRQ bit associated with data block 12 is saved to portion 424A of data block 12 and combined with the other DRQ bits of the data block portions 424A to produce the parity DRQ bit in portion 474A of parity data block P4.
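- The regeneration paths of FIGS. 5B and 5C can be sketched in the same style; here the missing block's data and its DRQ bit are rebuilt together by the standard exclusive-OR mechanism (helper and variable names are again invented):

```python
from functools import reduce

def xor_bytes(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def regenerate(peer_data, peer_drqs, parity_data, parity_drq):
    """Rebuild a missing block of a sliver from its peers and parity.

    The DRQ bit is regenerated by the same XOR mechanism as the data,
    so a block flagged unreliable before its device went missing is
    still flagged unreliable afterwards (FIGS. 5B and 5C).
    """
    data = xor_bytes(list(peer_data) + [parity_data])
    drq = reduce(lambda a, b: a ^ b, list(peer_drqs) + [parity_drq])
    return data, drq

# Block 10 (DRQ=1) is missing; blocks 11 and 12 and parity survive.
b10, b11, b12 = b"\x10" * 4, b"\x11" * 4, b"\x12" * 4
p4 = xor_bytes([b10, b11, b12])
p4_drq = 1 ^ 0 ^ 0
data, drq = regenerate([b11, b12], [0, 0], p4, p4_drq)
assert data == b10 and drq == 1   # data recovered, still unreliable
```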
- Another use of the present invention will be explained with reference to FIGS. 6A and 6B. FIG. 6A exemplifies when a device 620 is inoperative or "missing" in a disk array group 600. If a read request is made that resolves to device 620, the storage system controller receives data blocks P4, 11 and 12 from respective devices 610, 630 and 640. The controller will perform error detection on each block to ensure that the data is "good" (reliable from the point of view of the drive). If the data is "good", the storage system controller will exclusive-OR the parity data P4 in device 610 with data blocks 11 and 12 in respective devices 630 and 640. The result will be the regeneration of data block 10 that was stored on device 620. For a write to inoperative device 620, data blocks 11 and 12 in respective devices 630 and 640 will be exclusive-ORed with the new data. The result is new parity data that will be saved in the location of parity data block P4 in device 610. The new parity data block will have associated information that includes a parity DRQ bit that is the exclusive-OR of the DRQ bits associated with data blocks 11 and 12 and the DRQ bit for data block 10 itself, which may or may not be "0" at the discretion of the issuer of the write.
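- The write path to the inoperative device described above can be sketched as follows, under the same illustrative assumptions; the new data is absorbed into the parity rather than written to a device of its own:

```python
from functools import reduce

def xor_bytes(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def write_to_missing(new_data, new_drq, peer_data, peer_drqs):
    """Absorb a write aimed at a missing device into the parity block.

    The new data never lands on a device of its own; it is "stored in
    the parity" as new_data XOR (surviving peers), and the parity DRQ
    bit is recomputed the same way from the peers' DRQ bits and the
    DRQ bit supplied with the write.
    """
    parity_data = xor_bytes([new_data] + list(peer_data))
    parity_drq = reduce(lambda a, b: a ^ b, [new_drq] + list(peer_drqs))
    return parity_data, parity_drq
```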
- FIG. 6B shows when device 620 is inoperative and data block 12 of device 640 is unreliable. As described above, if a read request is made that accesses inoperative device 620, the storage system controller receives data blocks P4, 11 and 12 from respective devices 610, 630 and 640. The controller will perform error detection on each block to ensure that the data is "good". If any of the data is not "good", then the controller informs the host environment that the read cannot be performed. Otherwise, data block 10 is regenerated, as is its associated DRQ bit. If the regenerated DRQ bit indicates the data is reliable, then the read can succeed. Note that the use of the data in data block 12 for regeneration is independent of the quality of that data at the "logical block" level: if the device declares it to be "good", then it can be used for regeneration, as shown in this case.
- With further reference to FIG. 6B, writing data will be explained. In the case where data is to be written to block 10 of missing device 620 and block 12 of device 640 is unreadable (not "good"), the data in block 10 cannot be "stored in the parity data" in block P4 of device 610, because block 12 is unreadable. That is, the data in block 10 would normally be "stored" by generating a new parity block that is the exclusive-OR of the data in block 10 that is being written and the data in blocks 11 and 12. Normally, this situation results in a block that cannot be written. The data in block 12, however, can be made "good" by writing it with either "best guess" data or some pattern. The DRQ bit in the associated information for block 12 will be set to "1" to remember that the data in block 12 is "unreliable". Now the data in block 10 can be "stored in the parity" because the data in block 12 has been "made good." The parity DRQ bit associated with parity block P4 will be generated using exclusive-OR from the new DRQ bit for data block 10, the existing DRQ bit for data block 11 and the set DRQ bit that represents the data in block 12 as unreliable.
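- The "make good" step just described can be sketched as follows; the all-zeros pattern and the 512-byte block size are assumptions for the example:

```python
def make_good(block_size, best_guess=None):
    """Make an unreadable block readable again, but flagged unreliable.

    The contents become a best guess or a fixed pattern (all zeros
    here); the DRQ bit is set to 1 so every copy of the block, and any
    parity derived from it, remembers that the contents are suspect.
    """
    data = best_guess if best_guess is not None else bytes(block_size)
    return data, 1

# FIG. 6B write case: block 12 is unreadable, so it is "made good";
# the parity DRQ bit is then the XOR of the DRQ bits of blocks 10,
# 11 and the now-set DRQ bit of block 12.
block12_data, block12_drq = make_good(512)
parity_drq = 0 ^ 0 ^ block12_drq
assert parity_drq == 1
```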
- FIG. 7 shows another use of the present invention. Shown is an array 700 that includes devices 710, 720, 730 and 740 configured as RAID 0. In other words, the data is striped but there is no parity; as such, the data is not recoverable. In the case where data block 14 (shown as the striped-out data block in device 720) is unreadable (not "good"), the data block is made readable again by writing either a "best guess" of the data in data block 14 or a pattern, such as all zeros. However, the data in data block 14 cannot be trusted and is, therefore, "unreliable," so the associated DRQ bit is set to indicate that the data in data block 14 is not trustworthy. The "Data Reliability Qualifier" should be understood as "logical metadata" associated with "logical blocks" even for RAID 0, where there is no redundancy.
- In another use of the present invention, when the controller receives a write long command from the host, the data does not pass through to a drive in the array. Instead, the command is converted to a regular write command, and the 'extra' bytes (one or two, depending upon what is supported) are stripped off. The extra bytes are treated in a binary fashion: if they are zero, the DRQ bit is assumed to be "0"; if they are non-zero, the DRQ bit is assumed to be "1", or set. Preferably, there is no effect on the actual sector ECC in this implementation.
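- The write long conversion just described might be sketched as follows; the payload framing and sector size are assumptions, since the description only specifies that the one or two 'extra' bytes are stripped off and tested for zero versus non-zero:

```python
def convert_write_long(payload, sector_size=512):
    """Convert a WRITE LONG payload into a regular write plus a DRQ bit.

    The trailing 'extra' bytes (one or two, per the description) are
    stripped off and interpreted only as zero/non-zero; they never
    reach the drive, and the sector ECC is left untouched.
    """
    data, extra = payload[:sector_size], payload[sector_size:]
    drq = 1 if any(extra) else 0
    return data, drq

# A payload whose extra byte is non-zero yields a set DRQ bit.
data, drq = convert_write_long(bytes(512) + b"\x01")
assert len(data) == 512 and drq == 1
```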
- One aspect of the present invention is the elimination of the separate FE-bit table lookup and the I/Os needed to determine the reliability of a particular piece of data, achieved by embedding the DRQ information with the data. The equivalent of the FE-bit table information still exists, but in a different form: it is distributed or embedded with the data, and its redundancy and distribution are the same as those of the data. This minimizes the performance overhead associated with determining data reliability. It also eliminates the storage mapping complexity (both on disk and in controller memory) associated with a separate FE-bit table when compared to other DIF-enabled systems. Another aspect of the present invention is that the DRQ bit has the same redundancy as the data, achieved by using the same parity algorithm on the DRQ bit as on the data.
- Several variations and modifications exist for the present invention. Although the preferred embodiment described herein is directed to a disk RAID storage system, it will be appreciated by those skilled in the art that the teachings of the present invention can be applied to other systems. For example, the storage devices can be magnetic, optical, tape, solid state, a SAN or JBOD, or a combination of two or more of them. Further, the present invention may be implemented in hardware, software or a combination of both. Also, a storage area includes without limitation a portion of a single surface of a storage medium, a surface of the storage medium, the entire storage medium, a device containing at least one storage medium and a system containing at least one device.
- And although the DRQ bits are disclosed as part of a "Data Integrity Field," the DRQ bit does not have to be carried that way. The DRQ bit can simply be appended (or prepended) to the data, or be part of other data appended to the data. This eliminates the case where the data reliability information becomes unavailable while the data is available (which could certainly happen with a separate FE-bit table), leaving no way to determine which data is reliable and which is not. With this invention, if the data is available, then the data reliability information is available, and the data's reliability can always be determined. Generally, the present invention accompanies data with reliability information, such as (without limitation) by appending or embedding it. The mechanism proposed for the specific "Data Reliability Qualifier" (DRQ) can be extended to incorporate other "logical metadata" that qualifies the data with regard to other aspects.
- It is to be understood that even though numerous characteristics and advantages of various embodiments of the invention have been set forth in the foregoing description, together with details of the structure and function of various embodiments of the invention, this disclosure is illustrative only, and changes may be made in detail, especially in matters of structure and arrangement of parts and values for the described variables, within the principles of the present invention to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed.
Claims (21)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/669,196 US20050066230A1 (en) | 2003-09-23 | 2003-09-23 | Data reliabilty bit storage qualifier and logical unit metadata |
US12/626,183 US8112679B2 (en) | 2003-09-23 | 2009-11-25 | Data reliability bit storage qualifier and logical unit metadata |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/669,196 US20050066230A1 (en) | 2003-09-23 | 2003-09-23 | Data reliabilty bit storage qualifier and logical unit metadata |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/626,183 Continuation US8112679B2 (en) | 2003-09-23 | 2009-11-25 | Data reliability bit storage qualifier and logical unit metadata |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050066230A1 true US20050066230A1 (en) | 2005-03-24 |
Family
ID=34313675
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/669,196 Abandoned US20050066230A1 (en) | 2003-09-23 | 2003-09-23 | Data reliabilty bit storage qualifier and logical unit metadata |
US12/626,183 Expired - Fee Related US8112679B2 (en) | 2003-09-23 | 2009-11-25 | Data reliability bit storage qualifier and logical unit metadata |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/626,183 Expired - Fee Related US8112679B2 (en) | 2003-09-23 | 2009-11-25 | Data reliability bit storage qualifier and logical unit metadata |
Country Status (1)
Country | Link |
---|---|
US (2) | US20050066230A1 (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060075281A1 (en) * | 2004-09-27 | 2006-04-06 | Kimmel Jeffrey S | Use of application-level context information to detect corrupted data in a storage system |
US20060074960A1 (en) * | 2004-09-20 | 2006-04-06 | Goldschmidt Marc A | Providing data integrity for data streams |
US20070089023A1 (en) * | 2005-09-30 | 2007-04-19 | Sigmatel, Inc. | System and method for system resource access |
US20080082865A1 (en) * | 2006-09-29 | 2008-04-03 | Kabushiki Kaisha Toshiba | Information recording apparatus, information processing apparatus, and write control method |
US7549089B1 (en) | 2004-09-27 | 2009-06-16 | Network Appliance, Inc. | Lost write detection in a storage redundancy layer of a storage server |
US20100131706A1 (en) * | 2003-09-23 | 2010-05-27 | Seagate Technology, Llc | Data reliability bit storage qualifier and logical unit metadata |
WO2010137067A1 (en) * | 2009-05-27 | 2010-12-02 | Hitachi, Ltd. | Storage system, control method therefor, and program |
JP2010282628A (en) * | 2009-06-08 | 2010-12-16 | Lsi Corp | Method and device for protecting maintainability of data cached in direct attached storage (das) system |
WO2014101872A1 (en) * | 2012-12-31 | 2014-07-03 | Huawei Technologies Co., Ltd. | Efficient high availability storage systems |
US8972799B1 (en) | 2012-03-29 | 2015-03-03 | Amazon Technologies, Inc. | Variable drive diagnostics |
US9037921B1 (en) * | 2012-03-29 | 2015-05-19 | Amazon Technologies, Inc. | Variable drive health determination and data placement |
US9754337B2 (en) | 2012-03-29 | 2017-09-05 | Amazon Technologies, Inc. | Server-side, variable drive health determination |
US9792192B1 (en) | 2012-03-29 | 2017-10-17 | Amazon Technologies, Inc. | Client-side, variable drive health determination |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8595595B1 (en) * | 2010-12-27 | 2013-11-26 | Netapp, Inc. | Identifying lost write errors in a raid array |
US8458145B2 (en) * | 2011-01-20 | 2013-06-04 | Infinidat Ltd. | System and method of storage optimization |
US8495469B2 (en) | 2011-05-16 | 2013-07-23 | International Business Machines Corporation | Implementing enhanced IO data conversion with protection information model including parity format of data integrity fields |
US20130198585A1 (en) * | 2012-02-01 | 2013-08-01 | Xyratex Technology Limited | Method of, and apparatus for, improved data integrity |
CN104765693B (en) * | 2014-01-06 | 2018-03-27 | 国际商业机器公司 | A kind of methods, devices and systems for data storage |
JP6318769B2 (en) * | 2014-03-28 | 2018-05-09 | 富士通株式会社 | Storage control device, control program, and control method |
CN109725831B (en) * | 2017-10-27 | 2022-06-10 | 伊姆西Ip控股有限责任公司 | Method, system and computer readable medium for managing storage system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4434487A (en) * | 1981-10-05 | 1984-02-28 | Digital Equipment Corporation | Disk format for secondary storage system |
US5774643A (en) * | 1995-10-13 | 1998-06-30 | Digital Equipment Corporation | Enhanced raid write hole protection and recovery |
US5826001A (en) * | 1995-10-13 | 1998-10-20 | Digital Equipment Corporation | Reconstructing data blocks in a raid array data storage system having storage device metadata and raid set metadata |
US5933592A (en) * | 1995-10-13 | 1999-08-03 | Digital Equipment Corporation | Promoting device level error to raidset level error to restore redundacy in a raid array data storage system |
US6161192A (en) * | 1995-10-13 | 2000-12-12 | Compaq Computer Corporation | Raid array data storage system with storage device consistency bits and raidset consistency bits |
US6658590B1 (en) * | 2000-03-30 | 2003-12-02 | Hewlett-Packard Development Company, L.P. | Controller-based transaction logging system for data recovery in a storage area network |
US7020805B2 (en) * | 2002-08-15 | 2006-03-28 | Sun Microsystems, Inc. | Efficient mechanisms for detecting phantom write errors |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5379411A (en) * | 1991-11-15 | 1995-01-03 | Fujitsu Limited | Fault indication in a storage device array |
US6880060B2 (en) * | 2002-04-24 | 2005-04-12 | Sun Microsystems, Inc. | Method for storing metadata in a physical sector |
US7103811B2 (en) * | 2002-12-23 | 2006-09-05 | Sun Microsystems, Inc | Mechanisms for detecting silent errors in streaming media devices |
US7225395B2 (en) * | 2003-08-18 | 2007-05-29 | Lsi Corporation | Methods and systems for end-to-end data protection in a memory controller |
US20050066230A1 (en) * | 2003-09-23 | 2005-03-24 | Bean Robert George | Data reliabilty bit storage qualifier and logical unit metadata |
US7873878B2 (en) * | 2007-09-24 | 2011-01-18 | International Business Machines Corporation | Data integrity validation in storage systems |
- 2003-09-23 US US10/669,196 patent/US20050066230A1/en not_active Abandoned
- 2009-11-25 US US12/626,183 patent/US8112679B2/en not_active Expired - Fee Related
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4434487A (en) * | 1981-10-05 | 1984-02-28 | Digital Equipment Corporation | Disk format for secondary storage system |
US5774643A (en) * | 1995-10-13 | 1998-06-30 | Digital Equipment Corporation | Enhanced raid write hole protection and recovery |
US5826001A (en) * | 1995-10-13 | 1998-10-20 | Digital Equipment Corporation | Reconstructing data blocks in a raid array data storage system having storage device metadata and raid set metadata |
US5933592A (en) * | 1995-10-13 | 1999-08-03 | Digital Equipment Corporation | Promoting device level error to raidset level error to restore redundacy in a raid array data storage system |
US6161192A (en) * | 1995-10-13 | 2000-12-12 | Compaq Computer Corporation | Raid array data storage system with storage device consistency bits and raidset consistency bits |
US6658590B1 (en) * | 2000-03-30 | 2003-12-02 | Hewlett-Packard Development Company, L.P. | Controller-based transaction logging system for data recovery in a storage area network |
US7020805B2 (en) * | 2002-08-15 | 2006-03-28 | Sun Microsystems, Inc. | Efficient mechanisms for detecting phantom write errors |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100131706A1 (en) * | 2003-09-23 | 2010-05-27 | Seagate Technology, Llc | Data reliability bit storage qualifier and logical unit metadata |
US8112679B2 (en) * | 2003-09-23 | 2012-02-07 | Seagate Technology Llc | Data reliability bit storage qualifier and logical unit metadata |
US20060074960A1 (en) * | 2004-09-20 | 2006-04-06 | Goldschmidt Marc A | Providing data integrity for data streams |
US7340672B2 (en) * | 2004-09-20 | 2008-03-04 | Intel Corporation | Providing data integrity for data streams |
US20060075281A1 (en) * | 2004-09-27 | 2006-04-06 | Kimmel Jeffrey S | Use of application-level context information to detect corrupted data in a storage system |
US7549089B1 (en) | 2004-09-27 | 2009-06-16 | Network Appliance, Inc. | Lost write detection in a storage redundancy layer of a storage server |
US20070089023A1 (en) * | 2005-09-30 | 2007-04-19 | Sigmatel, Inc. | System and method for system resource access |
US20080082865A1 (en) * | 2006-09-29 | 2008-04-03 | Kabushiki Kaisha Toshiba | Information recording apparatus, information processing apparatus, and write control method |
WO2010137067A1 (en) * | 2009-05-27 | 2010-12-02 | Hitachi, Ltd. | Storage system, control method therefor, and program |
US8713251B2 (en) | 2009-05-27 | 2014-04-29 | Hitachi, Ltd. | Storage system, control method therefor, and program |
EP2264607A3 (en) * | 2009-06-08 | 2011-07-20 | LSI Corporation | Method and apparatus for protecting the integrity of cached data in a direct-attached storage (DAS) system |
CN101982816A (en) * | 2009-06-08 | 2011-03-02 | Lsi公司 | Method and apparatus for protecting the integrity of cached data |
JP2010282628A (en) * | 2009-06-08 | 2010-12-16 | Lsi Corp | Method and device for protecting maintainability of data cached in direct attached storage (das) system |
TWI451257B (en) * | 2009-06-08 | 2014-09-01 | Lsi Corp | Method and apparatus for protecting the integrity of cached data in a direct-attached storage (das) system |
US8972799B1 (en) | 2012-03-29 | 2015-03-03 | Amazon Technologies, Inc. | Variable drive diagnostics |
US9037921B1 (en) * | 2012-03-29 | 2015-05-19 | Amazon Technologies, Inc. | Variable drive health determination and data placement |
US20150234716A1 (en) * | 2012-03-29 | 2015-08-20 | Amazon Technologies, Inc. | Variable drive health determination and data placement |
US9754337B2 (en) | 2012-03-29 | 2017-09-05 | Amazon Technologies, Inc. | Server-side, variable drive health determination |
US9792192B1 (en) | 2012-03-29 | 2017-10-17 | Amazon Technologies, Inc. | Client-side, variable drive health determination |
US10204017B2 (en) * | 2012-03-29 | 2019-02-12 | Amazon Technologies, Inc. | Variable drive health determination and data placement |
US10861117B2 (en) | 2012-03-29 | 2020-12-08 | Amazon Technologies, Inc. | Server-side, variable drive health determination |
WO2014101872A1 (en) * | 2012-12-31 | 2014-07-03 | Huawei Technologies Co., Ltd. | Efficient high availability storage systems |
US9037679B2 (en) | 2012-12-31 | 2015-05-19 | Futurewei Technologies, Inc. | Efficient high availability storage systems |
Also Published As
Publication number | Publication date |
---|---|
US20100131706A1 (en) | 2010-05-27 |
US8112679B2 (en) | 2012-02-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8112679B2 (en) | Data reliability bit storage qualifier and logical unit metadata | |
US7661020B1 (en) | System and method for reducing unrecoverable media errors | |
US7774643B2 (en) | Method and apparatus for preventing permanent data loss due to single failure of a fault tolerant array | |
US5913927A (en) | Method and apparatus for management of faulty data in a raid system | |
US5951691A (en) | Method and system for detection and reconstruction of corrupted data in a data storage subsystem | |
JP4547357B2 (en) | Redundancy for stored data structures | |
US7234074B2 (en) | Multiple disk data storage system for reducing power consumption | |
KR100265146B1 (en) | Method and apparatus for treatment of deferred write data for a dead raid device | |
US7281089B2 (en) | System and method for reorganizing data in a raid storage system | |
US7350101B1 (en) | Simultaneous writing and reconstruction of a redundant array of independent limited performance storage devices | |
US7571291B2 (en) | Information processing system, primary storage device, and computer readable recording medium recorded thereon logical volume restoring program | |
US7921301B2 (en) | Method and apparatus for obscuring data on removable storage devices | |
JPH0642193B2 (en) | Update recording method and apparatus for DASD array | |
US7234024B1 (en) | Application-assisted recovery from data corruption in parity RAID storage using successive re-reads | |
US7523257B2 (en) | Method of managing raid level bad blocks in a networked storage system | |
US5933592A (en) | Promoting device level error to raidset level error to restore redundacy in a raid array data storage system | |
US8489946B2 (en) | Managing logically bad blocks in storage devices | |
US20030023933A1 (en) | End-to-end disk data checksumming | |
US7130973B1 (en) | Method and apparatus to restore data redundancy and utilize spare storage spaces | |
US7024585B2 (en) | Method, apparatus, and program for data mirroring with striped hotspare | |
GB2402770A (en) | Writing version checking data for a data file onto two data storage systems. | |
US20060041789A1 (en) | Storage system with journaling | |
US6785788B1 (en) | System and method for implementing an enhanced raid disk storage system | |
JP3711631B2 (en) | Disk array device for computer system | |
JP4288929B2 (en) | Data storage apparatus and data storage method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SEAGATE TECHNOLOGY LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BEAN, ROBERT GEORGE;LUBBERS, CLARK EDWARD;ROBERSON, RANDY L.;REEL/FRAME:014583/0368;SIGNING DATES FROM 20030919 TO 20030922 |
|
AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT Free format text: SECURITY AGREEMENT;ASSIGNORS:MAXTOR CORPORATION;SEAGATE TECHNOLOGY LLC;SEAGATE TECHNOLOGY INTERNATIONAL;REEL/FRAME:022757/0017 Effective date: 20090507 Owner name: WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATE Free format text: SECURITY AGREEMENT;ASSIGNORS:MAXTOR CORPORATION;SEAGATE TECHNOLOGY LLC;SEAGATE TECHNOLOGY INTERNATIONAL;REEL/FRAME:022757/0017 Effective date: 20090507 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: SEAGATE TECHNOLOGY HDD HOLDINGS, CALIFORNIA Free format text: RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:025662/0001 Effective date: 20110114 Owner name: MAXTOR CORPORATION, CALIFORNIA Free format text: RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:025662/0001 Effective date: 20110114 Owner name: SEAGATE TECHNOLOGY INTERNATIONAL, CALIFORNIA Free format text: RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:025662/0001 Effective date: 20110114 Owner name: SEAGATE TECHNOLOGY LLC, CALIFORNIA Free format text: RELEASE;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:025662/0001 Effective date: 20110114 |
|
AS | Assignment |
Owner name: SEAGATE TECHNOLOGY INTERNATIONAL, CAYMAN ISLANDS Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT AND SECOND PRIORITY REPRESENTATIVE;REEL/FRAME:030833/0001 Effective date: 20130312 Owner name: EVAULT INC. (F/K/A I365 INC.), CALIFORNIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT AND SECOND PRIORITY REPRESENTATIVE;REEL/FRAME:030833/0001 Effective date: 20130312 Owner name: SEAGATE TECHNOLOGY LLC, CALIFORNIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT AND SECOND PRIORITY REPRESENTATIVE;REEL/FRAME:030833/0001 Effective date: 20130312 Owner name: SEAGATE TECHNOLOGY US HOLDINGS, INC., CALIFORNIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT AND SECOND PRIORITY REPRESENTATIVE;REEL/FRAME:030833/0001 Effective date: 20130312 |