US20020194529A1 - Resynchronization of mirrored storage devices - Google Patents
Info
- Publication number
- US20020194529A1 (application US10/154,414)
- Authority
- US
- United States
- Prior art keywords
- storage
- storage device
- data
- usage information
- resynchronizing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F11/2082—Data synchronisation (G06F11/20: error detection or correction of the data by redundancy in hardware using active fault-masking, where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring)
- G06F11/2064—Mirroring while ensuring consistency (G06F11/20: error detection or correction of the data by redundancy in hardware using active fault-masking, where persistent mass storage functionality or persistent mass storage control functionality is redundant)
- G06F11/1471—Saving, restoring, recovering or retrying involving logging of persistent data for recovery (G06F11/14: error detection or correction of the data by redundancy in operation)
- Y10S707/99953—Recoverability (Y10S707/99951: file or database maintenance; Y10S707/99952: coherency)
- Y10S707/99955—Archiving or backup (Y10S707/99951: file or database maintenance; Y10S707/99952: coherency)
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
In one embodiment, a first storage device and a second storage device form a mirror. When the first storage device loses synchronization with the second storage device, data present in the second storage device but not in the first storage device are identified. The identified data are then copied to the first storage device.
Description
- This application is a continuation-in-part of U.S. application Ser. No. 09/684,487 (Atty. Docket No. 103.1031/P00-1031), filed on Oct. 4, 2000 by Srinivasan Viswanathan and Steven R. Kleiman, entitled “Recovery of File System Data in File Servers Mirrored File System Volumes”. The just-mentioned U.S. application is incorporated herein by reference in its entirety.
- 1. Field Of The Invention
- The present invention relates generally to computer systems, and more particularly but not exclusively to file systems and storage devices.
- 2. Description Of The Background Art
- Storage devices are employed to store data that are accessed by computer systems. Examples of storage devices include volatile and non-volatile memory, floppy drives, hard disk drives, tape drives, optical drives, etc. A storage device may be locally attached to an input/output (I/O) channel of a computer. For example, a hard disk drive may be connected to a computer's disk controller. A storage device may also be accessible over a network. Examples of such a storage device include network attached storage (NAS) and storage area network (SAN) devices. A storage device may be a single stand-alone component or be comprised of a system of storage devices such as in the case of Redundant Array Of Inexpensive Disks (RAID) groups and some Direct Access Storage Devices (DASD).
- For mission-critical applications requiring high availability of stored data, various techniques for enhancing data reliability are typically employed. One such technique is to provide a “mirror” for each storage device. In a mirror arrangement, data are written to at least two storage devices. Thus, data may be read from either of the two storage devices so long as the two devices are operational and contain the same data. That is, either of the two storage devices may process read requests so long as the two devices are in synchronization.
- When one of the storage devices fails, its mirror may be used to continue processing read and write requests. However, this also means that the failing storage device will be out of synchronization with its mirror. To avoid losing data in the event the mirror also fails, it is desirable to resynchronize the two storage devices as soon as the failing storage device becomes operational. Unfortunately, prior techniques for resynchronizing mirrored storage devices take a long time and consume a relatively large amount of processing time and I/O bandwidth. These drawbacks not only increase the probability of data loss, but also result in performance degradation.
- In one embodiment, a first storage device and a second storage device form a mirrored pair of storage devices. When the first storage device loses synchronization with the second storage device, data present in the second storage device but not in the first storage device are identified. The identified data are then copied to the first storage device.
- In one embodiment, a method of resynchronizing mirrored storage devices includes the act of creating a first storage usage information when both storage devices are accessible. When one of the storage devices goes down and then comes back up, a second storage usage information is created. A difference between the first storage usage information and the second storage usage information is determined and then used to resynchronize the previously down storage device with its mirror.
- These and other features of the present invention will be readily apparent to persons of ordinary skill in the art upon reading the entirety of this disclosure, which includes the accompanying drawings and claims.
- FIG. 1 shows a schematic diagram of an example file layout.
- FIGS. 2A-2D show schematic diagrams of inode files in the file layout of FIG. 1.
- FIGS. 3A-3C show schematic diagrams illustrating the creation of a snapshot in the file layout of FIG. 1.
- FIG. 4 shows a schematic diagram of a computing environment in accordance with an embodiment of the present invention.
- FIG. 5 shows a logical diagram illustrating the relationship between a file system, a storage device manager, and a storage system in accordance with an embodiment of the present invention.
- FIG. 6 shows a state diagram of a mirror in accordance with an embodiment of the present invention.
- FIG. 7 shows a flow diagram of a method of resynchronizing a mirrored storage device in accordance with an embodiment of the present invention.
- FIGS. 8A and 8B show schematic diagrams further illustrating an action in the flow diagram of FIG. 7.
- The use of the same reference label in different drawings indicates the same or like components.
- In the present disclosure, numerous specific details are provided, such as examples of systems, components, and methods to provide a thorough understanding of embodiments of the invention. Persons of ordinary skill in the art will recognize, however, that the invention can be practiced without one or more of the specific details. In other instances, well-known details are not shown or described to avoid obscuring aspects of the invention.
- Referring now to FIG. 1, there is shown a schematic diagram of an example file layout 150. File layout 150 may be adopted by a file system to organize files. Similar file layouts are also disclosed in the following commonly-assigned disclosures, which are incorporated herein by reference in their entirety: (a) U.S. Pat. No. 6,289,356, filed on Sep. 14, 1998; (b) U.S. Pat. No. 5,963,962, filed on Jun. 30, 1998; and (c) U.S. Pat. No. 5,819,292, filed on May 31, 1995. It should be understood, however, that the present invention may also be adapted for use with other file layouts.
- As shown in FIG. 1, file layout 150 has a tree structure with a root inode 100 as a base. Root inode 100 includes multiple blocks for describing one or more inode files 110 (i.e., 110A, 110B, . . . ). Each inode file 110 contains information about a file in file layout 150. A file may comprise one or more blocks of data, with each block being a storage location in a storage device.
- As will be explained below, an inode file 110 may contain data or point to blocks containing data. Thus, a file may be accessed by consulting root inode 100 to find the inode file 110 that contains or points to the file's data. Using FIG. 1 as an example, data file 122 is stored in one or more blocks pointed to by inode 110B; inode 110B is in turn identified by root inode 100.
- File layout 150 also includes a block map file 120 and an inode map file 121. Block map file 120 identifies free (i.e., unused) blocks, while inode map file 121 identifies free inodes. Block map file 120 and inode map file 121 may be accessed just like any other file in file layout 150. In other words, block map file 120 and inode map file 121 may be stored in blocks pointed to by an inode file 110, which is identified by root inode 100.
- In one embodiment, root inode 100 is stored in a predetermined location in a storage device. This facilitates finding root inode 100 upon system boot-up. Because block map file 120, inode map file 121, and inode files 110 may be found by consulting root inode 100 as described above, they may be stored anywhere in the storage device.
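- The tree-of-blocks organization can be made concrete with a short sketch. The Python below is illustrative only and is not the patent's implementation: the `Inode` and `RootInode` classes and the name-keyed dictionary are hypothetical simplifications (an actual layout would index inodes by number and store the maps in ordinary blocks).

```python
from dataclasses import dataclass, field
from typing import Dict, List, Union

@dataclass
class Inode:
    """Per-file metadata plus slots that hold either raw data or block pointers."""
    size: int
    owner: str
    permissions: int
    blocks: List[Union[bytes, int]] = field(default_factory=list)

@dataclass
class RootInode:
    """Base of the tree; every file hangs off it, including the block/inode maps."""
    inodes: Dict[str, Inode] = field(default_factory=dict)

root = RootInode()
# The block map and inode map are ordinary files reachable from the root inode,
# so only the root inode itself needs a fixed, well-known location on disk.
root.inodes["block_map"] = Inode(size=0, owner="fs", permissions=0o600)
root.inodes["inode_map"] = Inode(size=0, owner="fs", permissions=0o600)
```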
- Referring to FIG. 2A, there is shown a schematic diagram of an inode file 110 identified by a root inode 100. An inode file 110 includes a block 111 for storing general inode information such as a file's size, owner, permissions, etc. An inode file 110 also includes one or more blocks 112 (i.e., 112A, 112B, . . . ). Depending on the size of the file, blocks 112 may contain the file's data or pointers to the file's data. In the example of FIG. 2A, the file is small enough to fit all of its data in blocks 112.
- In one embodiment, an inode file 110 includes 16 blocks 112, with each block 112 accommodating 4 bytes (i.e., 32 bits). Thus, in the just-mentioned embodiment, files having a size of 64 bytes (i.e., 4 bytes × 16) or less may be stored directly in an inode file 110.
- FIG. 2B shows a schematic diagram of an inode file 110 that contains pointers in its blocks 112. In the example of FIG. 2B, a pointer in a block 112 points to a data block 210 (i.e., 210A, 210B, . . . ) containing data. This allows an inode file 110 to accommodate files that are too large to fit in the inode file itself. In one embodiment, each of 16 blocks 112 may point to a 4 KB (kilobyte) data block 210. Thus, in the just-mentioned embodiment, an inode file 110 may accommodate files having a size of 64 KB (i.e., 16 × 4 KB) or less.
- FIG. 2C shows a schematic diagram of another inode file 110 that contains pointers in its blocks 112. Each of the blocks 112 points to indirect blocks 220 (i.e., 220A, 220B, . . . ), each of which has blocks that point to a data block 230 (i.e., 230A, 230B, . . . ) containing data. Pointing to an indirect block 220 allows an inode file 110 to accommodate larger files. In one embodiment, an inode file 110 has 16 blocks 112 that each point to an indirect block 220; each indirect block 220 in turn has 1024 blocks that each point to a 4 KB data block 230. Thus, in the just-mentioned embodiment, an inode file 110 may accommodate files having a size of 64 MB (megabytes) (i.e., 16 × 1024 × 4 KB) or less.
- As can be appreciated, an inode file 110 may have several levels of indirection to accommodate even larger files. For example, FIG. 2D shows a schematic diagram of an inode file 110 that points to double indirect blocks 240 (i.e., 240A, 240B, . . . ), which point to single indirect blocks 250 (i.e., 250A, 250B, . . . ), which in turn point to data blocks 260 (i.e., 260A, 260B, . . . ). In one embodiment, an inode file 110 has 16 blocks 112 that each point to a double indirect block 240 containing 1024 blocks; each block in a double indirect block 240 points to a single indirect block 250 that contains 1024 blocks; each block in a single indirect block 250 points to a 4 KB data block 260. Thus, in the just-mentioned embodiment, an inode file 110 may accommodate files having a size of 64 GB (gigabytes) (i.e., 16 × 1024 × 1024 × 4 KB) or less.
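- The capacity at each level of indirection follows directly from this geometry (16 slots per inode, 4 KB data blocks, 1024 pointers per indirect block). A quick arithmetic check of the figures quoted above:

```python
# Block geometry from the embodiment described above.
SLOTS, PTRS, BLOCK = 16, 1024, 4 * 1024

capacities = {
    "inline (FIG. 2A)":          SLOTS * 4,                    # 64 bytes
    "direct (FIG. 2B)":          SLOTS * BLOCK,                # 64 KB
    "single indirect (FIG. 2C)": SLOTS * PTRS * BLOCK,         # 64 MB
    "double indirect (FIG. 2D)": SLOTS * PTRS * PTRS * BLOCK,  # 64 GB
}
for level, size in capacities.items():
    print(f"{level}: {size:,} bytes")
```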
- Referring now to FIG. 3A, there is shown a schematic diagram of a root inode 100 with one or more branches 310 (i.e., 310A, 310B, . . . ). FIG. 3A and the following FIGS. 3B and 3C do not show the details of each branch from a root inode 100 for clarity of illustration. Each branch 310 may include an inode file plus one or more levels of indirection to data blocks, if any.
- FIG. 3B shows a schematic diagram of a snapshot 300 created by copying a root inode 100. It is to be noted that “Snapshot” is a trademark of Network Appliance, Inc. It is used for purposes of this disclosure to designate a persistent consistency point (CP) image. A persistent consistency point image (PCPI) is a point-in-time representation of the storage system, and more particularly, of the active file system, stored on a storage device (e.g., on disk) or in other persistent memory and having a name or other unique identifier that distinguishes it from other PCPIs taken at other points in time. A PCPI can also include other information (metadata) about the active file system at the particular point in time for which the image is taken. The terms “PCPI” and “snapshot” shall be used interchangeably throughout this disclosure without derogation of Network Appliance's trademark rights.
- A snapshot 300, being a copy of a root inode 100, identifies all blocks identified by the root inode 100 at the time snapshot 300 was created. Because a snapshot 300 identifies but does not copy branches 310, a snapshot 300 does not consume a large amount of storage space. Generally speaking, a snapshot 300 provides storage usage information at a given moment in time.
- FIG. 3C shows a schematic diagram illustrating what happens when data in a branch 310 are modified by a write command. In one embodiment, writes may only be performed on unused blocks. That is, a used block is not overwritten when its data are modified; instead, an unused block is allocated to contain the modified data. Using FIG. 3C as an example, modifying data in branch 310E results in the creation of a new branch 311 containing the modified data. Branch 311 is created on new, unused blocks. The old branch 310E remains in the storage device and is still identified by snapshot 300. Root inode 100, on the other hand, breaks its pointer to branch 310E and now points to the new branch 311. Because branch 310E is still identified by snapshot 300, its data blocks may be readily recovered if desired.
- As data identified by root inode 100 are modified, the number of retained old blocks may start to consume a large amount of storage space. Thus, depending on the application, a snapshot 300 may be replaced by a new snapshot 300 from time to time to release old blocks, thereby making them available for new writes.
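- The write-to-unused-blocks policy and its interaction with a snapshot can be sketched in a few lines. The following is a hypothetical toy model (block maps kept in plain dictionaries), not the patent's implementation:

```python
# Toy model: 'storage' maps block numbers to contents; 'root' and 'snapshot'
# map file names to the lists of block numbers they identify.
storage = {1: b"old-A", 2: b"old-B"}
root = {"file": [1, 2]}
snapshot = {name: blocks[:] for name, blocks in root.items()}  # copy of the root inode

def cow_write(name, chunks):
    """Modify a file without overwriting used blocks: allocate fresh blocks
    for the new branch and repoint the root inode at them."""
    fresh = max(storage) + 1
    new_blocks = list(range(fresh, fresh + len(chunks)))
    for block, chunk in zip(new_blocks, chunks):
        storage[block] = chunk        # writes land only on unused blocks
    root[name] = new_blocks           # root inode now points to the new branch

cow_write("file", [b"new-A", b"new-B"])
assert root["file"] == [3, 4]         # root sees the modified data
assert snapshot["file"] == [1, 2]     # old branch still identified by the snapshot
```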
- A consistency point count may be atomically increased every time a consistency point is established. For example, a consistency point count may be increased by one every time a snapshot 300 is created to establish a PCPI. When a file system becomes corrupted (e.g., root inode 100 lost information after an unclean shutdown), the PCPI (which is a snapshot 300 in this example) may be used to recreate the file system. As can be appreciated, a consistency point count gives an indication of how up to date a file system is. The higher the consistency point count, the more up to date the file system. For example, a file system with a consistency point count of 7 is more up to date than a version of that file system with a consistency point count of 4.
- Turning now to FIG. 4, there is shown a schematic diagram of a computing environment in accordance with an embodiment of the present invention. In the example of FIG. 4, one or more computers 401 (i.e., 401A, 401B, . . . ) are coupled to a filer 400 over a network 402. A computer 401 may be any type of data processing device capable of sending write and read requests to filer 400. A computer 401 may be, without limitation, a personal computer, mini-computer, mainframe computer, portable computer, workstation, wireless terminal, personal digital assistant, cellular phone, etc.
- Network 402 may include various types of communication networks such as wide area networks, local area networks, the Internet, etc. Other nodes on network 402 such as gateways, routers, bridges, firewalls, etc. are not depicted in FIG. 4 for clarity of illustration.
- Filer 400 provides data storage services over network 402. In one embodiment, filer 400 processes data read and write requests from a computer 401. Of course, filer 400 does not necessarily have to be accessible over network 402. Depending on the application, a filer 400 may also be locally attached to an I/O channel of a computer 401, for example.
- As shown in FIG. 4, filer 400 may include a network interface 410, a storage operating system 450, and a storage system 460. Storage operating system 450 may further include a file system 452 and a storage device manager 454. Storage system 460 may include one or more storage devices. Components of filer 400 may be implemented in hardware, software, and/or firmware. For example, filer 400 may be a computer having one or more processors running computer-readable program code of storage operating system 450 in memory. Software components of filer 400 may be stored on computer-readable storage media (e.g., memories, CD-ROMs, tapes, disks, ZIP drives, . . . ) or transmitted over a wired or wireless link to a computer 401.
- Network interface 410 includes components for receiving storage-related service requests over network 402. Network interface 410 forwards a received service request to storage operating system 450, which processes the request by reading data from storage system 460 in the case of a read request, or by writing data to storage system 460 in the case of a write request. Data read from storage system 460 are transmitted over network 402 to the requesting computer 401. Similarly, data to be written to storage system 460 are received over network 402 from a computer 401.
- FIG. 5 shows a logical diagram further illustrating the relationship between a file system 452, a storage device manager 454, and a storage system 460 in accordance with an embodiment of the present invention. In one embodiment, file system 452 and storage device manager 454 are implemented in software while storage system 460 is implemented in hardware. As can be appreciated, however, file system 452, storage device manager 454, and storage system 460 may be implemented in hardware, software, and/or firmware. For example, data structures, tables, and maps may be employed to define the logical interconnection between file system 452 and storage device manager 454. As another example, storage device manager 454 and storage system 460 may communicate via a disk controller.
- File system 452 manages files that are stored in storage system 460. In one embodiment, file system 452 uses a file layout 150 (see FIG. 1) to organize files. That is, in one embodiment, file system 452 views files as a tree of blocks with a root inode as a base. File system 452 is capable of creating snapshots and consistency points in a manner previously described. In one embodiment, file system 452 organizes files in accordance with the Write-Anywhere-File Layout (WAFL) disclosed in the incorporated disclosures U.S. Pat. Nos. 6,289,356, 5,963,962, and 5,819,292. However, the present invention is not so limited and may also be used with other file systems and layouts.
- Storage device manager 454 manages the storage devices in storage system 460. Storage device manager 454 receives read and write commands from file system 452 and processes the commands by accordingly accessing storage system 460. Storage device manager 454 takes a block's logical address from file system 452 and translates that logical address to a physical address in one or more storage devices in storage system 460. In one embodiment, storage device manager 454 manages storage devices in accordance with RAID level 4, and accordingly stripes data blocks across storage devices and uses separate parity storage devices. It should be understood, however, that the present invention may also be used with data storage architectures other than RAID level 4. For example, embodiments of the present invention may be used with other RAID levels, DASDs, and non-arrayed storage devices.
- As shown in FIG. 5, storage device manager 454 is logically organized as a tree of objects that include a volume 501, a mirror 502, plexes 503 (i.e., 503A, 503B), and RAID groups 504-507. It is to be noted that implementing a mirror in a logical layer below file system 452 advantageously allows for a relatively transparent fail-over mechanism. For example, because file system 452 does not necessarily have to know of the existence of the mirror, a failing plex 503 does not have to be reported to file system 452. When a plex fails, file system 452 may still read and write data as before. This minimizes disruption to file system 452 and also simplifies its design.
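- The object tree lends itself to a nested-container model. The sketch below reproduces the FIG. 5 topology; it is a hypothetical illustration, with names such as `dev511` standing in for storage devices 511-522:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RaidGroup:
    devices: List[str]

@dataclass
class Plex:
    raid_groups: List[RaidGroup]
    online: bool = True

@dataclass
class Mirror:
    plexes: List[Plex]  # a mirrored pair; each plex is a full copy

@dataclass
class Volume:
    mirror: Mirror      # the file system sees only the volume, not the mirror

volume = Volume(Mirror([
    Plex([RaidGroup(["dev511", "dev512", "dev513"]),    # RAID group 504
          RaidGroup(["dev514", "dev515", "dev516"])]),  # RAID group 505
    Plex([RaidGroup(["dev517", "dev518", "dev519"]),    # RAID group 506
          RaidGroup(["dev520", "dev521", "dev522"])]),  # RAID group 507
]))
```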
- Still referring to FIG. 5, volume 501 represents a file system. Mirror 502 is one level below volume 501 and manages a pair of mirrored plexes 503. Plex 503A is a duplicate of plex 503B, and vice versa. Each plex 503 represents a full copy of the file system of volume 501. In one embodiment, consistency points are established from time to time for each plex 503. As will be described further below, this allows storage device manager 454 to determine which plex is more up to date in the event both plexes go down and one of them needs to be resynchronized with the other.
- Below each plex 503 are one or more RAID groups that have associated storage devices in storage system 460. In the example of FIG. 5, storage devices 511-513 belong to RAID group 504, storage devices 514-516 belong to RAID group 505, storage devices 517-519 belong to RAID group 506, and storage devices 520-522 belong to RAID group 507. RAID group 504 mirrors RAID group 506, while RAID group 505 mirrors RAID group 507. As can be appreciated, storage devices 511-522 do not have to be housed in the same cabinet or facility. For example, storage devices 511-516 may be located in a data center in one city, while storage devices 517-522 may be in another data center in another city. This advantageously allows data to remain available even if a facility housing one set of storage devices is hit by a disaster (e.g., fire, earthquake).
- In one embodiment, storage devices 511-522 include hard disk drives communicating with storage device manager 454 over a Fiber Channel Arbitrated Loop link and configured in accordance with RAID level 4. Implementing a mirror with RAID level 4 significantly improves data availability. Ordinarily, RAID level 4 does not include mirroring. Thus, although a storage system according to RAID level 4 may survive a single disk failure, it may not be able to survive double disk failures. Implementing a mirror with RAID level 4 improves data availability by providing backup copies in the event of a double disk failure in one of the RAID groups.
- Because plex 503A and plex 503B mirror each other, data may be accessed through either plex 503A or plex 503B. This allows data to be accessed from a surviving plex in the event one of the plexes goes down and becomes inaccessible. This is particularly advantageous in mission-critical applications where a high degree of data availability is required. To further improve data availability, plex 503A and plex 503B may also utilize separate pieces of hardware to communicate with storage system 460.
- FIG. 6 shows a state diagram of mirror 502 in accordance with an embodiment of the present invention. At any given moment, mirror 502 may be in the normal (state 601), degraded (state 602), or resync (state 603) state. Mirror 502 is in the normal state when both plexes are working and online. In the normal state, data may be read from either plex. Using FIG. 5 as an example, a block in storage device 511 may be read and passed through RAID group 504, plex 503A, mirror 502, volume 501, and then to file system 452. Alternatively, the same block may be read from storage device 517 and passed through RAID group 506, plex 503B, mirror 502, volume 501, and then to file system 452.
- In the normal state, data are written to both plexes in response to a write command from file system 452. The writing of data to both plexes may progress simultaneously. Data may also be written to each plex sequentially. For example, write data received from file system 452 may be forwarded by mirror 502 to an available plex. After the available plex confirms that the data were successfully written to storage system 460, mirror 502 may then forward the same data to the other plex. For example, the data may first be stored through plex 503A. Once plex 503A sends a confirmation that the data were successfully written to storage system 460, mirror 502 may then forward the same data to plex 503B. In response, plex 503B may initiate writing of the data to storage system 460.
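- A sequential mirrored write, in sketch form (hypothetical code; the `Plex` class and its in-memory `blocks` dictionary stand in for the RAID-level write path):

```python
class Plex:
    def __init__(self, name):
        self.name, self.online, self.blocks = name, True, {}

    def write(self, addr, data):
        self.blocks[addr] = data  # stands in for the write to storage system 460
        return True               # confirmation that the data landed

def mirrored_write(plexes, addr, data):
    """Forward the write to an available plex; only after it confirms,
    forward the same data to its partner (the sequential variant above)."""
    first, second = plexes
    if not first.online:          # degraded mirror: the survivor takes all writes
        first, second = second, first
    if first.write(addr, data) and second.online:
        second.write(addr, data)

mirrored_write([Plex("503A"), Plex("503B")], addr=42, data=b"block")
```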
mirror 502 may go to the degraded state when eitherplex 503A orplex 503B goes down. A plex 503 may go down for a variety of reasons including when its associated storage devices fail, are placed offline, etc. A down plex loses synchronization with its mirror as time passes. The longer the down time, the more the down plex becomes outdated. - In the degraded state, read and write commands are processed by the surviving plex. For example, when plex503B goes down and is survived by
plex 503A, plex 503A assumes responsibility for processing all read and write commands. As can be appreciated, having a mirrored pair of plexes allowsstorage device manager 454 to continue to operate even after a plex goes down. - From the degraded state,
mirror 502 goes to the resync state when the down plex (now a “previously down plex”) becomes operational again. In the resync state, the previously down plex is resynchronized with the surviving plex. In other words, during the resync state, information in the previously down plex is updated to match that in the surviving plex. A technique for resynchronizing a previously down plex is later described in connection with FIG. 7. In one embodiment, resynchronization of a previously down plex with a surviving plex is performed bystorage device manager 454. Performing resynchronization in a logical layer belowfile system 452 allows the resynchronization process to be relatively transparent to filesystem 452. This advantageously minimizes disruption to filesystem 452. - In the resync state, data are read from the surviving plex because the previously down plex may not yet have the most current data.
- As mentioned, in one embodiment, data writes may only be performed on unused blocks. Because an unused block by definition has not been allocated in either plex while one of the plexes is down, data may be written to both plexes even if the mirror is still in the resync state. In other words, data may be written to the previously down plex even while it is still being resynchronized. As can be appreciated, the capability to write to the previously down plex while it is being resynchronized advantageously reduces the complexity of the resynchronization process.
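Because the file system allocates writes only to unused blocks, the mirrored write path sketched earlier can remain in service while resynchronization is under way. The following check makes that invariant explicit; the allocation-map set is a modeling assumption, not the file system's actual data structure, and `mirrored_write` is reused from the earlier sketch.

```python
def write_during_resync(block_no, data, plex_a, plex_b, allocated_blocks):
    """Writes target only unused blocks, so they cannot conflict with
    blocks still being copied by the resync process; both plexes may
    therefore accept the write even in the resync state."""
    assert block_no not in allocated_blocks, "writes go only to unused blocks"
    mirrored_write(block_no, data, plex_a, plex_b)  # from the earlier sketch
    allocated_blocks.add(block_no)
```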
- From the resync state, mirror 502 returns to the normal state after the previously down plex is resynchronized with the surviving plex.
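Taken together, the transitions of FIG. 6 amount to a small state machine. The sketch below uses the state numbers from the figure; the class and method names are illustrative assumptions, not the actual interfaces of storage device manager 454.

```python
from enum import Enum

class MirrorState(Enum):
    NORMAL = 601    # both plexes working and online
    DEGRADED = 602  # one plex down; surviving plex services all commands
    RESYNC = 603    # previously down plex being brought up to date

class Mirror:
    def __init__(self):
        self.state = MirrorState.NORMAL

    def plex_went_down(self):        # normal -> degraded
        if self.state is MirrorState.NORMAL:
            self.state = MirrorState.DEGRADED

    def down_plex_back_up(self):     # degraded -> resync
        if self.state is MirrorState.DEGRADED:
            self.state = MirrorState.RESYNC

    def resync_complete(self):       # resync -> normal
        if self.state is MirrorState.RESYNC:
            self.state = MirrorState.NORMAL

    def may_read_from_either_plex(self):
        # In the degraded and resync states, reads are serviced
        # by the surviving plex only.
        return self.state is MirrorState.NORMAL
```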
- FIG. 7 shows a flow diagram of a method for resynchronizing a mirrored storage device in accordance with an embodiment of the present invention. In action 702, a snapshot arbitrarily referred to as a "base snapshot" is created by file system 452 at the request of storage device manager 454. The base snapshot, like a snapshot 300 (see FIG. 3), includes information about files in a file system.
- In the loop from action 704 back to action 702, at the request of storage device manager 454, file system 452 periodically creates a new base snapshot (and deletes the old one) while both plexes remain accessible. When one of the plexes goes down and becomes inaccessible, mirror 502 goes to the degraded state as indicated in action 706. In the loop from action 708 back to action 706, mirror 502 remains in the degraded state while one of the plexes remains down.
- From action 708 to action 710, mirror 502 goes to the resync state when the down plex becomes operational. In action 712, another snapshot arbitrarily referred to as a "resync snapshot" is created by file system 452 at the request of storage device manager 454. The resync snapshot is just like a snapshot 300 except that it is created while mirror 502 is in the resync state. Because file system 452, in one embodiment, only sees the most current plex, the resync snapshot is a copy of a root inode in the surviving plex.
- In action 714, the difference between the base snapshot and the resync snapshot is determined. In one embodiment, file system 452 determines the difference by:
- (a) reading the base snapshot and the resync snapshot;
- (b) identifying blocks composing the base snapshot and blocks composing the resync snapshot; and
- (c) finding blocks that are in the resync snapshot but not in the base snapshot, as sketched after this note.
Note that the base snapshot is created at an earlier time when both plexes are up (normal state), whereas the resync snapshot is created at a later time when a plex that has gone down goes back up (resync state). Thus, the difference between the base and resync snapshots represents data that were written to the surviving plex while mirror 502 was in the degraded state.
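A minimal sketch of the difference computation of action 714, assuming a snapshot can be reduced to the set of block numbers it identifies (an assumption made for illustration; the file system's actual snapshot is an inode-based structure):

```python
def snapshot_block_difference(base_blocks, resync_blocks):
    """Blocks identified by the resync snapshot but not by the base
    snapshot, i.e., data written to the surviving plex while the
    mirror was in the degraded state."""
    return set(resync_blocks) - set(base_blocks)
```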
- FIGS. 8A and 8B further illustrate action 714. FIGS. 8A and 8B represent storage locations of a storage device, with each cell representing one or more blocks. In FIG. 8A, cell A1 holds a base snapshot 801. Base snapshot 801 identifies blocks in cells A2, B3, and C1. In FIG. 8B, cell C4 holds a resync snapshot 802 created while mirror 502 is in the resync state. Like base snapshot 801, resync snapshot 802 identifies blocks in cells A2, B3, and C1. Resync snapshot 802 additionally identifies blocks in cell D2. Thus, the blocks in cell D2 compose the difference between base snapshot 801 and resync snapshot 802.
- Continuing in action 716 of FIG. 7, the difference between the base and resync snapshots is copied to the formerly down plex. In one embodiment, this is performed by storage device manager 454 by copying to the formerly down plex the blocks that are in the resync snapshot but not in the base snapshot. Using FIG. 8B as an example, the blocks in cell D2 are copied to the formerly down plex. Advantageously, copying only the difference speeds up the resynchronization process and thus shortens the period when only one plex is operational. Compared with prior techniques in which all blocks of the surviving plex are copied to the formerly down plex, it also consumes less processing time and I/O bandwidth.
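Action 716 can be sketched by reusing the hypothetical `Plex` and `snapshot_block_difference` from the earlier examples: only the blocks in the computed difference are read from the surviving plex and written to the formerly down plex.

```python
def copy_difference(diff_blocks, surviving_plex, formerly_down_plex):
    """Copy only the changed blocks (action 716), not every block of
    the surviving plex."""
    for block_no in sorted(diff_blocks):
        data = surviving_plex.read(block_no)
        formerly_down_plex.write(block_no, data)

# With the FIG. 8 example, only the blocks in cell D2 travel between plexes:
base = {"A2", "B3", "C1"}            # blocks identified by base snapshot 801
resync = {"A2", "B3", "C1", "D2"}    # blocks identified by resync snapshot 802
assert snapshot_block_difference(base, resync) == {"D2"}
```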
- In action 718, the resync snapshot is made the base snapshot. In action 719, the previous base snapshot is deleted. Thereafter, mirror 502 goes to the normal state as indicated in action 720. The cycle then continues with file system 452 periodically creating base snapshots while both plexes remain accessible.
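Strung together, actions 712 through 720 form one resynchronization pass. The sketch below combines the helpers above; `create_snapshot`, `delete_snapshot`, and the `base_snapshot` attribute are assumed file-system interfaces, not the actual API of file system 452.

```python
def resynchronize(mirror, file_system, surviving_plex, formerly_down_plex):
    """One pass of the FIG. 7 resync cycle, sketched with the helpers above."""
    resync_snapshot = file_system.create_snapshot()            # action 712
    diff = snapshot_block_difference(file_system.base_snapshot,
                                     resync_snapshot)          # action 714
    copy_difference(diff, surviving_plex, formerly_down_plex)  # action 716
    old_base = file_system.base_snapshot
    file_system.base_snapshot = resync_snapshot                # action 718
    file_system.delete_snapshot(old_base)                      # action 719
    mirror.resync_complete()                                   # action 720
```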
- It is to be noted that the flow diagram of FIG. 7 may also be used in the event both plexes go down. In that case, the plex with the higher consistency point count is designated the surviving plex while the other plex is designated the down plex. Thereafter, the down plex is resynchronized with the surviving plex as in FIG. 7. For example, if plexes 503A and 503B both go down and plex 503A has a higher consistency point count than plex 503B, plex 503A is designated the surviving plex while plex 503B is designated the down plex. When both plexes become operational again, plex 503B may then be resynchronized with plex 503A as in the actions of FIG. 7.
- Improved techniques for resynchronizing mirrored storage devices have been disclosed. While specific embodiments have been provided, it is to be understood that these embodiments are for illustration purposes and are not limiting. Many additional embodiments will be apparent to persons of ordinary skill in the art reading this disclosure. Thus, the present invention is limited only by the following claims.
Claims (36)
1. A method of resynchronizing mirrored storage devices, the method comprising:
mirroring a first storage apparatus with a second storage apparatus;
determining a difference between data stored in the second storage apparatus and data stored in the first storage apparatus; and
in the event the first storage apparatus loses synchronization with the second storage apparatus, resynchronizing the first storage apparatus by copying the difference to the first storage apparatus.
2. The method of claim 1 further comprising:
servicing data write requests by writing data to the first storage apparatus while resynchronizing the first storage apparatus.
3. The method of claim 1 further comprising:
servicing data read requests by reading data from the second storage apparatus while resynchronizing the first storage apparatus.
4. The method of claim 1 wherein determining the difference between data stored in the second storage apparatus and data stored in the first storage apparatus further comprises:
reading a first storage usage information and a second storage usage information;
identifying data in the first storage usage information and data in the second storage usage information; and
finding blocks that correspond to data that are in the second storage usage information but not in the first storage usage information.
5. The method of claim 1 wherein the first storage apparatus and the second storage apparatus are configured in accordance with RAID level 4.
6. A system comprising:
a first storage device and a second storage device forming a mirrored pair of storage devices;
a storage device manager configured to manage the first storage device and the second storage device; and
wherein the storage device manager is configured to resynchronize the second storage device with data blocks allocated in the first storage device but not in the second storage device.
7. The system of claim 6 further comprising:
a file system at a logical layer above the storage device manager and configured to send storage-related commands to the storage device manager.
8. The system of claim 7 further comprising:
a network interface in communication with the file system, the network interface being configured to receive storage-related requests over a computer network.
9. The system of claim 6 wherein the first storage device and the second storage device are configured in accordance with RAID level 4.
10. The system of claim 6 wherein the storage device manager is configured to service storage-related requests while resynchronizing the second storage device.
11. A method of resynchronizing mirrored storage devices, the method comprising:
creating a first storage usage information at a first moment and a second storage usage information at a second moment;
determining a difference between the first storage usage information and the second storage usage information; and
based on the difference, resynchronizing a first storage device that forms a mirror with a second storage device.
12. The method of claim 11 further comprising:
servicing data write requests by writing data to the first storage device while resynchronizing the first storage device.
13. The method of claim 11 further comprising:
servicing data read requests by reading data from the second storage device while resynchronizing the first storage device.
14. The method of claim 11 wherein determining the difference between the first storage usage information and the second storage usage information further comprises:
reading the first storage usage information and the second storage usage information;
identifying blocks in the first storage usage information and blocks in the second storage usage information; and
finding blocks that are in the second storage usage information but not in the first storage usage information.
15. The method of claim 11 wherein the mirror is implemented in a logical layer below a file system.
16. The method of claim 11 wherein the first storage device and the second storage device are configured in accordance with RAID level 4.
17. The method of claim 11 further comprising:
going from a normal state to a degraded state when the first storage device becomes inaccessible;
going from the degraded state to a resync state when resynchronizing the first storage device; and
going from the resync state to the normal state after resynchronizing the first storage device.
18. The method of claim 17 further comprising:
writing new data to the first storage device while in the resync state.
19. The method of claim 17 further comprising:
reading data from the second storage device while in the resync state.
20. The method of claim 17 wherein the first storage usage information is created while in the normal state and the second storage usage information is created while in the resync state.
21. A computer-readable storage medium comprising:
computer-readable program code for creating a first storage usage information and a second storage usage information;
computer-readable program code for determining a difference between the first storage usage information and the second storage usage information; and
computer-readable program code for resynchronizing a previously down storage device with another storage device based on the difference.
22. A method of resynchronizing a storage device, the method comprising:
creating a first storage usage information when a first storage device and a second storage device that form a mirror are both accessible;
creating a second storage usage information after the first storage device goes down and comes back up;
determining a difference between the first storage usage information and the second storage usage information;
resynchronizing the first storage device with the second storage device based on the difference; and
servicing data write requests by writing data to the first storage device while resynchronizing the first storage device.
23. The method of claim 22 further comprising:
servicing data read requests by reading data from the second storage device while resynchronizing the first storage device.
24. The method of claim 22 wherein the first storage device and the second storage device are configured in accordance with RAID level 4.
25. A method of resynchronizing mirrored storage devices, the method comprising:
keeping a mirror in a normal state while a first storage device and a second storage device of the mirror are both accessible;
transitioning the mirror from the normal state to a degraded state when the second storage device becomes inaccessible;
transitioning the mirror from the degraded state to a resync state when the second storage device becomes accessible;
determining a difference between data stored in the first storage device and data stored in the second storage device; and
transitioning the mirror from the resync state to the normal state after the difference is copied to the second storage device.
26. The method of claim 25 wherein determining the difference between data stored in the first storage device and data stored in the second storage device comprises:
identifying data blocks in the first storage device that are not in the second storage device.
27. The method of claim 25 wherein determining the difference between data stored in the first storage device and data stored in the second storage device comprises:
identifying data blocks stored in the first storage device and the second storage device while the mirror is in the normal state to create a first storage usage information;
identifying data blocks stored in the first storage device while the mirror is in the resync state to create a second storage usage information; and
determining a difference between the first storage usage information and the second storage usage information.
28. The method of claim 25 further comprising:
in response to a write command, writing data to the second storage device while the mirror is in the resync state.
29. A system for providing data storage services over a computer network, the system comprising:
a file system;
a storage device manager configured to service data access requests from the file system, the storage device manager configured to form a mirror with a first storage device and a second storage device; and
wherein the storage device manager is configured to resynchronize the second storage device with data determined to be in the first storage device but not in the second storage device.
30. The system of claim 29 wherein the first storage device and the second storage device are configured in accordance with RAID level 4.
31. The system of claim 29 wherein the first storage device and the second storage device are not housed in the same facility.
32. A method of resynchronizing mirrored storage devices, the method comprising:
mirroring a first group of storage devices with a second group of storage devices;
determining a difference between data stored in the second group of storage devices and data stored in the first group of storage devices; and
in the event the first group of storage devices loses synchronization with the second group of storage devices, resynchronizing the first group of storage devices by copying the difference to the first group of storage devices.
33. The method of claim 32 further comprising:
servicing data write requests by writing data to the first group of storage devices while resynchronizing the first group of storage devices.
34. The method of claim 32 further comprising:
servicing data read requests by reading data from the second group of storage devices while resynchronizing the first group of storage devices.
35. The method of claim 32 wherein determining the difference between data stored in the second group of storage devices and data stored in the first group of storage devices further comprises:
reading a first storage usage information and a second storage usage information;
identifying data in the first storage usage information and data in the second storage usage information; and
finding blocks that correspond to data that are in the second storage usage information but not in the first storage usage information.
36. The method of claim 32 wherein the first group of storage devices and the second group of storage devices are configured in accordance with RAID level 4.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/154,414 US20020194529A1 (en) | 2000-10-04 | 2002-05-23 | Resynchronization of mirrored storage devices |
US10/225,453 US7143249B2 (en) | 2000-10-04 | 2002-08-19 | Resynchronization of mirrored storage devices |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/684,487 US6654912B1 (en) | 2000-10-04 | 2000-10-04 | Recovery of file system data in file servers mirrored file system volumes |
US10/154,414 US20020194529A1 (en) | 2000-10-04 | 2002-05-23 | Resynchronization of mirrored storage devices |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/684,487 Continuation-In-Part US6654912B1 (en) | 2000-10-04 | 2000-10-04 | Recovery of file system data in file servers mirrored file system volumes |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/225,453 Continuation-In-Part US7143249B2 (en) | 2000-10-04 | 2002-08-19 | Resynchronization of mirrored storage devices |
Publications (1)
Publication Number | Publication Date |
---|---|
US20020194529A1 true US20020194529A1 (en) | 2002-12-19 |
Family
ID=24748237
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/684,487 Expired - Lifetime US6654912B1 (en) | 2000-10-04 | 2000-10-04 | Recovery of file system data in file servers mirrored file system volumes |
US10/154,414 Abandoned US20020194529A1 (en) | 2000-10-04 | 2002-05-23 | Resynchronization of mirrored storage devices |
US10/719,699 Expired - Fee Related US7096379B2 (en) | 2000-10-04 | 2003-11-21 | Recovery of file system data in file servers mirrored file system volumes |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/684,487 Expired - Lifetime US6654912B1 (en) | 2000-10-04 | 2000-10-04 | Recovery of file system data in file servers mirrored file system volumes |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/719,699 Expired - Fee Related US7096379B2 (en) | 2000-10-04 | 2003-11-21 | Recovery of file system data in file servers mirrored file system volumes |
Country Status (4)
Country | Link |
---|---|
US (3) | US6654912B1 (en) |
EP (1) | EP1325415B1 (en) |
DE (1) | DE60112462T2 (en) |
WO (1) | WO2002029572A2 (en) |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060136771A1 (en) * | 2004-12-06 | 2006-06-22 | Hitachi, Ltd. | Storage system and snapshot data preparation method in storage system |
US20060168624A1 (en) * | 2004-11-22 | 2006-07-27 | John Carney | Method and system for delivering enhanced TV content |
US7162662B1 (en) * | 2003-12-23 | 2007-01-09 | Network Appliance, Inc. | System and method for fault-tolerant synchronization of replica updates for fixed persistent consistency point image consumption |
US7200726B1 (en) | 2003-10-24 | 2007-04-03 | Network Appliance, Inc. | Method and apparatus for reducing network traffic during mass storage synchronization phase of synchronous data mirroring |
US7203796B1 (en) | 2003-10-24 | 2007-04-10 | Network Appliance, Inc. | Method and apparatus for synchronous data mirroring |
US20070180305A1 (en) * | 2003-01-31 | 2007-08-02 | Hitachi, Ltd. | Methods for Controlling Storage Devices Controlling Apparatuses |
US20070255758A1 (en) * | 2006-04-28 | 2007-11-01 | Ling Zheng | System and method for sampling based elimination of duplicate data |
US20080005141A1 (en) * | 2006-06-29 | 2008-01-03 | Ling Zheng | System and method for retrieving and using block fingerprints for data deduplication |
US20080005201A1 (en) * | 2006-06-29 | 2008-01-03 | Daniel Ting | System and method for managing data deduplication of storage systems utilizing persistent consistency point images |
US7325109B1 (en) * | 2003-10-24 | 2008-01-29 | Network Appliance, Inc. | Method and apparatus to mirror data at two separate sites without comparing the data at the two sites |
US20080184001A1 (en) * | 2007-01-30 | 2008-07-31 | Network Appliance, Inc. | Method and an apparatus to store data patterns |
US20080301134A1 (en) * | 2007-05-31 | 2008-12-04 | Miller Steven C | System and method for accelerating anchor point detection |
US20080313496A1 (en) * | 2007-06-12 | 2008-12-18 | Microsoft Corporation | Gracefully degradable versioned storage systems |
US7596672B1 (en) | 2003-10-24 | 2009-09-29 | Network Appliance, Inc. | Synchronous mirroring including writing image updates to a file |
US20090299492A1 (en) * | 2008-05-28 | 2009-12-03 | Fujitsu Limited | Control of connecting apparatuses in information processing system |
US20100049726A1 (en) * | 2008-08-19 | 2010-02-25 | Netapp, Inc. | System and method for compression of partially ordered data sets |
US7707165B1 (en) * | 2004-12-09 | 2010-04-27 | Netapp, Inc. | System and method for managing data versions in a file system |
US7747584B1 (en) | 2006-08-22 | 2010-06-29 | Netapp, Inc. | System and method for enabling de-duplication in a storage system architecture |
US8001307B1 (en) | 2007-04-27 | 2011-08-16 | Network Appliance, Inc. | Apparatus and a method to eliminate deadlock in a bi-directionally mirrored data storage system |
US8793226B1 (en) | 2007-08-28 | 2014-07-29 | Netapp, Inc. | System and method for estimating duplicate data |
US20150039572A1 (en) * | 2012-03-01 | 2015-02-05 | Netapp, Inc. | System and method for removing overlapping ranges from a flat sorted data structure |
US10142121B2 (en) | 2011-12-07 | 2018-11-27 | Comcast Cable Communications, Llc | Providing synchronous content and supplemental experiences |
US20240143215A1 (en) * | 2022-10-28 | 2024-05-02 | Netapp, Inc. | Fast resynchronization of a mirrored aggregate using disk-level cloning |
Families Citing this family (112)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6138126A (en) | 1995-05-31 | 2000-10-24 | Network Appliance, Inc. | Method for allocating files in a file system integrated with a raid disk sub-system |
US6516351B2 (en) | 1997-12-05 | 2003-02-04 | Network Appliance, Inc. | Enforcing uniform file-locking for diverse file-locking protocols |
US6119244A (en) | 1998-08-25 | 2000-09-12 | Network Appliance, Inc. | Coordinating persistent status information with multiple file servers |
US8935307B1 (en) | 2000-09-12 | 2015-01-13 | Hewlett-Packard Development Company, L.P. | Independent data access in a segmented file system |
US7406484B1 (en) | 2000-09-12 | 2008-07-29 | Tbrix, Inc. | Storage allocation in a distributed segmented file system |
US20040236798A1 (en) * | 2001-09-11 | 2004-11-25 | Sudhir Srinivasan | Migration of control in a distributed segmented file system |
US6782389B1 (en) * | 2000-09-12 | 2004-08-24 | Ibrix, Inc. | Distributing files across multiple, permissibly heterogeneous, storage devices |
US7836017B1 (en) | 2000-09-12 | 2010-11-16 | Hewlett-Packard Development Company, L.P. | File replication in a distributed segmented file system |
US20060288080A1 (en) * | 2000-09-12 | 2006-12-21 | Ibrix, Inc. | Balanced computer architecture |
US6654912B1 (en) * | 2000-10-04 | 2003-11-25 | Network Appliance, Inc. | Recovery of file system data in file servers mirrored file system volumes |
US7143249B2 (en) * | 2000-10-04 | 2006-11-28 | Network Appliance, Inc. | Resynchronization of mirrored storage devices |
US6728735B1 (en) * | 2001-03-12 | 2004-04-27 | Network Appliance, Inc. | Restartable dump that produces a consistent filesystem on tapes |
US7617292B2 (en) | 2001-06-05 | 2009-11-10 | Silicon Graphics International | Multi-class heterogeneous clients in a clustered filesystem |
US8010558B2 (en) | 2001-06-05 | 2011-08-30 | Silicon Graphics International | Relocation of metadata server with outstanding DMAPI requests |
US6950833B2 (en) * | 2001-06-05 | 2005-09-27 | Silicon Graphics, Inc. | Clustered filesystem |
US20040139125A1 (en) | 2001-06-05 | 2004-07-15 | Roger Strassburg | Snapshot copy of data volume during data access |
US7765329B2 (en) * | 2002-06-05 | 2010-07-27 | Silicon Graphics International | Messaging between heterogeneous clients of a storage area network |
US7640582B2 (en) | 2003-04-16 | 2009-12-29 | Silicon Graphics International | Clustered filesystem for mix of trusted and untrusted nodes |
EP1508090A2 (en) * | 2001-09-03 | 2005-02-23 | Koninklijke Philips Electronics N.V. | Device for use in a network environment |
US6948089B2 (en) * | 2002-01-10 | 2005-09-20 | Hitachi, Ltd. | Apparatus and method for multiple generation remote backup and fast restore |
US7216135B2 (en) * | 2002-02-15 | 2007-05-08 | International Business Machines Corporation | File system for providing access to a snapshot dataset where disk address in the inode is equal to a ditto address for indicating that the disk address is invalid disk address |
US7043503B2 (en) * | 2002-02-15 | 2006-05-09 | International Business Machines Corporation | Ditto address indicating true disk address for actual data blocks stored in one of an inode of the file system and subsequent snapshot |
US6857001B2 (en) | 2002-06-07 | 2005-02-15 | Network Appliance, Inc. | Multiple concurrent active file systems |
US7024586B2 (en) | 2002-06-24 | 2006-04-04 | Network Appliance, Inc. | Using file system information in raid data reconstruction and migration |
US7454529B2 (en) | 2002-08-02 | 2008-11-18 | Netapp, Inc. | Protectable data storage system and a method of protecting and/or managing a data storage system |
US7117386B2 (en) * | 2002-08-21 | 2006-10-03 | Emc Corporation | SAR restart and going home procedures |
US7437387B2 (en) | 2002-08-30 | 2008-10-14 | Netapp, Inc. | Method and system for providing a file system overlay |
US7882081B2 (en) | 2002-08-30 | 2011-02-01 | Netapp, Inc. | Optimized disk repository for the storage and retrieval of mostly sequential data |
US6938184B2 (en) * | 2002-10-17 | 2005-08-30 | Spinnaker Networks, Inc. | Method and system for providing persistent storage of user data |
US7567993B2 (en) | 2002-12-09 | 2009-07-28 | Netapp, Inc. | Method and system for creating and using removable disk based copies of backup data |
US8024172B2 (en) | 2002-12-09 | 2011-09-20 | Netapp, Inc. | Method and system for emulating tape libraries |
US7769722B1 (en) | 2006-12-08 | 2010-08-03 | Emc Corporation | Replication and restoration of multiple data storage object types in a data network |
US20040181707A1 (en) | 2003-03-11 | 2004-09-16 | Hitachi, Ltd. | Method and apparatus for seamless management for disaster recovery |
US6973369B2 (en) | 2003-03-12 | 2005-12-06 | Alacritus, Inc. | System and method for virtual vaulting |
US7437492B2 (en) | 2003-05-14 | 2008-10-14 | Netapp, Inc | Method and system for data compression and compression estimation in a virtual tape library environment |
US20040267823A1 (en) * | 2003-06-24 | 2004-12-30 | Microsoft Corporation | Reconcilable and undoable file system |
US7275177B2 (en) * | 2003-06-25 | 2007-09-25 | Emc Corporation | Data recovery with internet protocol replication with or without full resync |
US7028156B1 (en) | 2003-07-01 | 2006-04-11 | Veritas Operating Corporation | Use of read data tracking and caching to recover from data corruption |
US7278049B2 (en) * | 2003-09-29 | 2007-10-02 | International Business Machines Corporation | Method, system, and program for recovery from a failure in an asynchronous data copying system |
US7188272B2 (en) * | 2003-09-29 | 2007-03-06 | International Business Machines Corporation | Method, system and article of manufacture for recovery from a failure in a cascading PPRC system |
US7315965B2 (en) | 2004-02-04 | 2008-01-01 | Network Appliance, Inc. | Method and system for storing data using a continuous data protection system |
US7720817B2 (en) | 2004-02-04 | 2010-05-18 | Netapp, Inc. | Method and system for browsing objects on a protected volume in a continuous data protection system |
US7490103B2 (en) | 2004-02-04 | 2009-02-10 | Netapp, Inc. | Method and system for backing up data |
US7325159B2 (en) | 2004-02-04 | 2008-01-29 | Network Appliance, Inc. | Method and system for data recovery in a continuous data protection system |
US7559088B2 (en) | 2004-02-04 | 2009-07-07 | Netapp, Inc. | Method and apparatus for deleting data upon expiration |
US7904679B2 (en) | 2004-02-04 | 2011-03-08 | Netapp, Inc. | Method and apparatus for managing backup data |
US7406488B2 (en) | 2004-02-04 | 2008-07-29 | Netapp | Method and system for maintaining data in a continuous data protection system |
US7783606B2 (en) | 2004-02-04 | 2010-08-24 | Netapp, Inc. | Method and system for remote data recovery |
US7426617B2 (en) * | 2004-02-04 | 2008-09-16 | Network Appliance, Inc. | Method and system for synchronizing volumes in a continuous data protection system |
JP2006011581A (en) * | 2004-06-23 | 2006-01-12 | Hitachi Ltd | Storage system and its control method |
US8028135B1 (en) | 2004-09-01 | 2011-09-27 | Netapp, Inc. | Method and apparatus for maintaining compliant storage |
US7680839B1 (en) * | 2004-09-30 | 2010-03-16 | Symantec Operating Corporation | System and method for resynchronizing mirrored volumes |
US7774610B2 (en) | 2004-12-14 | 2010-08-10 | Netapp, Inc. | Method and apparatus for verifiably migrating WORM data |
US7581118B2 (en) | 2004-12-14 | 2009-08-25 | Netapp, Inc. | Disk sanitization using encryption |
US7558839B1 (en) | 2004-12-14 | 2009-07-07 | Netapp, Inc. | Read-after-write verification for improved write-once-read-many data storage |
US7526620B1 (en) | 2004-12-14 | 2009-04-28 | Netapp, Inc. | Disk sanitization in an active file system |
US7437601B1 (en) * | 2005-03-08 | 2008-10-14 | Network Appliance, Inc. | Method and system for re-synchronizing an asynchronous mirror without data loss |
US7401198B2 (en) | 2005-10-06 | 2008-07-15 | Netapp | Maximizing storage system throughput by measuring system performance metrics |
US7765187B2 (en) * | 2005-11-29 | 2010-07-27 | Emc Corporation | Replication of a consistency group of data storage objects from servers in a data network |
US20070168721A1 (en) * | 2005-12-22 | 2007-07-19 | Nokia Corporation | Method, network entity, system, electronic device and computer program product for backup and restore provisioning |
US7752401B2 (en) | 2006-01-25 | 2010-07-06 | Netapp, Inc. | Method and apparatus to automatically commit files to WORM status |
US7788456B1 (en) | 2006-02-16 | 2010-08-31 | Network Appliance, Inc. | Use of data images to allow release of unneeded data storage |
US7650533B1 (en) | 2006-04-20 | 2010-01-19 | Netapp, Inc. | Method and system for performing a restoration in a continuous data protection system |
US7730351B2 (en) * | 2006-05-15 | 2010-06-01 | Oracle America, Inc. | Per file dirty region logging |
US20080077635A1 (en) * | 2006-09-22 | 2008-03-27 | Digital Bazaar, Inc. | Highly Available Clustered Storage Network |
US8706833B1 (en) | 2006-12-08 | 2014-04-22 | Emc Corporation | Data storage server having common replication architecture for multiple storage object types |
US7793148B2 (en) * | 2007-01-12 | 2010-09-07 | International Business Machines Corporation | Using virtual copies in a failover and failback environment |
US7644300B1 (en) * | 2007-04-20 | 2010-01-05 | 3Par, Inc. | Fast resynchronization of data from a remote copy |
US8566362B2 (en) | 2009-01-23 | 2013-10-22 | Nasuni Corporation | Method and system for versioned file system using structured data representations |
DE102009029334A1 (en) * | 2009-09-10 | 2011-03-24 | Henkel Ag & Co. Kgaa | Two-stage process for the corrosion-protective treatment of metal surfaces |
US8190574B2 (en) | 2010-03-02 | 2012-05-29 | Storagecraft Technology Corporation | Systems, methods, and computer-readable media for backup and restoration of computer information |
US9244015B2 (en) | 2010-04-20 | 2016-01-26 | Hewlett-Packard Development Company, L.P. | Self-arranging, luminescence-enhancement device for surface-enhanced luminescence |
US8799231B2 (en) | 2010-08-30 | 2014-08-05 | Nasuni Corporation | Versioned file system with fast restore |
US8661063B2 (en) * | 2010-10-12 | 2014-02-25 | Nasuni Corporation | Versioned file system with sharing |
US9279767B2 (en) | 2010-10-20 | 2016-03-08 | Hewlett-Packard Development Company, L.P. | Chemical-analysis device integrated with metallic-nanofinger device for chemical sensing |
WO2012054024A1 (en) | 2010-10-20 | 2012-04-26 | Hewlett-Packard Development Company, L.P. | Metallic-nanofinger device for chemical sensing |
US8402004B2 (en) | 2010-11-16 | 2013-03-19 | Actifio, Inc. | System and method for creating deduplicated copies of data by tracking temporal relationships among copies and by ingesting difference data |
US8417674B2 (en) | 2010-11-16 | 2013-04-09 | Actifio, Inc. | System and method for creating deduplicated copies of data by sending difference data between near-neighbor temporal states |
US9858155B2 (en) | 2010-11-16 | 2018-01-02 | Actifio, Inc. | System and method for managing data with service level agreements that may specify non-uniform copying of data |
US8904126B2 (en) | 2010-11-16 | 2014-12-02 | Actifio, Inc. | System and method for performing a plurality of prescribed data management functions in a manner that reduces redundant access operations to primary storage |
US8843489B2 (en) | 2010-11-16 | 2014-09-23 | Actifio, Inc. | System and method for managing deduplicated copies of data using temporal relationships among copies |
US8601220B1 (en) | 2011-04-29 | 2013-12-03 | Netapp, Inc. | Transparent data migration in a storage system environment |
US8589724B2 (en) | 2011-06-30 | 2013-11-19 | Seagate Technology Llc | Rapid rebuild of a data set |
US8983915B2 (en) | 2011-08-01 | 2015-03-17 | Actifio, Inc. | Successive data fingerprinting for copy accuracy assurance |
GB2495079A (en) * | 2011-09-23 | 2013-04-03 | Hybrid Logic Ltd | Live migration of applications and file systems in a distributed system |
EP2862051A4 (en) | 2012-06-18 | 2016-08-10 | Actifio Inc | Enhanced data management virtualization system |
US8892941B2 (en) | 2012-06-27 | 2014-11-18 | International Business Machines Corporation | Recovering a volume table and data sets from a corrupted volume |
KR102050723B1 (en) | 2012-09-28 | 2019-12-02 | 삼성전자 주식회사 | Computing system and data management method thereof |
US9646067B2 (en) | 2013-05-14 | 2017-05-09 | Actifio, Inc. | Garbage collection predictions |
WO2015074033A1 (en) | 2013-11-18 | 2015-05-21 | Madhav Mutalik | Copy data techniques |
US9720778B2 (en) | 2014-02-14 | 2017-08-01 | Actifio, Inc. | Local area network free data movement |
US9792187B2 (en) | 2014-05-06 | 2017-10-17 | Actifio, Inc. | Facilitating test failover using a thin provisioned virtual machine created from a snapshot |
US9772916B2 (en) | 2014-06-17 | 2017-09-26 | Actifio, Inc. | Resiliency director |
US10089185B2 (en) | 2014-09-16 | 2018-10-02 | Actifio, Inc. | Multi-threaded smart copy |
US10379963B2 (en) | 2014-09-16 | 2019-08-13 | Actifio, Inc. | Methods and apparatus for managing a large-scale environment of copy data management appliances |
US10146788B1 (en) * | 2014-10-10 | 2018-12-04 | Google Llc | Combined mirroring and caching network file system |
WO2016085541A1 (en) | 2014-11-28 | 2016-06-02 | Nasuni Corporation | Versioned file system with global lock |
WO2016094819A1 (en) | 2014-12-12 | 2016-06-16 | Actifio, Inc. | Searching and indexing of backup data sets |
US10055300B2 (en) | 2015-01-12 | 2018-08-21 | Actifio, Inc. | Disk group based backup |
US9842029B2 (en) * | 2015-03-25 | 2017-12-12 | Kabushiki Kaisha Toshiba | Electronic device, method and storage medium |
US10282201B2 (en) | 2015-04-30 | 2019-05-07 | Actifo, Inc. | Data provisioning techniques |
US9734028B2 (en) * | 2015-06-29 | 2017-08-15 | International Business Machines Corporation | Reverse resynchronization by a secondary data source when a data destination has more recent data |
US10691659B2 (en) | 2015-07-01 | 2020-06-23 | Actifio, Inc. | Integrating copy data tokens with source code repositories |
US10613938B2 (en) | 2015-07-01 | 2020-04-07 | Actifio, Inc. | Data virtualization using copy data tokens |
US10684994B2 (en) * | 2015-09-25 | 2020-06-16 | Netapp Inc. | Data synchronization |
US10445298B2 (en) | 2016-05-18 | 2019-10-15 | Actifio, Inc. | Vault to object store |
US10476955B2 (en) | 2016-06-02 | 2019-11-12 | Actifio, Inc. | Streaming and sequential data replication |
US10855554B2 (en) | 2017-04-28 | 2020-12-01 | Actifio, Inc. | Systems and methods for determining service level agreement compliance |
US11403178B2 (en) | 2017-09-29 | 2022-08-02 | Google Llc | Incremental vault to object store |
US11176001B2 (en) | 2018-06-08 | 2021-11-16 | Google Llc | Automated backup and restore of a disk group |
CN112307013A (en) * | 2019-07-30 | 2021-02-02 | 伊姆西Ip控股有限责任公司 | Method, apparatus and computer program product for managing application systems |
CN111291005B (en) * | 2020-01-19 | 2023-05-02 | Oppo(重庆)智能科技有限公司 | File viewing method, device, terminal equipment, system and storage medium |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5479653A (en) * | 1994-07-14 | 1995-12-26 | Dellusa, L.P. | Disk array apparatus and method which supports compound raid configurations and spareless hot sparing |
US5519844A (en) * | 1990-11-09 | 1996-05-21 | Emc Corporation | Logical partitioning of a redundant array storage system |
US5819292A (en) * | 1993-06-03 | 1998-10-06 | Network Appliance, Inc. | Method for maintaining consistent states of a file system and for creating user-accessible read-only copies of a file system |
US5960169A (en) * | 1997-02-27 | 1999-09-28 | International Business Machines Corporation | Transformational raid for hierarchical storage management system |
US6023780A (en) * | 1996-05-13 | 2000-02-08 | Fujitsu Limited | Disc array apparatus checking and restructuring data read from attached disc drives |
US6085298A (en) * | 1994-10-13 | 2000-07-04 | Vinca Corporation | Comparing mass storage devices through digests that are representative of stored data in order to minimize data transfer |
US6092215A (en) * | 1997-09-29 | 2000-07-18 | International Business Machines Corporation | System and method for reconstructing data in a storage array system |
US20010010070A1 (en) * | 1998-08-13 | 2001-07-26 | Crockett Robert Nelson | System and method for dynamically resynchronizing backup data |
US6269381B1 (en) * | 1998-06-30 | 2001-07-31 | Emc Corporation | Method and apparatus for backing up data before updating the data and for restoring from the backups |
US20020059505A1 (en) * | 1998-06-30 | 2002-05-16 | St. Pierre Edgar J. | Method and apparatus for differential backup in a computer storage system |
US6463573B1 (en) * | 1999-06-03 | 2002-10-08 | International Business Machines Corporation | Data processor storage systems with dynamic resynchronization of mirrored logical data volumes subsequent to a storage system failure |
US6543004B1 (en) * | 1999-07-29 | 2003-04-01 | Hewlett-Packard Development Company, L.P. | Method and apparatus for archiving and restoring data |
US6654912B1 (en) * | 2000-10-04 | 2003-11-25 | Network Appliance, Inc. | Recovery of file system data in file servers mirrored file system volumes |
US6662268B1 (en) * | 1999-09-02 | 2003-12-09 | International Business Machines Corporation | System and method for striped mirror re-synchronization by logical partition rather than stripe units |
US6671705B1 (en) * | 1999-08-17 | 2003-12-30 | Emc Corporation | Remote mirroring system, device, and method |
US20040073831A1 (en) * | 1993-04-23 | 2004-04-15 | Moshe Yanai | Remote data mirroring |
Family Cites Families (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US489781A (en) * | 1893-01-10 | William w | ||
US4761785B1 (en) | 1986-06-12 | 1996-03-12 | Ibm | Parity spreading to enhance storage access |
US4897781A (en) | 1987-02-13 | 1990-01-30 | International Business Machines Corporation | System and method for using cached data at a local node after re-opening a file at a remote node in a distributed networking environment |
US4875159A (en) | 1987-12-22 | 1989-10-17 | Amdahl Corporation | Version management system using plural control fields for synchronizing two versions of files in a multiprocessor system |
US4937763A (en) | 1988-09-06 | 1990-06-26 | E I International, Inc. | Method of system state analysis |
US5067099A (en) | 1988-11-03 | 1991-11-19 | Allied-Signal Inc. | Methods and apparatus for monitoring system performance |
US5163148A (en) | 1989-08-11 | 1992-11-10 | Digital Equipment Corporation | File backup system for producing a backup copy of a file which may be updated during backup |
US5163131A (en) | 1989-09-08 | 1992-11-10 | Auspex Systems, Inc. | Parallel i/o network file server architecture |
US5276867A (en) | 1989-12-19 | 1994-01-04 | Epoch Systems, Inc. | Digital data storage system with improved data migration |
JPH0731582B2 (en) | 1990-06-21 | 1995-04-10 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Method and apparatus for recovering parity protected data |
US5208813A (en) | 1990-10-23 | 1993-05-04 | Array Technology Corporation | On-line reconstruction of a failed redundant array system |
JP2603757B2 (en) | 1990-11-30 | 1997-04-23 | 富士通株式会社 | Method of controlling array disk device |
US5235601A (en) | 1990-12-21 | 1993-08-10 | Array Technology Corporation | On-line restoration of redundancy information in a redundant array system |
US5369757A (en) * | 1991-06-18 | 1994-11-29 | Digital Equipment Corporation | Recovery logging in the presence of snapshot files by ordering of buffer pool flushing |
US5321837A (en) | 1991-10-11 | 1994-06-14 | International Business Machines Corporation | Event handling mechanism having a process and an action association process |
US5313626A (en) | 1991-12-17 | 1994-05-17 | Jones Craig S | Disk drive array with efficient background rebuilding |
US5442752A (en) | 1992-01-24 | 1995-08-15 | International Business Machines Corporation | Data storage method for DASD arrays using striping based on file length |
US5305326A (en) | 1992-03-06 | 1994-04-19 | Data General Corporation | High availability disk arrays |
US5335235A (en) | 1992-07-07 | 1994-08-02 | Digital Equipment Corporation | FIFO based parity generator |
US5963962A (en) | 1995-05-31 | 1999-10-05 | Network Appliance, Inc. | Write anywhere file-system layout |
DE69431186T2 (en) | 1993-06-03 | 2003-05-08 | Network Appliance Inc | Method and file system for assigning file blocks to storage space in a RAID disk system |
US6604118B2 (en) | 1998-07-31 | 2003-08-05 | Network Appliance, Inc. | File system image transfer |
WO1994029795A1 (en) | 1993-06-04 | 1994-12-22 | Network Appliance Corporation | A method for providing parity in a raid sub-system using a non-volatile memory |
DE69413977T2 (en) | 1993-07-01 | 1999-03-18 | Legent Corp | ARRANGEMENT AND METHOD FOR DISTRIBUTED DATA MANAGEMENT IN NETWORKED COMPUTER SYSTEMS |
KR0128271B1 (en) * | 1994-02-22 | 1998-04-15 | 윌리암 티. 엘리스 | Remote data duplexing |
US5649152A (en) | 1994-10-13 | 1997-07-15 | Vinca Corporation | Method and system for providing a static snapshot of data stored on a mass storage system |
US5604862A (en) * | 1995-03-14 | 1997-02-18 | Network Integrity, Inc. | Continuously-snapshotted protection of computer files |
US5666353A (en) | 1995-03-21 | 1997-09-09 | Cisco Systems, Inc. | Frame based traffic policing for a digital switch |
US6453325B1 (en) * | 1995-05-24 | 2002-09-17 | International Business Machines Corporation | Method and means for backup and restoration of a database system linked to a system for filing data |
US5907672A (en) | 1995-10-04 | 1999-05-25 | Stac, Inc. | System for backing up computer disk volumes with error remapping of flawed memory addresses |
US5819310A (en) | 1996-05-24 | 1998-10-06 | Emc Corporation | Method and apparatus for reading data from mirrored logical volumes on physical disk drives |
US5857208A (en) * | 1996-05-31 | 1999-01-05 | Emc Corporation | Method and apparatus for performing point in time backup operation in a computer system |
US5996106A (en) | 1997-02-04 | 1999-11-30 | Micron Technology, Inc. | Multi bank test mode for memory devices |
US5873101A (en) | 1997-02-10 | 1999-02-16 | Oracle Corporation | Database backup/restore and bulk data transfer |
US5895495A (en) * | 1997-03-13 | 1999-04-20 | International Business Machines Corporation | Demand-based larx-reserve protocol for SMP system buses |
US6490610B1 (en) * | 1997-05-30 | 2002-12-03 | Oracle Corporation | Automatic failover for clients accessing a resource through a server |
US5996086A (en) | 1997-10-14 | 1999-11-30 | Lsi Logic Corporation | Context-based failover architecture for redundant servers |
US6101585A (en) | 1997-11-04 | 2000-08-08 | Adaptec, Inc. | Mechanism for incremental backup of on-line files |
US6212531B1 (en) * | 1998-01-13 | 2001-04-03 | International Business Machines Corporation | Method for implementing point-in-time copy using a snapshot function |
US6360330B1 (en) * | 1998-03-31 | 2002-03-19 | Emc Corporation | System and method for backing up data stored in multiple mirrors on a mass storage subsystem under control of a backup server |
WO1999063441A1 (en) * | 1998-06-05 | 1999-12-09 | Mylex Corporation | Snapshot backup strategy |
US6279011B1 (en) | 1998-06-19 | 2001-08-21 | Network Appliance, Inc. | Backup and restore for heterogeneous file server environment |
US6574591B1 (en) | 1998-07-31 | 2003-06-03 | Network Appliance, Inc. | File systems image transfer between dissimilar file systems |
US6119244A (en) | 1998-08-25 | 2000-09-12 | Network Appliance, Inc. | Coordinating persistent status information with multiple file servers |
US6397307B2 (en) * | 1999-02-23 | 2002-05-28 | Legato Systems, Inc. | Method and system for mirroring and archiving mass storage |
KR100382851B1 (en) * | 1999-03-31 | 2003-05-09 | 인터내셔널 비지네스 머신즈 코포레이션 | A method and apparatus for managing client computers in a distributed data processing system |
US6529921B1 (en) * | 1999-06-29 | 2003-03-04 | Microsoft Corporation | Dynamic synchronization of tables |
US6591377B1 (en) * | 1999-11-24 | 2003-07-08 | Unisys Corporation | Method for comparing system states at different points in time |
US6715034B1 (en) | 1999-12-13 | 2004-03-30 | Network Appliance, Inc. | Switching file system request in a mass storage system |
US6341341B1 (en) * | 1999-12-16 | 2002-01-22 | Adaptec, Inc. | System and method for disk control with snapshot feature including read-write snapshot half |
US6708227B1 (en) * | 2000-04-24 | 2004-03-16 | Microsoft Corporation | Method and system for providing common coordination and administration of multiple snapshot providers |
US6978280B1 (en) * | 2000-10-12 | 2005-12-20 | Hewlett-Packard Development Company, L.P. | Method and system for improving LUN-based backup reliability |
US6877016B1 (en) * | 2001-09-13 | 2005-04-05 | Unisys Corporation | Method of capturing a physically consistent mirrored snapshot of an online database |
US6981114B1 (en) * | 2002-10-16 | 2005-12-27 | Veritas Operating Corporation | Snapshot reconstruction from an existing snapshot and one or more modification logs |
2000
- 2000-10-04 US US09/684,487 patent/US6654912B1/en not_active Expired - Lifetime
2001
- 2001-10-04 EP EP01979574A patent/EP1325415B1/en not_active Expired - Lifetime
- 2001-10-04 WO PCT/US2001/031422 patent/WO2002029572A2/en active IP Right Grant
- 2001-10-04 DE DE60112462T patent/DE60112462T2/en not_active Expired - Fee Related
2002
- 2002-05-23 US US10/154,414 patent/US20020194529A1/en not_active Abandoned
2003
- 2003-11-21 US US10/719,699 patent/US7096379B2/en not_active Expired - Fee Related
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5519844A (en) * | 1990-11-09 | 1996-05-21 | Emc Corporation | Logical partitioning of a redundant array storage system |
US20040073831A1 (en) * | 1993-04-23 | 2004-04-15 | Moshe Yanai | Remote data mirroring |
US5819292A (en) * | 1993-06-03 | 1998-10-06 | Network Appliance, Inc. | Method for maintaining consistent states of a file system and for creating user-accessible read-only copies of a file system |
US5479653A (en) * | 1994-07-14 | 1995-12-26 | Dellusa, L.P. | Disk array apparatus and method which supports compound raid configurations and spareless hot sparing |
US6085298A (en) * | 1994-10-13 | 2000-07-04 | Vinca Corporation | Comparing mass storage devices through digests that are representative of stored data in order to minimize data transfer |
US6023780A (en) * | 1996-05-13 | 2000-02-08 | Fujitsu Limited | Disc array apparatus checking and restructuring data read from attached disc drives |
US5960169A (en) * | 1997-02-27 | 1999-09-28 | International Business Machines Corporation | Transformational raid for hierarchical storage management system |
US6092215A (en) * | 1997-09-29 | 2000-07-18 | International Business Machines Corporation | System and method for reconstructing data in a storage array system |
US6269381B1 (en) * | 1998-06-30 | 2001-07-31 | Emc Corporation | Method and apparatus for backing up data before updating the data and for restoring from the backups |
US20020059505A1 (en) * | 1998-06-30 | 2002-05-16 | St. Pierre Edgar J. | Method and apparatus for differential backup in a computer storage system |
US20010010070A1 (en) * | 1998-08-13 | 2001-07-26 | Crockett Robert Nelson | System and method for dynamically resynchronizing backup data |
US6463573B1 (en) * | 1999-06-03 | 2002-10-08 | International Business Machines Corporation | Data processor storage systems with dynamic resynchronization of mirrored logical data volumes subsequent to a storage system failure |
US6543004B1 (en) * | 1999-07-29 | 2003-04-01 | Hewlett-Packard Development Company, L.P. | Method and apparatus for archiving and restoring data |
US6671705B1 (en) * | 1999-08-17 | 2003-12-30 | Emc Corporation | Remote mirroring system, device, and method |
US6662268B1 (en) * | 1999-09-02 | 2003-12-09 | International Business Machines Corporation | System and method for striped mirror re-synchronization by logical partition rather than stripe units |
US6654912B1 (en) * | 2000-10-04 | 2003-11-25 | Network Appliance, Inc. | Recovery of file system data in file servers mirrored file system volumes |
Cited By (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070180305A1 (en) * | 2003-01-31 | 2007-08-02 | Hitachi, Ltd. | Methods for Controlling Storage Devices Controlling Apparatuses |
US7596672B1 (en) | 2003-10-24 | 2009-09-29 | Network Appliance, Inc. | Synchronous mirroring including writing image updates to a file |
US7200726B1 (en) | 2003-10-24 | 2007-04-03 | Network Appliance, Inc. | Method and apparatus for reducing network traffic during mass storage synchronization phase of synchronous data mirroring |
US7203796B1 (en) | 2003-10-24 | 2007-04-10 | Network Appliance, Inc. | Method and apparatus for synchronous data mirroring |
US7325109B1 (en) * | 2003-10-24 | 2008-01-29 | Network Appliance, Inc. | Method and apparatus to mirror data at two separate sites without comparing the data at the two sites |
US7162662B1 (en) * | 2003-12-23 | 2007-01-09 | Network Appliance, Inc. | System and method for fault-tolerant synchronization of replica updates for fixed persistent consistency point image consumption |
US7363537B1 (en) * | 2003-12-23 | 2008-04-22 | Network Appliance, Inc. | System and method for fault-tolerant synchronization of replica updates for fixed persistent consistency point image consumption |
US20060168624A1 (en) * | 2004-11-22 | 2006-07-27 | John Carney | Method and system for delivering enhanced TV content |
US20060136771A1 (en) * | 2004-12-06 | 2006-06-22 | Hitachi, Ltd. | Storage system and snapshot data preparation method in storage system |
US8095822B2 (en) | 2004-12-06 | 2012-01-10 | Hitachi, Ltd. | Storage system and snapshot data preparation method in storage system |
US7536592B2 (en) * | 2004-12-06 | 2009-05-19 | Hitachi, Ltd. | Storage system and snapshot data preparation method in storage system |
US20090216977A1 (en) * | 2004-12-06 | 2009-08-27 | Hitachi, Ltd. | Storage System and Snapshot Data Preparation Method in Storage System |
US7707165B1 (en) * | 2004-12-09 | 2010-04-27 | Netapp, Inc. | System and method for managing data versions in a file system |
US9344112B2 (en) | 2006-04-28 | 2016-05-17 | Ling Zheng | Sampling based elimination of duplicate data |
US20070255758A1 (en) * | 2006-04-28 | 2007-11-01 | Ling Zheng | System and method for sampling based elimination of duplicate data |
US8165221B2 (en) | 2006-04-28 | 2012-04-24 | Netapp, Inc. | System and method for sampling based elimination of duplicate data |
US8296260B2 (en) | 2006-06-29 | 2012-10-23 | Netapp, Inc. | System and method for managing data deduplication of storage systems utilizing persistent consistency point images |
US20080005141A1 (en) * | 2006-06-29 | 2008-01-03 | Ling Zheng | System and method for retrieving and using block fingerprints for data deduplication |
US8412682B2 (en) * | 2006-06-29 | 2013-04-02 | Netapp, Inc. | System and method for retrieving and using block fingerprints for data deduplication |
US20080005201A1 (en) * | 2006-06-29 | 2008-01-03 | Daniel Ting | System and method for managing data deduplication of storage systems utilizing persistent consistency point images |
US20110035357A1 (en) * | 2006-06-29 | 2011-02-10 | Daniel Ting | System and method for managing data deduplication of storage systems utilizing persistent consistency point images |
US7921077B2 (en) | 2006-06-29 | 2011-04-05 | Netapp, Inc. | System and method for managing data deduplication of storage systems utilizing persistent consistency point images |
US7747584B1 (en) | 2006-08-22 | 2010-06-29 | Netapp, Inc. | System and method for enabling de-duplication in a storage system architecture |
US7853750B2 (en) | 2007-01-30 | 2010-12-14 | Netapp, Inc. | Method and an apparatus to store data patterns |
US20080184001A1 (en) * | 2007-01-30 | 2008-07-31 | Network Appliance, Inc. | Method and an apparatus to store data patterns |
US8001307B1 (en) | 2007-04-27 | 2011-08-16 | Network Appliance, Inc. | Apparatus and a method to eliminate deadlock in a bi-directionally mirrored data storage system |
US9069787B2 (en) | 2007-05-31 | 2015-06-30 | Netapp, Inc. | System and method for accelerating anchor point detection |
US20080301134A1 (en) * | 2007-05-31 | 2008-12-04 | Miller Steven C | System and method for accelerating anchor point detection |
US8762345B2 (en) | 2007-05-31 | 2014-06-24 | Netapp, Inc. | System and method for accelerating anchor point detection |
US20080313496A1 (en) * | 2007-06-12 | 2008-12-18 | Microsoft Corporation | Gracefully degradable versioned storage systems |
US7849354B2 (en) * | 2007-06-12 | 2010-12-07 | Microsoft Corporation | Gracefully degradable versioned storage systems |
US8793226B1 (en) | 2007-08-28 | 2014-07-29 | Netapp, Inc. | System and method for estimating duplicate data |
US7941691B2 (en) * | 2008-05-28 | 2011-05-10 | Fujitsu Limited | Control of connecting apparatuses in information processing system |
US20090299492A1 (en) * | 2008-05-28 | 2009-12-03 | Fujitsu Limited | Control of connecting apparatuses in information processing system |
US8250043B2 (en) | 2008-08-19 | 2012-08-21 | Netapp, Inc. | System and method for compression of partially ordered data sets |
US20100049726A1 (en) * | 2008-08-19 | 2010-02-25 | Netapp, Inc. | System and method for compression of partially ordered data sets |
US10142121B2 (en) | 2011-12-07 | 2018-11-27 | Comcast Cable Communications, Llc | Providing synchronous content and supplemental experiences |
US10848333B2 (en) | 2011-12-07 | 2020-11-24 | Comcast Cable Communications, Llc | Providing synchronous content and supplemental experiences |
US11711231B2 (en) | 2011-12-07 | 2023-07-25 | Comcast Cable Communications, Llc | Providing synchronous content and supplemental experiences |
US20150039572A1 (en) * | 2012-03-01 | 2015-02-05 | Netapp, Inc. | System and method for removing overlapping ranges from a flat sorted data structure |
US9720928B2 (en) * | 2012-03-01 | 2017-08-01 | Netapp, Inc. | Removing overlapping ranges from a flat sorted data structure |
US20240143215A1 (en) * | 2022-10-28 | 2024-05-02 | Netapp, Inc. | Fast resynchronization of a mirrored aggregate using disk-level cloning |
Also Published As
Publication number | Publication date |
---|---|
WO2002029572A3 (en) | 2003-01-09 |
US6654912B1 (en) | 2003-11-25 |
EP1325415A2 (en) | 2003-07-09 |
US20040153736A1 (en) | 2004-08-05 |
US7096379B2 (en) | 2006-08-22 |
EP1325415B1 (en) | 2005-08-03 |
WO2002029572A9 (en) | 2003-11-13 |
WO2002029572A8 (en) | 2002-09-12 |
WO2002029572B1 (en) | 2003-04-24 |
DE60112462D1 (en) | 2005-09-08 |
WO2002029572A2 (en) | 2002-04-11 |
DE60112462T2 (en) | 2006-04-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7143249B2 (en) | Resynchronization of mirrored storage devices | |
US20020194529A1 (en) | Resynchronization of mirrored storage devices | |
US5682513A (en) | Cache queue entry linking for DASD record updates | |
US7634594B1 (en) | System and method for identifying block-level write operations to be transferred to a secondary site during replication | |
US7415488B1 (en) | System and method for redundant storage consistency recovery | |
US6934725B1 (en) | Management of file extent mapping to hasten mirror breaking in file level mirrored backups | |
US7337288B2 (en) | Instant refresh of a data volume copy | |
JP4454342B2 (en) | Storage system and storage system control method | |
US6678809B1 (en) | Write-ahead log in directory management for concurrent I/O access for block storage | |
US7478263B1 (en) | System and method for establishing bi-directional failover in a two node cluster | |
US7904684B2 (en) | System and article of manufacture for consistent copying of storage volumes | |
US6035412A (en) | RDF-based and MMF-based backups | |
US7383407B1 (en) | Synchronous replication for system and data security | |
US6366986B1 (en) | Method and apparatus for differential backup in a computer storage system | |
US7089385B1 (en) | Tracking in-progress writes through use of multi-column bitmaps | |
US7194487B1 (en) | System and method for recording the order of a change caused by restoring a primary volume during ongoing replication of the primary volume | |
US6981114B1 (en) | Snapshot reconstruction from an existing snapshot and one or more modification logs | |
US6553389B1 (en) | Resource availability determination mechanism for distributed data storage system | |
US6832330B1 (en) | Reversible mirrored restore of an enterprise level primary disk | |
US8200631B2 (en) | Snapshot reset method and apparatus | |
US20040254964A1 (en) | Data replication with rollback | |
US20030065780A1 (en) | Data storage system having data restore by swapping logical units | |
US7617259B1 (en) | System and method for managing redundant storage consistency at a file system level | |
US7424497B1 (en) | Technique for accelerating the creation of a point in time prepresentation of a virtual file system | |
US20070277012A1 (en) | Method and apparatus for managing backup data and journal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: NETWORK APPLIANCE, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DOUCETTE, DOUGLAS P.;STRANGE, STEPHEN H.;VISWANATHAN, SRINIVASAN;AND OTHERS;REEL/FRAME:013198/0401;SIGNING DATES FROM 20020730 TO 20020812
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION