EP1423770A2 - Asynchronous mirroring in a storage area network - Google Patents

Asynchronous mirroring in a storage area network

Info

Publication number
EP1423770A2
Authority
EP
European Patent Office
Prior art keywords
volume
storage device
mirroring
remote
data object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP02760525A
Other languages
German (de)
English (en)
French (fr)
Inventor
Nelson Nahum
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LSI Technologies Israel Ltd
Original Assignee
StoreAge Networking Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by StoreAge Networking Technology Ltd filed Critical StoreAge Networking Technology Ltd
Publication of EP1423770A2
Legal status: Withdrawn

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056 Error detection or correction of the data by redundancy in hardware using active fault-masking, where persistent mass storage functionality or persistent mass storage control functionality is redundant, by mirroring
    • G06F 11/2071 Error detection or correction of the data by redundancy in hardware using active fault-masking, where persistent mass storage functionality or persistent mass storage control functionality is redundant, by mirroring using a plurality of controllers
    • G06F 11/2074 Asynchronous techniques
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Definitions

  • the invention relates in general to the field of mirroring, or data replication, and in particular, to the asynchronous mirroring of data objects between storage devices coupled to a Storage Area Network (SAN) or to a network connectivity in general.
  • SAN Storage Area Network
  • a selected data object is a single data object, or a plurality, or a group, of data objects.
  • a data object is a volume, a logical or virtual volume, a data file, or any data structure.
  • the terms data object and volume are used interchangeably below.
  • the term "local” is used to indicate origin, such as for a local storage device.
  • the term “remote” is used to indicate destination, such as for a remote storage device.
  • Storage devices are magnetic disks, optical disks, RAIDS, and JBODS.
  • the storage space for a data object may span only a part, or the whole, or more than the whole space contents of a storage device.
  • a computing facility or processing facility is a computer processor, a host, a server, a PC, and also a storage switch or network switch, a storage router or network router, or a storage controller.
  • a computing facility may operate with a RAM for running computer programs, or operate with a memory and computer programs stored on magnetic or other storage means.
  • a network connectivity is a Local Area Network (LAN), a Wide Area Network (WAN), or a Storage Area Network (SAN).
  • LAN Local Area Network
  • WAN Wide Area Network
  • SAN Storage Area Network
  • Prior art direct access storage systems that perform remote mirroring and storage from one storage device to a second storage device, such as from a local storage device to a remote storage device, impose requirements that are hard to meet, some examples of which are described below.
  • the disclosure presents a method to be implemented as a system to achieve mirroring, or replication, of a selected data object from a local storage device, to a remote storage device, by sequential freeze and copy of discrete blocks of data.
  • the selected data object may be used uninterruptedly, since mirroring is transparent to the operating system. Copying of the successive discrete blocks of data is performed asynchronously and in the background.
  • the at least one local storage device is coupled to a first processing facility (HL), and the at least one remote storage device is coupled to a second processing facility (HR).
  • the at least one local storage device, the at least one remote storage device, the first and the second processing facility are coupled to a network connectivity comprising pluralities of users, of processing facilities and of storage devices.
  • the method and the system comprise: running a mirroring functionality in the first and in the second processing facility, the mirroring functionality comprising: a freeze procedure for freezing the selected data object, a copy procedure for copying the frozen selected data object into the at least one remote storage device, permitting use and updating of the selected data object in parallel to running the mirroring functionality, and commanding, by default, repeated run of the mirroring functionality for copying updates to the selected data object, unless receiving command for mirroring break, whereby the selected data object residing in the at least one local storage device is copied and sequentially updated into the at least one remote storage device.
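
A minimal sketch of that control flow follows (Python; freeze, copy_to_remote and break_received are hypothetical helper callables, since the text specifies the steps rather than an API):

```python
def mirroring_functionality(freeze, copy_to_remote, break_received):
    """Skeleton of the described loop: freeze the selected data object,
    copy the frozen image to the remote storage device, and repeat by
    default until a mirroring break is commanded."""
    while True:
        frozen_image = freeze()          # freeze procedure
        copy_to_remote(frozen_image)     # copy procedure, run in the background
        # The data object remains usable in parallel: updates are
        # redirected to a local auxiliary volume, not to the frozen image.
        if break_received():             # mirroring break commanded?
            break                        # otherwise repeat, copying the updates
```
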
  • SV source volume
  • AVL local auxiliary volume
  • the mirroring functionality is applied simultaneously to more than one data object, and from at least one local storage device to at least one remote storage device, and vice versa.
  • Fig. 1 is an example of a network connectivity
  • Fig. 2 presents the freeze procedure
  • Fig. 3 is a flowchart for sorting between various types of I/O READ and I/O WRITE instructions
  • Fig. 4 illustrates the procedure for an I/O READ instruction addressed to the source volume SV after start of the freeze procedure
  • Fig. 5 shows steps for the processing of an I/O WRITE command containing data updated after the freeze command
  • Fig. 6 exhibits the steps for an I/O WRITE instruction, for data unaltered since freeze time
  • Fig. 7 provides a general overview of the mechanisms of the mirroring functionality
  • a virtual volume of such a virtualized SAN may contain a group of data objects, a plurality of local storage devices, and a plurality of remote storage devices.
  • a selected data object is frozen by a freeze procedure, for example as a source volume.
  • a first local auxiliary volume is created in the local storage device and a first remote volume, of the same size as the frozen source volume, is created in the remote storage device. Since the source volume is and must remain frozen, it may not incur changes, but it may be copied by the copy procedure to the remote storage device.
  • the selected data object may be used during mirroring.
  • the Operating System O.S. creates a resulting source volume comprising both the frozen selected data object and the first local auxiliary volume.
  • the resulting source volume is accessible to the I/O Read and I/O Write operations.
  • Only read operations are permitted to the frozen source volume, while the write updates to the selected data object are redirected to the first local auxiliary volume.
  • the freeze and copy procedures are repeated.
  • the first local auxiliary volume in the resulting source volume is now frozen, and simultaneously, a second local auxiliary volume and a second remote volume are created.
  • the second local auxiliary volume is added to the previously created resulting source volume, to form a new resulting source volume for use by the Operating System O.S.
  • the frozen first local auxiliary volume is copied to the second remote volume.
  • the data object may be used with the previous resulting source volume to which the last frozen local auxiliary volume is added to form a last resulting source volume.
  • the mirroring functionality performs successive freeze and copy procedures to replicate one, or a group of data object(s), from one or more local storage device(s), to one or more other, or remote, storage device(s).
  • a singular case relates to the mirroring of a selected data object consisting of only a single data object, residing in one local storage device, to but one remote storage device.
  • the mirroring functionality is operable to perform more than one mirroring operation simultaneously. For example, two different data objects, each one residing in say, a different volume in a different local storage device, are possibly mirrored to two different remote storage devices.
  • simultaneous mirroring is not limited to two selected data objects.
  • the mirroring functionality is also capable of cross mirroring which, in parallel to the last example, mirrors two different data objects in opposite directions: the one residing in the local storage device is mirrored to the remote storage device, and the one residing in the remote storage device is mirrored to the local storage device.
  • Cross mirroring is not restricted to simultaneous mirroring of two selected data objects.
  • the mirroring functionality achieves mirroring of groups of data objects, from several local storage devices to several remote storage devices, as well as two directional cross mirroring.
  • a mirroring overview table presents mirroring options I to VI inclusive, for direct mirroring, to which cross-mirroring must be added for all the options I to VI.
  • Fig. 1 of the co-pending patent application PCT/IL00/00309, entitled "Storage Virtualization in a Storage Network", by the same applicant, incorporated herewith by reference in whole, cited below as the '309 patent.
  • Fig. 1 in the present application depicting a network connectivity NET.
  • computing facilities such as hosts, or servers H, or processors
  • storage devices SD such as Hard Disks HD.
  • mirroring may take place from one local storage device to another remote storage device controlled by a second, or remote processing facility.
  • a host H4 may command mirroring from a storage device SDA to a storage device SDB, controlled by another processing facility H3.
  • the host H1 may control mirroring from a first hard disk HD1 to a second hard disk HD2 coupled to a processor H2.
  • the host H2 may command mirroring from a first hard disk HD2 to a second hard disk HD3 or another hard disk HD4.
  • Mirroring of a selected data object residing in more than one storage device may be effected to one or more storage devices.
  • the minimum requirements are for two processing facilities and for at least two storage devices on the network connectivity: one local storage device for copying from and one remote storage device for writing thereto.
  • the mirroring of a data object from one storage device to another storage device requires the application of successive freeze and copy procedures.
  • the operation of a network connectivity must not be hampered while mirroring. Therefore, the description below illustrates first the freeze procedure, then the operation of the system while the freeze procedure is running, and last, the copy procedure.
  • A graphical illustration of the freeze procedure is depicted in Fig. 2, in stages 2a to 2d.
  • the mirroring functionality operates on at least two processing facilities, such as a first and a second processing facility, respectively HL and HR, coupled to a network connectivity NET.
  • the at least one remote storage device SDRx may thus consist of a first remote storage device SDR1, a second remote storage device SDR2, and so on.
  • both the local and the remote storage devices may reside, say, inside the same or in different storage device(s) coupled to a SAN, or to a host H, the different storage devices being adjacent or each on an opposite side of the globe. Copy is made from the local storage device to one or more remote storage device(s). Any storage device may be designated with either name, but there is only one local storage device when mirroring therefrom.
  • the mirroring functionality, which contains both the freeze procedure and the copy procedure, receives indication of the data object selected to be frozen.
  • the freeze procedure receives a request to freeze a selected data object as a source volume SV.
  • the "frozen" source volume SV is thus restricted to "read only", which does not alter the contents of the source volume.
  • the frozen source volume SV may now be copied, as will be described below.
  • WRITE operations directed by the local processing facility HL to that frozen source volume are redirected by the mirroring functionality to the local auxiliary volume 1 AVL1 residing in the resulting source volume.
  • Read operations are thus permitted as long as they concern an original unaltered portion of the contents of the frozen source volume SV.
  • Write operations to the frozen source volume SV are redirected to the local auxiliary volume 1, since otherwise, they would effect changes to the contents of the frozen source volume SV.
  • the mirroring functionality, and thus the freeze procedure, resides in both the local and the remote processing facilities, and is enabled to intercept I/O commands directed to the frozen data object, as will be described below with respect to the operation of the system.
  • WRITE operations diverted to the local auxiliary volume 1 AVL1 are defined as updates. It is noted that a local auxiliary volume remains operative from the time of creation until the time a next freeze is taken. In other words: until a next local auxiliary volume is created. Furthermore, the performance of the processing facilities involved is only slightly affected by the freeze functionality, which deals only with routing instructions, i.e. the redirection of I/O READ or I/O WRITE instructions.
  • a next freeze is performed and applied to the local auxiliary volume 1 AVL1.
  • a new local auxiliary volume 2 AVL2 is created, in the same manner as described for the local auxiliary volume 1 AVL1.
  • a new resulting source volume is now made to comprise the previous resulting source volume with the addition of the local auxiliary volume 2 AVL2.
  • the updates contained in the frozen local auxiliary volume 1 AVL1 may now be copied, as will be described below. Again, the O.S. considers the last resulting source volume as the original source volume since the freeze operation is transparent.
  • the updates previously written into the frozen local auxiliary volume 2 AVL2 may now be copied.
  • the last created, or ultimate local auxiliary volume 3 AVL3 becomes part of the new and ultimate resulting source volume, together with the previous resulting source volume.
  • the local auxiliary volume 1 AVL1 is deleted, and thereby, storage space is saved, while the contents of the ultimate resulting source volume are kept unchanged.
  • the now frozen source volume is arbitrarily divided into sequentially numbered segments or chunks, of 1 MB for example, and these chunks are listed in a Freeze Table 1 created at freeze time within the local auxiliary volume 1 AVL1.
  • the total number of entries in the freeze table 1 is thus equal to the capacity of the frozen source volume SV, expressed in MB. If the division does not yield an integer, then the number of chunks listed in the freeze table is rounded up to the next integer.
  • the freeze table 1 resides in the local auxiliary volume 1 and is a tool for redirecting I/O instructions directed by the O.S. to the data object.
  • the I/O READ commands are separated into two categories.
  • a second category of READ instructions refers to data that underwent update by WRITE commands, which updates occurred after the freeze, and therefore, were routed to the local auxiliary volume 1.
  • a mapping table is required. For example, when the O.S. commands an I/O READ instruction on data that was updated after a freeze, the address of that data in the local auxiliary volume is needed.
  • Freeze Table 1 With reference to Freeze Table 1, there is shown a first left column with chunk numbers of the source volume SV and a second right column with an index pointing to the address where each chunk is mapped.
  • the chunk number 0 in the first line and left column of the Freeze Table 1 is indexed as -1 in the right column of that same first line.
  • the index -1 indicates original condition or lack of change since the last freeze.
  • the indices other than -1 redirect the I/O instructions to a specific address to be found in the ultimate local auxiliary volume. It is noted that the mechanism for routing I/O instructions to the frozen source volume SV and to the local auxiliary volume permits continuous unhampered use of the data object.
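
The mechanism can be illustrated with a toy model of the freeze table: one index per 1 MB chunk, -1 for chunks unaltered since the freeze, any other value the chunk's location in the local auxiliary volume. Names such as CHUNK and UNALTERED are illustrative, not from the patent:

```python
import math

CHUNK = 1 << 20      # 1 MB chunks, the size used in the patent's example
UNALTERED = -1       # index value meaning "unchanged since the last freeze"

def make_freeze_table(source_size_bytes: int) -> list[int]:
    """One entry per chunk of the frozen source volume SV; a division
    that does not yield an integer is rounded up to the next integer."""
    return [UNALTERED] * math.ceil(source_size_bytes / CHUNK)

def route(freeze_table: list[int], byte_address: int) -> str:
    """Redirect an I/O: -1 sends it to the frozen source volume, any
    other index is the chunk's address in the local auxiliary volume."""
    index = freeze_table[byte_address // CHUNK]
    return "source volume SV" if index == UNALTERED else f"auxiliary chunk {index}"

# A 2.5 MB source volume needs ceil(2.5) = 3 freeze-table entries.
table = make_freeze_table(int(2.5 * CHUNK))
assert route(table, 1_500_000) == "source volume SV"
```
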
  • The freeze procedure routes I/O instructions directed to the data object according to three different conditions. To keep the terms of the description simple, reference is made to only the first freeze, thus to one frozen source volume SV and to one first local auxiliary volume.
  • READ instructions are directed either to the source volume SV, if unaltered since freeze, or else, to the local auxiliary volume.
  • the O.S. waits for an I/O instruction in step D1, and when such an instruction is received, a test at step D2 differentiates between READ and WRITE instructions. For a READ instruction, thus for yes (Y), control is diverted to step D3, for further handling, as by step A1 in Fig. 4, described below.
  • For a WRITE instruction, control passes through step D4, handling WRITE I/O instructions, to step D5, which checks if there were prior updates or if this is the first WRITE after the freeze. If there were prior updates, control passes to step D6, to be handled by step B1 in Fig. 5.
  • Otherwise, control reaches step D7, which passes I/O WRITE instructions without prior update to step C1 in Fig. 6, described below.
  • Read instructions: Fig. 4 illustrates the procedure for an I/O READ instruction sent to the data object after freeze start.
  • the instruction received by the "Wait for I/O" first step A1 passes to step A2, where it is filtered in search of a READ instruction.
  • a WRITE instruction is diverted to step A3 for passage to step B1 in Fig. 5.
  • step A4 calculates the chunk number and searches for the index in the freeze table.
  • the chunk number is calculated by an integer division of the address by the chunk size of 1 MB. Since addresses are expressed in 512-byte sectors, the sector number is divided by 1 MB/512 = (1024 × 1024)/512 = 2048 sectors per chunk.
  • the result is forwarded to the following step A5.
  • the O.S. searches for the address(es) in the Freeze Table 1, across the calculated chunk number(s).
  • Step A5 differentiates between the index -1 designating data unaltered since freeze, and other indices. Zero and positive integer values indicate that the data reside in the local auxiliary volume.
  • If the index found in step A5 is -1, then the READ command is sent to step A6, to "Read from the source volume". Else, the READ command is directed to the address in the local auxiliary volume, as found in the Freeze Table 1, as per step A7. After completion, both steps A6 and A7 return control to the first step D1 in Fig. 3.
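
A sketch of this read path (steps A4 to A7) under the same toy model, with volumes held as byte buffers and the 512-byte sector arithmetic spelled out (CHUNK and UNALTERED as defined in the previous sketch):

```python
SECTOR = 512
SECTORS_PER_CHUNK = CHUNK // SECTOR   # 1 MB / 512 = 2048 sectors per chunk

def handle_read(sector_addr: int, freeze_table: list[int],
                source: bytearray, auxiliary: bytearray) -> bytes:
    """Steps A4-A7 for one sector: derive the chunk number, look up the
    freeze table, and read from the source volume (index -1) or from the
    mapped chunk of the local auxiliary volume (any other index)."""
    chunk_no = sector_addr // SECTORS_PER_CHUNK      # step A4: chunk number
    offset = sector_addr % SECTORS_PER_CHUNK         # sector within the chunk
    index = freeze_table[chunk_no]                   # step A5: table lookup
    if index == UNALTERED:                           # step A6: read from SV
        start = sector_addr * SECTOR
    else:                                            # step A7: read the update
        start = (index * SECTORS_PER_CHUNK + offset) * SECTOR
        source = auxiliary
    return bytes(source[start:start + SECTOR])
```
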
  • Write instructions: Fig. 5 shows steps for the processing of an I/O WRITE command to a chunk of the local auxiliary volume, which contains data updated after the freeze command.
  • the procedure waits to receive an I/O command that is then forwarded to the next step B2.
  • a filter at B2 checks whether the I/O command is a READ or a WRITE command;
  • An I/O READ command is routed to step B3 to be handled as an I/O READ command by step A1 in Fig. 4, but an I/O WRITE command is directed to step B4, where the chunk number is calculated by division, as explained above, for access to the Freeze Table 1. Should the WRITE command span more than one single chunk and cross chunk boundaries, then two or more chunk numbers are derived.
  • The one or more chunk numbers are passed to step B5, where the Freeze Table 1 is looked up to find the index number corresponding to the chunk(s) in question. If a value of -1 is found, then control is directed to step B6, to be handled as unaltered data residing in the source volume SV. In case a zero or positive index value is discovered in the Freeze Table 1, then by step B7, instructions are directed to the local auxiliary volume, for writing to the specified address. From steps B6 and B7, control returns to the I/O waiting step D1 in Fig. 3.
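
The chunk arithmetic for boundary-crossing WRITEs, and the step B5 routing decision, can be sketched as follows (again reusing the toy definitions above):

```python
def chunks_spanned(sector_addr: int, n_sectors: int) -> list[int]:
    """Step B4: a WRITE crossing one or more chunk boundaries yields
    two or more chunk numbers."""
    first = sector_addr // SECTORS_PER_CHUNK
    last = (sector_addr + n_sectors - 1) // SECTORS_PER_CHUNK
    return list(range(first, last + 1))

def dispatch_write(sector_addr: int, n_sectors: int,
                   freeze_table: list[int]) -> list[str]:
    """Step B5: look each spanned chunk up in the freeze table; -1 means
    the first-write path (step B6, handled by the C steps of Fig. 6),
    any other index a direct write into the auxiliary volume (step B7)."""
    return ["first write, C path (B6)" if freeze_table[c] == UNALTERED
            else f"write to auxiliary chunk {freeze_table[c]} (B7)"
            for c in chunks_spanned(sector_addr, n_sectors)]
```
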
  • the first step C1 is a "Wait for I/O" instruction that, once received, leads to step C2, acting as a "Write I/O" filter. If the received I/O instruction is not a "Write I/O", then control is passed to step C3 to be handled as a "Read I/O" as by step A1 in Fig. 4. Otherwise, for a write instruction, the chunk number is calculated in step C4. I/O commands crossing the boundary of a chunk are also dealt with, resulting in at least two chunk numbers.
  • step C5 uses the calculated chunk number to search the freeze table and differentiate between unaltered data and updated data. In the latter case, control passes to step C6, where the I/O is directed for handling as a previously updated Write I/O command by step B1 in Fig. 5.
  • In step C7, a search is made for a first free chunk in the local auxiliary volume.
  • the index opposite the chunk number calculated in step C4 is altered, to indicate no longer -1, but the address in the local auxiliary volume.
  • before the update is written, however, the single or more chunks must first be copied from the source volume SV to the local auxiliary volume.
  • Control next passes from step C7 to step C8, where a check is performed to find out whether there is need for more storage space in the local auxiliary volume. In a SAN supporting virtualization, a request for more storage space is forwarded to the virtualization appliance, to grant storage space expansion to the local auxiliary volume, as in step C9.
  • a storage allocation program run by the O.S. of the local host HL handles additional storage space.
  • Control passes from either step C8, not requesting additional storage space, or from step C9 after expansion of storage space, to step C10, where the complete chunk is copied from the source volume SV to the local auxiliary volume. Once this is completed, control passes to step C11.
  • In step C11, the freeze table 1 is updated: opposite the chunk number calculated in step C4, instead of the value -1, the address in the local auxiliary volume is entered. From step C11, control returns to step B1 in Fig. 5, via step C6.
  • the local auxiliary volume has, at most, the same number of chunks as the source volume SV. This last case happens when all the chunks, or segments, of the source volume SV are written to. I/O WRITE instruction updates to the same chunk of the source volume SV overwrite previous WRITE commands, which are then lost.
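
The first-write path amounts to a copy-on-write; the sketch below condenses steps C7 to C11 under the same toy model, with grow_auxiliary standing in for the step C9 expansion request (whether served by a virtualization appliance or by the local host's storage allocation program):

```python
def first_write_to_chunk(chunk_no: int, freeze_table: list[int],
                         source: bytearray, auxiliary: bytearray,
                         free_chunks: list[int]) -> None:
    """Steps C7-C11: allocate the first free chunk of the local auxiliary
    volume, copy the complete chunk from the source volume before it is
    overwritten, and record the new mapping in the freeze table; the
    caller then re-issues the WRITE as a previously-updated write."""
    if not free_chunks:                         # step C8: more space needed?
        grow_auxiliary(auxiliary, free_chunks)  # step C9: expansion request
    aux_chunk = free_chunks.pop(0)              # step C7: first free chunk
    src, dst = chunk_no * CHUNK, aux_chunk * CHUNK
    auxiliary[dst:dst + CHUNK] = source[src:src + CHUNK]   # step C10
    freeze_table[chunk_no] = aux_chunk                     # step C11: replace -1

def grow_auxiliary(auxiliary: bytearray, free_chunks: list[int]) -> None:
    """Illustrative stand-in for step C9: extend the volume by one chunk."""
    free_chunks.append(len(auxiliary) // CHUNK)
    auxiliary.extend(bytearray(CHUNK))
```
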
  • the mirroring functionality may thus command to copy the frozen source volume SV, from the storage device of origin wherein it resides, defined as a local storage device, to any other storage device, which is referred to as a remote storage device.
  • the remote storage device is possibly another storage device at the same site, or at a remote site, or consists of many remote storage devices at a plurality of sites. The remote storage device may even be selected as the same storage device where the source volume SV is saved.
  • the mirroring functionality may be repeated sequentially, or may be stopped after any freeze and copy cycle.
  • Copying from the frozen source volume SV to the remote storage device does not impose a load on the processing facility resources, or slow down communications, or otherwise interfere with the operation of the processing facility, since only freeze and copy procedures are required.
  • Fig. 7 serves as a general overview, while a more detailed description is provided with reference to Fig. 8.
  • the left column relates to the local storage device SDL wherein a data object resides in the source volume SV, and the abscissa displays a time axis t.
  • the right column indicates events occurring in parallel to those at the local storage device, and depicts the process at the remote storage device SDRx, where x ∈ {1, 2, ..., n} is chosen out of the at least one available storage device.
  • the denomination "the remote storage device SDRx" is used below in the sense of at least one storage device.
  • Stage 7A in Fig. 7 shows the situation prior to mirroring.
  • a first local auxiliary volume 1 AVL1 is created in the local storage device SDL, whereto updates to the data object are now directed.
  • the updates are those I/O WRITE instructions from the computing facility HL that are redirected to the local auxiliary volume.
  • a first remote volume RVx/1 is created in the remote storage device SDRx, in the right column of Fig. 7, with the same size as the source volume SV.
  • the frozen source volume SV is copied, in the background, and written to the remote volume RVx/1.
  • the freeze procedure divides a frozen data object into chunks of, e.g., 1 MB.
  • a freeze table is also created therein, to relate between the source volume and the updates.
  • the freeze table redirects I/O instructions from the data object to the local auxiliary volume, when necessary.
  • the first local auxiliary volume AVL1 is frozen and a second remote volume RVx/2 is created in the remote storage device SDRx, in the right column, with the same size as the first local auxiliary volume AVL1.
  • a second local auxiliary volume 2 AVL2 is created in the local storage device SDL whereto updates to the data object are directed.
  • a freeze table is automatically created by the freeze procedure, to reside in each local auxiliary volume, to the advantage of the O.S.
  • the first local auxiliary volume AVL1, including the freeze table for the benefit of the second computing facility HR, is copied and written to the second remote volume RVx/2.
  • a new resulting source volume is created together with a new freeze table.
  • the new resulting source volume consists of the previous resulting source volume to which is added the second local auxiliary volume AVL2.
  • the O.S. may thus communicate with the new resulting source volume to use the data object in parallel to mirroring.
  • the local storage device SDL contains the source volume SV, the first local auxiliary volume AVL1 and the second local auxiliary volume AVL2.
  • the remote storage device SDRx contains the first and the second remote volumes.
  • the frozen volumes, namely the source volume SV and the first local auxiliary volume AVL1, are synchronized, whereby the updates previously written into the first local auxiliary volume AVL1 are entered into the source volume SV.
  • the freeze table residing in the first local auxiliary volume AVL1 is used for correctly synchronizing the updates.
  • the first local auxiliary volume AVL1, which contains at most as many chunks or segments as the source volume SV, is copied to overwrite the contents of the source volume SV, which retains its original size.
  • the first local auxiliary volume AVL1 is now deleted.
  • the indices opposite the chunk numbers in the freeze table residing in the second local auxiliary volume AVL2 are set to index values of -1, to reflect the status of the synchronized volumes.
  • the second remote volume RVx/2 is synchronized into the first volume RVx/1, which retains the same size as the source volume SV. Synchronization at the remote storage device is performed by the second processing facility HR using the freeze table copied thereto together with the last copied local auxiliary volume. The second remote volume RVx/2 may now be deleted.
  • Synchronization limits the required storage space in both the local storage device SDL and the remote storage device SDRx, by deleting the local auxiliary volume and the remote volume that now becomes unnecessary.
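
The local half of this synchronization can be sketched as follows, reusing the toy definitions above; the freeze table of the frozen auxiliary volume drives the merge, after which that volume is dropped. The remote half proceeds in the same way on the copied volumes, using the freeze table copied along with them:

```python
def synchronize(source: bytearray, auxiliary: bytearray,
                freeze_table: list[int]) -> None:
    """Stage 7D, local side: fold every updated chunk of the frozen
    auxiliary volume AVL1 back into the source volume SV, guided by the
    freeze table, then delete AVL1 to reclaim storage space. The source
    volume keeps its original size (the toy model assumes volumes sized
    in whole chunks)."""
    for chunk_no, index in enumerate(freeze_table):
        if index != UNALTERED:                 # chunk was updated after the freeze
            dst, src = chunk_no * CHUNK, index * CHUNK
            source[dst:dst + CHUNK] = auxiliary[src:src + CHUNK]
    auxiliary.clear()                          # AVL1 is deleted, space is saved
```
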
  • Stage 7E is another freeze stage, equivalent to stages 7B and 7C.
  • a third local auxiliary volume AVL3 is created in the local storage device SDL.
  • a third remote volume RVx/3 is created in the remote storage device SDRx, in the right column, with the same size as the second auxiliary volume AVL2.
  • the ultimate resulting source volume now contains the previous resulting source volume plus the ultimate local auxiliary volume AVL3.
  • the last frozen local auxiliary volume, here AVL2, is copied to the last created remote volume RVx/3.
  • command is given to synchronize the last frozen local auxiliary volume AVL2 with the source volume SV.
  • the denomination remote storage device x is a name used to refer to a storage device different from the local storage device, at the same site or at a remote site.
  • mirroring from a source volume SV residing in a local SAN at a local site is feasible not only to a storage device at the local site, but also to a storage device emplaced at a remote site, using the same mirroring procedure.
  • cross mirroring is feasible, as well as simultaneous cross mirroring.
  • Mirroring flow of control: Fig. 8 illustrates the consecutive steps of the mirroring functionality, applicable to any network connectivity.
  • the SAN consists of at least: a local host HL, a remote host HR and two separate storage devices, local and remote, all referred to but not shown in Fig. 8.
  • the same minimum of one local host HL and one remote host HR, and two storage devices is necessary for other network connectivities.
  • these are designated as the local storage device SDL and the remote storage device SDRx.
  • the names given to the storage devices are unrelated to their location.
  • In step 202 of Fig. 8, command is given to mirror a selected source volume SV, which resides in a local storage device SDL that is coupled to a local host HL.
  • the command is entered by a user, or by a System Administrator, or by the Operating System O.S., or by a software command, none of which appears in Fig. 8.
  • Mirroring is directed to one or more storage devices referred to as remote storage device x, SDRx, where x is an integer, from 1 to n.
  • control passes to step 208, which commands the creation, in the remote storage device x SDRx, of a first remote virtual volume RVx/s, here RVx/1, with the same size as that of the source volume SV.
  • the creation and management of virtual volumes, referred to as volumes for short, is transparent to the O.S., and the storage of data in physical storage devices is handled as explained in the co-pending '309 application.
  • step 210 checks for an acknowledgment of completion from step 208, to ensure the availability of the first remote volume RVx/s. If the check is negative, a new check loop through step 210 is started. Otherwise, in step 212, for a positive reply to the test of step 210, a command starts the copy of the source volume SV to the first remote (virtual) volume RVx/s, and control flows to step 214.
  • In step 214, complementary to step 212, the source volume SV is written to the first remote volume RVx/1, and when ended, completion is acknowledged to the computing facility HL, which then performs a completion check in step 216, similarly to step 210.
  • An acknowledgement of completion is sent to step 224.
  • If the check is positive, control is passed to step 226; else, the completion check is repeated.
  • In step 226, command is given to copy the frozen penultimate, here the first, local auxiliary volume AVL/s-1 to the ultimate, here the second, remote volume RVx/s.
  • step 228 executes the write operation from the first local auxiliary volume AVL/s-1 to the second remote volume RVx/s, which upon write completion, is acknowledged to step 230.
  • both the source volume SV and the first local auxiliary volume AVL1 are acknowledged as being actually mirrored to the remote storage device SDRx. There is thus no further reason to separately operate either the first local auxiliary volume AVL1 or the second remote volume RVx/2, and therefore, those (virtual) volumes may be synchronized with, respectively, the source volume SV and the first remote virtual volume RVx/1.
  • Such synchronization and unification is performed, respectively, in steps 232 and 234, whereby only the source volume SV and the first remote virtual volume RVx/1 remain available, while both the first local auxiliary virtual volume AVL1 and the second remote volume RVx/2 are deleted. If so wished, the mirroring loop is commanded to be broken in step 236 and ended in step 238, or else, mirroring is continued by transfer of control to step 218.
  • the procedure repeats a loop through the steps from 218 to 236 inclusive, which either continues mirroring or else, ends mirroring if so commanded.
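
The flow of steps 202 to 238 can be condensed into the orchestration below. Every helper is passed in as a parameter because the text describes the steps rather than their interfaces, and each helper is assumed to block until the corresponding completion acknowledgment arrives:

```python
def mirroring_flow(source_volume, create_remote_volume, copy_volume,
                   freeze_next_auxiliary, synchronize_pair, break_commanded):
    """Fig. 8 sketch. create_remote_volume(size) makes an RVx/s of the
    given size (step 208); copy_volume(src, dst) copies and waits for the
    write acknowledgment (steps 212-216, 226-230); freeze_next_auxiliary()
    freezes the penultimate auxiliary volume, opens a new one, and returns
    the frozen one; synchronize_pair(base, delta) folds delta into base
    and deletes it (steps 232-234). All helper names are hypothetical."""
    rv1 = create_remote_volume(len(source_volume))    # step 208: RVx/1, size of SV
    copy_volume(source_volume, rv1)                   # steps 212-216
    while True:                                       # loop over steps 218-236
        avl_prev = freeze_next_auxiliary()            # frozen AVL/s-1
        rv_s = create_remote_volume(len(avl_prev))    # ultimate remote volume RVx/s
        copy_volume(avl_prev, rv_s)                   # steps 226-230
        synchronize_pair(source_volume, avl_prev)     # step 232: local side
        synchronize_pair(rv1, rv_s)                   # step 234: remote side
        if break_commanded():                         # step 236: mirroring break?
            return                                    # step 238: mirroring ends
```
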
  • the mirroring functionality described above is represented by row I in Table 2. This is the simplest and basic mirroring method implementation for mirroring one data object, from one local storage device to one remote storage device. For each mirroring cycle, one local auxiliary volume AVL and one remote volume RVx are created.
  • Per row II, one data object stored in one local storage device SDL, for mirroring into a plurality of remote storage devices SDRx, where x receives the identity of the specific storage device, requires the creation of a number of remote volumes equal to the number of remote storage devices, for each mirroring cycle.
  • For example, with four remote storage devices SDR1 to SDR4, the mirroring functionality will apply the freeze procedure, as by row I, and next, the copy procedure will be operated in parallel four times, once for each remote storage device. The next mirroring cycle, thus the interval between two consecutive mirroring cycles, will be started after completion of the copy to, and writing to, all four storage devices.
  • Each mirroring cycle will require one local auxiliary volume and four remote volumes RVx, with x ranging from 1 to 4, for example.
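
One plausible realization of the row II copy phase launches the copies in parallel and barriers on their completion before the next cycle may start (copy_volume as in the previous sketch, still hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def copy_cycle_to_many(frozen_volume, remote_volumes, copy_volume) -> None:
    """Row II: after a single freeze, run the copy procedure to every
    remote volume in parallel; the next mirroring cycle may start only
    once the copy to, and writing to, all devices has completed."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(copy_volume, frozen_volume, rv)
                   for rv in remote_volumes]
        for f in futures:
            f.result()   # barrier: wait for (and re-raise errors from) each copy
```
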
  • the minimal number of local auxiliary volumes and of remote volumes created for each mirroring cycle by the mirroring functionality is shown in the third and last column of Table 2.
  • the number of remote storage devices may be multiplied by integers. Thereby, mirroring may be achieved to 8, 12, 16, etc. remote storage devices.
  • Row III of Table 2 calls for the mirroring of a selected data object, residing in the local storage device SDL as a plurality of single data objects, thus as a group of data objects, into one remote storage device SDRx.
  • the mirroring functionality is applied as by row I, by freezing all the single data objects simultaneously. For example, if the selected data object is a group of three single data objects, then these three are frozen at the same time, and then each one is copied to the remote storage device SDRx. The next mirroring cycle may now start after completion of writing to the storage device SDRx.
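
Row III differs only in the freeze phase: all single data objects of the group are frozen at one instant, keeping the group mutually consistent, and are then copied one after another. A sketch, with freeze and copy_volume again hypothetical:

```python
def mirror_group(data_objects, remote_device, freeze, copy_volume) -> None:
    """Row III: freeze all single data objects of the group at the same
    point in time, then copy each frozen image to the one remote storage
    device SDRx; the next mirroring cycle starts only after the last
    write operation has completed."""
    frozen = [freeze(obj) for obj in data_objects]   # one simultaneous freeze point
    for image in frozen:
        copy_volume(image, remote_device)            # copy procedure, per object
```
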
  • the freeze procedure is simultaneous for the three single data objects and the method of row I is applied to each one of the three single data objects.
  • a next mirroring cycle will start after completion of the last write operation to the destination remote storage device SDRx.
  • Row V applies the freeze procedure as by the method of row III and the copy procedure for copy to many remote storage devices as by row II.
  • the freeze procedure is simultaneous for all the data objects to be frozen, whether they belong to the same selected data object or are stored in more than one local storage device.
  • the cycle time to the next mirroring cycle is dictated by the time needed for the copy procedure to complete the last copy, when multiple copies are performed, such as to many remote storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Computer And Data Communications (AREA)
  • Information Transfer Between Computers (AREA)
EP02760525A 2001-08-14 2002-08-13 Asynchronous mirroring in a storage area network Withdrawn EP1423770A2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US31220901P 2001-08-14 2001-08-14
US312209P 2001-08-14
PCT/IL2002/000665 WO2003017022A2 (en) 2001-08-14 2002-08-13 Asynchronous mirroring in a storage area network

Publications (1)

Publication Number Publication Date
EP1423770A2 2004-06-02

Family

ID=23210373

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02760525A Withdrawn EP1423770A2 (en) 2001-08-14 2002-08-13 Asynchronous mirroring in a storage area network

Country Status (6)

Country Link
EP (1) EP1423770A2 (en)
JP (1) JP2005500603A (ja)
CN (1) CN1331062C (zh)
AU (1) AU2002326116A1 (en)
CA (1) CA2457091A1 (en)
WO (1) WO2003017022A2 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1305265C (zh) * 2003-11-07 2007-03-14 Tsinghua University Asynchronous remote mirroring method based on load self-adaptation in a SAN system
US7917711B2 (en) * 2003-11-14 2011-03-29 International Business Machines Corporation System, apparatus, and method for automatic copy function selection
US7054883B2 (en) 2003-12-01 2006-05-30 Emc Corporation Virtual ordered writes for multiple storage devices
US7231502B2 (en) 2004-02-04 2007-06-12 Falcon Stor Software, Inc. Method and system for storing data
JP2005228170 (ja) 2004-02-16 2005-08-25 Hitachi Ltd Storage device system
GB0410540D0 (en) * 2004-05-12 2004-06-16 Ibm Write set boundary management for heterogeneous storage controllers in support of asynchronous update of secondary storage
US7856419B2 (en) * 2008-04-04 2010-12-21 Vmware, Inc Method and system for storage replication
CN102567131B (zh) * 2011-12-27 2015-03-04 创新科存储技术有限公司 An asynchronous mirroring method
US9983960B2 (en) 2012-01-23 2018-05-29 International Business Machines Corporation Offline initialization for a remote mirror storage facility
US8930309B2 (en) * 2012-02-29 2015-01-06 Symantec Corporation Interval-controlled replication
US9218255B2 (en) * 2012-08-27 2015-12-22 International Business Machines Corporation Multi-volume instant virtual copy freeze
KR102078867B1 (ko) 2013-09-17 2020-02-18 Samsung Electronics Co., Ltd. Method for managing control right, client device therefor, and master device therefor
US10642809B2 (en) 2017-06-26 2020-05-05 International Business Machines Corporation Import, export, and copy management for tiered object storage

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3126225B2 (ja) * 1991-07-12 2001-01-22 Fujitsu Ltd Database system
US5455946A (en) * 1993-05-21 1995-10-03 International Business Machines Corporation Method and means for archiving modifiable pages in a log based transaction management system
US5515502A (en) * 1993-09-30 1996-05-07 Sybase, Inc. Data backup system with methods for stripe affinity backup to multiple archive devices
US5799141A (en) * 1995-06-09 1998-08-25 Qualix Group, Inc. Real-time data protection system and method
US5852715A (en) * 1996-03-19 1998-12-22 Emc Corporation System for currently updating database by one host and reading the database by different host for the purpose of implementing decision support functions
US6073209A (en) * 1997-03-31 2000-06-06 Ark Research Corporation Data storage controller providing multiple hosts with access to multiple storage subsystems
US6067199A (en) * 1997-06-30 2000-05-23 Emc Corporation Method and apparatus for increasing disc drive performance
US6308284B1 (en) * 1998-08-28 2001-10-23 Emc Corporation Method and apparatus for maintaining data coherency
US6549992B1 (en) * 1999-12-02 2003-04-15 Emc Corporation Computer data storage backup with tape overflow control of disk caching of backup data stream
US6496908B1 (en) * 2001-05-18 2002-12-17 Emc Corporation Remote mirroring

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO03017022A3 *

Also Published As

Publication number Publication date
WO2003017022A3 (en) 2004-03-18
AU2002326116A1 (en) 2003-03-03
CA2457091A1 (en) 2003-02-27
JP2005500603A (ja) 2005-01-06
WO2003017022A2 (en) 2003-02-27
CN1331062C (zh) 2007-08-08
CN1549974A (zh) 2004-11-24

Similar Documents

Publication Publication Date Title
JP4809040B2 (ja) Storage apparatus and snapshot restore method
US7707151B1 (en) Method and apparatus for migrating data
US7809912B1 (en) Methods and systems for managing I/O requests to minimize disruption required for data migration
US7707186B2 (en) Method and apparatus for data set migration
US8935497B1 (en) De-duplication in a virtualized storage environment
US8706833B1 (en) Data storage server having common replication architecture for multiple storage object types
JP4175764B2 (ja) Computer system
JP3478746B2 (ja) Method for providing additional address space on a disk
EP0681721B1 (en) Archiving file system for data servers in a distributed network environment
US7325110B2 (en) Method for acquiring snapshot
EP2905709A2 (en) Method and apparatus for replication of files and file systems using a deduplication key space
US20030120676A1 (en) Methods and apparatus for pass-through data block movement with virtual storage appliances
US7330862B1 (en) Zero copy write datapath
US7424497B1 (en) Technique for accelerating the creation of a point in time prepresentation of a virtual file system
US20060047926A1 (en) Managing multiple snapshot copies of data
JP2020528618A (ja) Asynchronous local and remote generation of consistent point-in-time snap copies in consistency groups
EP1637987A2 (en) Operation environment associating data migration method
US11954372B2 (en) Technique for efficient migration of live virtual disk across storage containers of a cluster
US6510491B1 (en) System and method for accomplishing data storage migration between raid levels
JP2002351703A (ja) Storage device, file data backup method, and file data copy method
JP5944001B2 (ja) Storage system, management computer, storage apparatus, and data management method
WO2003014933A1 (en) Data backup method and system using snapshot and virtual tape
US10031682B1 (en) Methods for improved data store migrations and devices thereof
JP2000298554A (ja) Method and system for providing instant backup in a RAID data storage system
EP1423770A2 (en) Asynchronous mirroring in a storage area network

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20040303

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

RIN1 Information on inventor provided before grant (corrected)

Inventor name: NAHUM, NELSON, C/O STOREAGE NETWORKING TECHNOLOGIES

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20080714