WO2011125126A1 - Asynchronous Remote Copy System and Storage Control Method - Google Patents
- Publication number: WO2011125126A1 (application PCT/JP2010/002540)
- Authority: WO - WIPO (PCT)
Classifications
- G06F11/2058—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using more than 2 mirrored copies
- G06F11/1645—Error detection by comparing the output of redundant processing systems where the comparison is not performed by the redundant processing components and the comparison itself uses redundant hardware
- G06F11/1456—Hardware arrangements for backup
- G06F11/2074—Asynchronous techniques
- G06F11/2079—Bidirectional techniques
- G06F11/2082—Data synchronisation
- G06F2201/855—Details of asynchronous mirroring using a journal to transfer not-yet-mirrored changes
Definitions
- the present invention relates to storage control in a storage system that performs remote copy, which is a copy of data between storage apparatuses.
- Remote copy is a copy of data between first and second storage devices.
- Remote copy methods include, for example, synchronous remote copy and asynchronous remote copy.
- Suppose the first storage device receives a write request from a host device. With synchronous remote copy, the data according to the write request is transferred from the first storage device to the second storage device before a completion response to the write request is sent to the host device. With asynchronous remote copy, the completion response is sent to the host device even if the data according to the write request has not yet been transferred from the first storage device to the second storage device.
- In asynchronous remote copy, the first storage device has, in addition to a first DVOL (a logical volume in which data is stored), a first JVOL (a logical volume in which a journal (hereinafter, JNL) is stored).
- the second storage device has a second JVOL in addition to the second DVOL.
- the first storage device stores the data according to the write request in the first DVOL, and stores the JNL of the data in the first JVOL.
- the first storage device transfers the JNL in the first JVOL to the second storage device, and the second storage device writes the JNL from the first storage device to the second JVOL.
- the second storage device writes the data held by the JNL in the second JVOL to the second DVOL.
- the data written to the first DVOL is copied to the second DVOL.
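The journal-based asynchronous remote copy flow described above (write to the DVOL, journal to the JVOL, transfer, then reflect in update-number order) can be sketched as follows. This is a minimal, hypothetical illustration, not the patented implementation; the class and method names are invented for clarity.

```python
# Illustrative sketch of journal-based asynchronous remote copy.
# All names (Storage, host_write, etc.) are hypothetical.

class Storage:
    def __init__(self):
        self.dvol = {}      # address -> data (the DVOL)
        self.jvol = []      # list of (update_number, address, data) JNLs (the JVOL)
        self.update_number = 0

    def host_write(self, address, data):
        """Primary side: store data in the DVOL and a JNL in the JVOL, then
        immediately acknowledge the host (asynchronous remote copy)."""
        self.dvol[address] = data
        self.update_number += 1
        self.jvol.append((self.update_number, address, data))
        return "write completed"   # sent before any transfer to the secondary

    def transfer_journals(self, secondary):
        """Transfer JNLs from the first JVOL to the second JVOL."""
        secondary.jvol.extend(self.jvol)
        self.jvol.clear()

    def reflect_journals(self):
        """Secondary side: apply JNLs to the DVOL in update-number order."""
        for _, address, data in sorted(self.jvol):
            self.dvol[address] = data
        self.jvol.clear()

primary, secondary = Storage(), Storage()
primary.host_write(0, "a")
primary.host_write(1, "b")
primary.transfer_journals(secondary)
secondary.reflect_journals()
```

After reflection, the secondary DVOL holds the same data as the primary DVOL, even though the host was acknowledged before any transfer took place.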
- Patent Document 1 discloses a multi-target asynchronous remote copy system. In the multi-target method, a plurality of copy destinations exist for one copy source. Specifically, Patent Document 1 discloses, for example, the following.
- (*) There are a first storage device, a second storage device, and a third storage device.
- the first storage device is a copy source, and the second and third storage devices are copy destinations from the first storage device.
- the first host device is connected to the first storage device, the second host device is connected to the second storage device, and the third host device is connected to the third storage device.
- the first storage device has a first DVOL, a first first-JVOL, and a second first-JVOL (two JVOLs belonging to the first storage device).
- the second storage device has a second DVOL and a second JVOL.
- the first and second DVOLs are paired.
- the first DVOL is a primary DVOL
- the second DVOL is a secondary DVOL.
- the third storage device has a third DVOL and a third JVOL.
- the first and third DVOLs are paired.
- the first DVOL is the primary DVOL
- the third DVOL is the secondary DVOL.
- the first host device writes data to the first DVOL.
- the first storage device writes the JNL of the data written to the first DVOL to both the first first-JVOL and the second first-JVOL.
- a JNL includes an update number in addition to the input/output target data of the host device.
- the JNL in the first first-JVOL is copied to the second JVOL.
- the second storage device reflects, in the second DVOL, the JNL having the oldest update number among the one or more unreflected JNLs in the second JVOL (that is, it writes the data held by that JNL to the second DVOL).
- the JNL in the second first-JVOL is copied to the third JVOL.
- the third storage device reflects, in the third DVOL, the JNL having the oldest update number among the one or more unreflected JNLs in the third JVOL (that is, it writes the data held by that JNL to the third DVOL).
- After the second storage device becomes the copy source, the second host device writes data to the second DVOL; the second storage device then updates the update number and writes a JNL including the update number and the data to the second JVOL.
- the JNL is copied from the second JVOL to the third JVOL, and the third storage device writes the data held by the JNL in the third JVOL to the third DVOL.
- the starting point of the update number in the second storage device is the latest update number among the JNLs already reflected in the second storage device, because the second DVOL holds data up to the point in the write order represented by that latest update number.
- However, the latest update number among the JNLs reflected in the second storage device may be older than the latest update number among the JNLs reflected in the third storage device. In that case, the state of the second DVOL is older than the state of the third DVOL, and data would be copied from a DVOL in an older state to a DVOL in a newer state.
- Such a problem may also occur when the first storage device is stopped for a reason other than a failure (for example, when the first storage device is stopped due to a so-called planned stop for maintenance).
- an object of the present invention is to appropriately continue the business even when the first storage device is stopped in the multi-target asynchronous remote copy system.
- The asynchronous remote copy system includes a first storage device having a first storage resource group and connected to a first host device, a second storage device having a second storage resource group and connected to a second host device, and a third storage device having a third storage resource group and connected to a third host device.
- the first storage resource group has a first data volume that is a logical volume to which data is written, and a first journal storage resource that is a storage resource to which a data journal is written.
- the second storage resource group has a second data volume that is a logical volume to which data is written, and a second journal storage resource that is a storage resource to which a data journal is written.
- the third storage resource group has a third data volume that is a logical volume to which data is written, and a third journal storage resource that is a storage resource to which a data journal is written.
- When data is written from the host device to the first data volume, the first storage device updates an update number, which is a number updated each time data is written to a data volume in the first storage resource group, creates a journal including the update number and the data, and writes the journal to the first journal storage resource.
- Multi-target asynchronous remote copy is performed.
- the journal is transferred from the first storage device to the second storage device and reflected in the second data volume, so that the data in the first data volume is written to the second data volume.
- the journal is transferred from the first storage device to the third storage device and reflected in the third data volume, so that the data in the first data volume is written into the third data volume.
- the following is performed.
- (A1) The journal is copied from the first journal storage resource to the second journal storage resource.
- the second storage device reflects one or more unreflected journals in the second journal storage resource in the second data volume in the order of the update numbers.
- the journal is copied from the first journal storage resource to the third journal storage resource.
- the third storage device reflects one or more unreflected journals in the third journal storage resource to the third data volume in the order of the update numbers.
- (X1) It is determined which one of the update number of the journal reflected recently in the second storage device and the update number of the journal reflected recently in the third storage device is newer.
- (X2) It is determined whether or not there is one or more difference journals in the new storage device (the storage device having the update number determined to be new in (x1)) among the second and third storage devices.
- the one or more difference journals are the journals from the journal whose update number immediately follows the update number not determined to be new in (x1), through the journal having the update number determined to be new in (x1).
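The determinations (x1) and (x2) above can be sketched as a small decision routine: given the update number of the most recently reflected journal on each surviving storage, it decides which side is newer and which update numbers form the difference journals. This is an illustrative sketch under assumed names, not the patented logic itself.

```python
# Hedged sketch of the (x1)/(x2) decision. "storage2"/"storage3" and the
# function name are hypothetical labels for illustration only.

def plan_differential_resync(reflected_2, reflected_3):
    """reflected_2 / reflected_3: update numbers of the journals most
    recently reflected in the second and third storage devices.
    Returns (side holding the newer state, difference update numbers)."""
    if reflected_2 == reflected_3:
        return None, []          # states already match; no difference journals
    # (x1): determine which reflected update number is newer.
    new_side = "storage3" if reflected_3 > reflected_2 else "storage2"
    old, new = sorted((reflected_2, reflected_3))
    # (x2): the difference journals run from the number *after* the older
    # reflected update number up to the newer one, inclusive.
    return new_side, list(range(old + 1, new + 1))

src, diff = plan_differential_resync(reflected_2=5, reflected_3=8)
```

With `reflected_2=5` and `reflected_3=8`, the third storage holds the newer state and journals 6 through 8 are the difference that must flow to the second storage.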
- FIG. 1 shows the configuration of an asynchronous remote copy system according to Embodiment 1 of the present invention. The paths between the storage devices are also shown.
- the status of the JNL groups 112A, 112B and 112C in FIG. 6 and the validity / invalidity of the mirror are shown.
- the status of the JNL groups 112B and 112C in FIG. 7 and the validity / invalidity of the mirror are shown.
- the status of the JNL groups 112B and 112C in FIG. 10 and the validity / invalidity of the mirror are shown.
- An overview of multi-target asynchronous remote copy is shown. A part of the processing performed when a failure occurs in the first storage 105A is also shown.
- An outline of differential resync from the third storage 105C to the second storage 105B is shown.
- An outline of differential resync from the second storage 105B to the third storage 105C is shown.
- An overview of business continuation after differential resync is complete is shown.
- the configuration of the first storage 105A is shown.
- the structure of JNL is shown.
- the structure of JVOL115A is shown.
- the structure of meta information is shown.
- the control information which each storage has is shown.
- the structure of the JVOL effective bitmap 701A is shown.
- the structure of the DVOL effective bitmap 702A is shown.
- the structure of the JVOL management table 703A is shown.
- the structure of the JNL group management table 704A is shown.
- the structure of the pair management table 705A is shown.
- the flow of difference management and formation copy is shown.
- the flow of write processing in the first storage 105A is shown.
- the flow of JNL read processing is shown.
- the flow of JNL reflection processing is shown.
- a flow of processing relating to the usage rate check of the first JVOL 115A is shown.
- a flow of processing performed when a failure occurs in the first storage 105A is shown.
- An example of a case where differential resync is possible is shown.
- An example of a case where differential resync is not possible is shown.
- the timing at which data writing from the second host 103B to the second DVOL 113B is permitted is shown.
- the timing at which data writing from the second host 103B to the second DVOL 113B is permitted is shown.
- the flow of write processing in the second storage 105B when differential resync from the third storage 105C to the second storage 105B is performed is shown.
- the flow of the JNL reflection process in the second storage 105B when differential resync from the third storage 105C to the second storage 105B is performed is shown.
- the flow of read processing in the second storage 105B when differential resync from the third storage 105C to the second storage 105B is performed is shown.
- In Embodiment 4 of this invention, the flow of processing for judging whether transfer of a difference JNL is necessary is shown. An example of a case where there is a difference JNL that does not require transfer is also shown.
- In the following description, various types of information may be described using the expressions “xxx table” and “xxx bitmap,” but the various types of information may be expressed in data structures other than a table or a bitmap. To show that the information does not depend on the data structure, “xxx table” and “xxx bitmap” can be called “xxx information.”
- numbers are mainly used as identification information for various objects, but other types of identification information (for example, names) may be employed instead of numbers.
- FIG. 1 shows a configuration of an asynchronous remote copy system according to the first embodiment of the present invention.
- a journal is described as “JNL”
- a data volume that is a logical volume to which data is written is described as “DVOL”
- a JNL volume that is a logical volume to which JNL is written is described as “JVOL”.
- the host device is described as “host”
- the storage device is described as “storage”
- the controller is described as “DKC”.
- There are three or more sites, for example, a first site 101A, a second site 101B, and a third site 101C.
- the reference numerals of elements in the first site 101A are combinations of a parent number and the child code “A”,
- the reference numerals of elements in the second site 101B are combinations of a parent number and the child code “B”, and
- the reference numerals of elements in the third site 101C are combinations of a parent number and the child code “C”.
- the first site 101A has a first storage 105A and a first host 103A connected to the first storage 105A.
- the first storage 105A includes a first DKC 111A and a first JNL group 112A.
- One JNL group 112A includes a DVOL 113A and a JVOL 115A.
- the second and third sites 101B and 101C have the same configuration as the first site 101A.
- the storages 105A and 105B are physically connected via a dedicated line (or communication network).
- the storages 105B and 105C are also physically connected via a dedicated line (or communication network).
- the storages 105A and 105C are also physically connected via a dedicated line (or communication network).
- the control path is a path necessary for differential resync (described later). Specifically, for example, a path through which a sequence number acquisition request (described later) flows in the differential resync process.
- the data transfer path is a path through which JNL flows. Both the control path and the data transfer path are paths capable of bidirectional communication.
- There is a logical connection between JNL groups called a “mirror”.
- the connection between the JNL groups 112A and 112B is mirror #0 (the mirror assigned number “0” (M0)),
- the connection between the JNL groups 112A and 112C is mirror #1 (the mirror assigned number “1” (M1)), and
- the connection between the JNL groups 112B and 112C is mirror #2 (the mirror assigned number “2” (M2)).
- the first site 101A is the operation site.
- the status of the first JNL group 112A is “master”, and the statuses of the second and third JNL groups 112B and 112C are “restore”.
- the status “master” means the copy source.
- the status “restore” means a copy destination.
- the dotted-line mirrors are invalid mirrors, and the solid-line mirrors are valid mirrors.
- the first DVOL 113A is a primary DVOL (hereinafter referred to as PVOL), and the second and third DVOLs 113B and 113C are secondary DVOLs (hereinafter referred to as SVOL).
- the first host 103A writes data to the PVOL 113A as the business is executed (S11).
- the first storage 105A updates the sequence number (hereinafter, SEQ #), creates a JNL having the updated SEQ # and the data written to the PVOL 113A, and writes the created JNL to the first JVOL 115A (S12).
- SEQ # is a number updated (for example, incremented (or decremented by 1)) each time data is written to the first JNL group 112A (DVOL in the first JNL group 112A).
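As a simple illustration of the SEQ # just described (a counter kept per JNL group and updated on every write to a DVOL in that group), consider the following minimal sketch; the class and method names are hypothetical.

```python
# Illustrative per-JNL-group SEQ # counter; names are hypothetical.

class SeqCounter:
    def __init__(self):
        self.seq_by_group = {}   # JNL group # -> current SEQ #

    def next_seq(self, jnl_group):
        """Update (increment) and return the SEQ # for this JNL group."""
        self.seq_by_group[jnl_group] = self.seq_by_group.get(jnl_group, 0) + 1
        return self.seq_by_group[jnl_group]

c = SeqCounter()
seqs = [c.next_seq(0), c.next_seq(0), c.next_seq(1)]
```

Note that writes to different JNL groups advance independent counters, which is why SEQ # ordering is meaningful only within one JNL group.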
- the second storage 105B reads the JNL from the first JVOL 115A and writes the read JNL to the second JVOL 115B (S21).
- the second storage 105B reflects one or more unreflected JNLs in the second JVOL 115B to the SVOL 113B in ascending order of SEQ # (S22).
- the second storage 105B writes the data held in the unreflected JNL in the second JVOL 115B to the SVOL 113B.
- the data written to the PVOL 113A is copied to the SVOL 113B.
- the third storage 105C reads the JNL from the first JVOL 115A, and writes the read JNL to the third JVOL 115C (S31).
- the third storage 105C reflects one or more unreflected JNLs in the third JVOL 115C to the SVOL 113C in the ascending order of SEQ # (S32).
- the third storage 105C writes the data held in the unreflected JNL in the third JVOL 115C to the SVOL 113C.
- the data written to the PVOL 113A is copied to the SVOL 113C.
- a JVOL included in one copy source JNL group is common to a plurality of copy destination JNL groups.
- the number of JVOLs included in one copy source JNL group does not depend on the number of copy destination JNL groups. This is realized by copying (transferring) the JNL between the storage apparatuses when the copy destination storage apparatus reads the JNL from the JVOL in the copy source storage apparatus. According to this feature, the storage capacity consumed in the first storage 105A can be saved.
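The "pull" transfer described above, where each copy-destination storage reads JNLs from the single copy-source JVOL at its own pace, can be sketched as follows. Each destination tracks the SEQ # of its most recently read JNL, so one source JVOL serves any number of destinations. The structures and names here are hypothetical illustrations, not the patented design.

```python
# Sketch of pull-model JNL transfer: one source JVOL, per-destination
# read positions. Names (Destination, jnl_read) are hypothetical.

source_jvol = [(1, "a"), (2, "b"), (3, "c")]   # (SEQ #, data), in SEQ # order

class Destination:
    def __init__(self):
        self.jvol = []
        self.read_seq = 0    # SEQ # of the most recently read JNL

    def jnl_read(self, source, count):
        """Read up to `count` not-yet-read JNLs from the source JVOL."""
        for seq, data in source:
            if seq > self.read_seq and count > 0:
                self.jvol.append((seq, data))
                self.read_seq = seq
                count -= 1

second, third = Destination(), Destination()
second.jnl_read(source_jvol, 3)   # the second storage catches up fully
third.jnl_read(source_jvol, 1)    # the third storage lags behind
```

Because each destination keeps its own read position, the source never duplicates JNLs per destination, which is the capacity saving noted above.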
- the operation site is switched from the first site 101A to the second site 101B.
- failover from the first host 103A to the second host 103B is performed (S41).
- the second host 103B transmits a predetermined command to the second storage 105B (S42).
- the second storage 105B receives a predetermined command.
- the mirror # 2 is enabled by the storage 105B and / or 105C, and the statuses of the second and third JNL groups 112B and 112C are temporarily set to “master / restore”.
- the status “master / restore” means that it is both a copy source and a copy destination.
- the statuses of the second and third JNL groups 112B and 112C are “master / restore” until the differential resync (described later) ends.
- the second storage 105B sends the third storage 105C a sequence number acquisition request, requesting the SEQ # (hereinafter, SEQ # (3)) of the JNL most recently reflected in the third storage 105C.
- SEQ # (3) is received from the third storage 105C.
- the second storage 105B determines which is newer: SEQ # (3), or the SEQ # (hereinafter, SEQ # (2)) of the JNL most recently reflected in the second storage 105B.
- differential resync shown in FIGS. 8 and 9 is performed.
- the second storage 105B reads one or more difference JNLs from the third JVOL 115C and writes them to the second JVOL 115B (S44-1).
- the “one or more difference JNLs” referred to here are the JNLs from the JNL having the SEQ # next to SEQ # (2) through the JNL having SEQ # (3).
- the second storage 105B reflects the one or more difference JNLs in the second JVOL 115B to the second DVOL 113B in ascending order of SEQ #, starting with the JNL having the SEQ # next to SEQ # (2) (S45-1). As a result, the data in the second DVOL 113B matches the data that was in the third DVOL 113C at the start of the differential resync.
- the second storage 105B copies one or more difference JNLs in the second JVOL 115B to the third JVOL 115C.
- the “one or more difference JNLs” referred to here are the JNLs from the JNL having the SEQ # next to SEQ # (3) through the JNL having SEQ # (2).
- the third storage 105C reflects the one or more difference JNLs in the third JVOL 115C to the third DVOL 113C in ascending order of SEQ #, starting with the JNL having the SEQ # next to SEQ # (3) (S45-2). As a result, the data in the third DVOL 113C matches the data that was in the second DVOL 113B at the start of the differential resync.
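The reflection step of the differential resync can be sketched as applying, in ascending SEQ #, every JNL whose SEQ # lies after the lagging side's own reflected SEQ # and up to the other side's reflected SEQ #. The following is a minimal illustration with hypothetical structures, not the patented code.

```python
# Hedged sketch of reflecting difference JNLs during differential resync.
# dvol: address -> data; jvol: list of (SEQ #, address, data). Hypothetical.

def reflect_difference(dvol, jvol, own_seq, other_seq):
    """Apply JNLs with own_seq < SEQ # <= other_seq, in ascending SEQ #,
    and return the new reflected SEQ #."""
    for seq, address, data in sorted(jvol):
        if own_seq < seq <= other_seq:
            dvol[address] = data
    return max(own_seq, other_seq)

dvol_b = {0: "old"}
diff_jnls = [(6, 0, "x"), (7, 1, "y"), (8, 0, "z")]
new_seq = reflect_difference(dvol_b, diff_jnls, own_seq=5, other_seq=8)
```

Applying the JNLs in ascending SEQ # matters: address 0 is written by both JNL 6 and JNL 8, and only the ascending order leaves the newer value in place.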
- the second site 101B becomes the operation site, and the business continues. Specifically, as shown in FIG. 5, the status of the second JNL group 112B is “master”, and the status of the third JNL group 112C is “restore”. Therefore, as shown in FIG. 10, the second DVOL 113B becomes a PVOL, and the third DVOL 113C becomes an SVOL.
- the second host 103B writes data to the PVOL 113B (S51).
- the second storage 105B updates the SEQ #, creates a JNL having the SEQ # and the data written to the PVOL 113B, and writes the created JNL to the second JVOL 115B (S52).
- the third storage 105C reads JNL from the second JVOL 115B, and writes the read JNL to the third JVOL 115C (S53).
- the third storage 105C reflects the JNL in the third JVOL 115C in the SVOL 113C in the order of ascending SEQ # (S54).
- As described above, the second storage 105B acquires SEQ # (3) from the third storage 105C, determines which of SEQ # (3) and SEQ # (2) is newer, and, based on the result of that determination, controls from which of the second and third storages 105B and 105C, and to which, the one or more difference JNLs are transferred.
- In the following, the elements in the first site 101A will be described as representative examples; unless otherwise specified, the elements in the second and third sites 101B and 101C are substantially the same as those in the first site 101A.
- FIG. 11 shows the configuration of the first storage 105A.
- the first storage 105A includes a first DKC 111A and a plurality of RAID groups (RAID: Redundant Array of Independent (or Inexpensive) Disks; hereinafter, RG) 900A.
- an RG 900A is composed of a plurality of HDDs (Hard Disk Drives). Instead of HDDs, other physical storage devices such as flash memory may be employed.
- One or more logical volumes are based on one RG 900A.
- a logical volume is, for example, the above-mentioned DVOL or JVOL. Each may be a substantive logical volume that is a part of the storage space of an RG 900A, or a virtual logical volume (a virtual logical volume according to Thin Provisioning technology) to which real areas are dynamically allocated from a pool (a storage area composed of a plurality of real areas) based on one or more RGs 900A.
- the DKC 111A includes a plurality of front-end interface devices (hereinafter, FE-IF) 610A, a back-end interface device (hereinafter, BE-IF) 150A, a cache memory (hereinafter, CM) 620A, a shared memory (hereinafter, SM) 640A, and one or more CPUs (Central Processing Units) 630A connected to them. The processing of the DKC 111A may be performed by the CPU 630A executing one or more computer programs, but at least a part of the processing may be performed by a hardware circuit.
- the first host 103A and the second and third storages 105B and 105C are connected to the plurality of FE-IFs 610A.
- the DKC 111A (CPU 630A) communicates with the first host 103A, the second and third storages 105B and 105C via the FE-IF 610A.
- a plurality of RG900A are connected to the BE-IF 150A.
- the DKC 111A (CPU 630A) writes data (or JNL) to the RG 900A that is the basis of the write destination logical volume (for example, the first DVOL 113A or the first JVOL 115A) via the BE-IF 150A.
- CM 620A stores data (and JNL) written to RG 900A and data (and JNL) read from RG 900A.
- the SM 640A stores various control information used for controlling the processing of the DKC 111A.
- the CPU 630A controls processing performed by the DKC 111A.
- the DKC 111A is not limited to the configuration shown in FIG. 11, and may have other configurations.
- the configurations of the second and third storages 105B and 105C are substantially the same as the configuration of the first storage 105A.
- the configuration of the DKC 111B or 111C may be different from the configuration of the DKC 111A.
- FIG. 12 shows the configuration of a JNL.
- FIG. 13 shows the configuration of JVOL 115A.
- JNL consists of meta information and data.
- the JVOL 115A includes a meta area 1201A and a data area 1203A.
- the meta area 1201A stores meta information
- the data area 1203A stores data. Note that the meta area 1201A may exist in a storage resource other than the RG 900A, such as the CM 620A.
- FIG. 14 shows the structure of meta information.
- the meta information is management information regarding data included in the JNL.
- the meta information is, for example, the following information:
  - SEQ #,
  - write destination information (information indicating where in the DVOL the data is written),
  - PVOL # (copy source DVOL number),
  - SVOL # (copy destination DVOL number),
  - information indicating the position in the JVOL of the data corresponding to this meta information (this information is included when the JNL is written in the JVOL).
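The meta information fields listed above can be pictured as a small record type. The following dataclass is a minimal illustration; the field names are paraphrases of the description, not identifiers from the patent.

```python
# Illustrative record for JNL meta information; field names are paraphrases.

from dataclasses import dataclass
from typing import Optional

@dataclass
class JnlMeta:
    seq: int                             # SEQ #
    write_dest: int                      # where in the DVOL the data is written
    pvol: int                            # copy-source DVOL number
    svol: int                            # copy-destination DVOL number
    jvol_data_pos: Optional[int] = None  # position of the data in the JVOL;
                                         # set when the JNL is written to a JVOL

m = JnlMeta(seq=10, write_dest=0x200, pvol=1, svol=2)
m.jvol_data_pos = 0x8000   # filled in once the JNL lands in the JVOL
```

The optional last field mirrors the description: the in-JVOL position is known only after the JNL is actually written to a JVOL.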
- the first DKC 111A manages SEQ #. SEQ # exists for each JNL group 112A.
- the first DKC 111A writes data to the DVOL 113A in a certain JNL group 112A
- the first DKC 111A updates the SEQ # corresponding to the JNL group 112A.
- SEQ # is stored in, for example, SM640A or another storage resource.
- the second storage 105B manages the SEQ # for each JNL group 112B.
- the SM 640A stores, as control information, for example, a JVOL valid bitmap 701A, a DVOL valid bitmap 702A, a JVOL management table 703A, a JNL group management table 704A, a pair management table 705A, a difference bitmap 706A, and a mirror bitmap 707A.
- the SM 640B in the second storage 105B stores control information 701B to 707B
- the SM 640C in the third storage 105C stores control information 701C to 707C.
- the control information 701A to 707A will be described as a representative.
- the JNL group can have a maximum of 64 logical volumes, for example. As shown in FIG. 16, the JVOL valid bitmap 701A has 64 bits for each JNL group. If the nth (n is an integer between 0 and 63) logical volume is the JVOL 115A, the nth bit is on (e.g., 1).
- the DVOL valid bitmap 702A likewise has 64 bits for each JNL group, as shown in FIG. If the nth (n is an integer between 0 and 63) logical volume is the DVOL 113A, the nth bit is on (e.g., 1).
- the JVOL management table 703A exists for each JNL group 112A. As shown in FIG. 18, the table 703A holds, for each JVOL 115A, information indicating the start address of the meta area, information indicating the size of the meta area, information indicating the start address of the data area, and information indicating the size of the data area. That is, for each JVOL 115A, the table 703A indicates the extent of the meta area and the extent of the data area.
- the JNL group management table 704A has information regarding the JNL groups. Specifically, for example, as shown in FIG. 19, the table 704A includes the following information for each JNL group: (*) JNL group # 1001A representing the number of the JNL group, (*) status 1002A representing the status of the JNL group, (*) mirror # 1003A representing the number of a mirror existing in the remote copy system according to the present embodiment, (*) partner JNL group # 1004A representing the number of the partner JNL group, (*) partner storage # 1005A representing the number of the storage device having the partner JNL group, (*) purged SEQ # 1006A representing the SEQ # of the recently purged JNL, (*) purgeable SEQ # 1007A representing the SEQ # of a JNL that may be purged, and (*) read SEQ # 1008A representing the SEQ # of the recently read JNL.
- the pair management table 705A has information regarding pairs of DVOLs. Specifically, for example, as shown in FIG. 20, the table 705A includes the following information for each DVOL 113A: (*) DVOL # 1101A representing the number of the DVOL 113A, (*) JNL group # 1102A representing the number of the JNL group 112A including the DVOL 113A, (*) copy destination VOL # 1103A representing the number of the copy destination DVOL of the DVOL 113A, and (*) status 1104A representing, for the pair of the DVOL 113A and the copy destination DVOL, the pair status of the DVOL 113A.
- the difference bitmap 706A is provided for each DVOL 113A.
- the DVOL 113A is composed of a plurality of blocks.
- the difference bitmap 706A represents which block of the DVOL 113A has been updated. That is, the bits included in the difference bitmap 706A correspond to blocks.
- the difference bitmap 706A is updated when the pair status of the DVOL 113A is a predetermined status. Specifically, for example, when the pair status of a certain DVOL 113A is the predetermined status and data is written to a certain block in the DVOL 113A, the DKC 111A changes the bit corresponding to that block, in the difference bitmap 706A corresponding to the DVOL 113A, to ON (for example, 1).
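The per-DVOL difference bitmap update can be sketched as follows (a minimal sketch with hypothetical class and method names; the real bitmap resides in the SM and has one bit per block):

```python
# Sketch of a per-DVOL difference bitmap: one bit per block, turned on
# when the block is updated while the pair status is the predetermined
# status (e.g. "PSUS").
class DifferenceBitmap:
    def __init__(self, num_blocks):
        self.bits = [0] * num_blocks

    def mark_updated(self, block):
        self.bits[block] = 1    # bit ON = block has been updated

    def updated_blocks(self):
        return [i for i, b in enumerate(self.bits) if b]

bm = DifferenceBitmap(8)
bm.mark_updated(2)              # data written to block 2
bm.mark_updated(5)              # data written to block 5
```

The formation copy later walks exactly this ON-bit set, so only updated blocks need to be journaled.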
- the mirror bitmap 707A represents which mirror is valid and which mirror is invalid. Specifically, the mirror bitmap 707A has a plurality of bits corresponding to a plurality of mirrors. For example, normally, mirrors # 0 and # 1 are valid and mirror # 2 is invalid (see FIG. 6). In this case, the bit corresponding to mirrors # 0 and # 1 is on (for example, 1), and the bit corresponding to mirror # 2 is off (for example, 0). When a failure occurs in the first storage 105A, mirrors # 0 and # 1 are invalid and mirror # 2 is valid (see FIG. 10). In this case, the bit corresponding to mirrors # 0 and # 1 is off, and the bit corresponding to mirror # 2 is on.
- both the status of the pair of PVOL 113A and SVOL 113B and the status of the pair of PVOL 113A and SVOL 113C may be suspended (S2101-1, S2101-2).
- the pair status of the PVOL 113A is “PSUS” (primary suspend)
- the pair status of the SVOLs 113B and 113C is “SSUS”.
- the first DKC 111A resets SEQ # corresponding to the JNL group 112A including the PVOL 113A to a predetermined value (for example, zero).
- the first DKC 111A writes data to the PVOL 113A, it does not create a JNL having that data.
- when the first DKC 111A writes data to a certain write destination block of the PVOL 113A (S2102), since the pair status of the PVOL 113A is “PSUS”, the bit corresponding to the write destination block in the difference bitmap 706A corresponding to the PVOL 113A is turned on (for example, to 1) (S2103).
- S2103 is performed every time data is written to an unupdated block in the PVOL 113A.
- the first DKC 111A receives a formation copy instruction from the first host 103A (or a management terminal (not shown) connected to the first DKC 111A) (S2111).
- the receipt of the formation copy instruction is a trigger for starting the formation copy.
- the formation copy may be performed in parallel for the first pair (the pair of the PVOL 113A and the SVOL 113B) and the second pair (the pair of the PVOL 113A and the SVOL 113C), but in this embodiment it is performed sequentially rather than in parallel. This reduces the concentration of access to the PVOL 113A.
- the formation copy is performed for the first pair. That is, the first DKC 111A specifies a block corresponding to a bit that is turned on in the difference bitmap 706A (a block in the PVOL 113A corresponding to the difference bitmap 706A). Then, the first DKC 111A creates a JNL having the data in the specified block, and writes the created JNL in the second JVOL 115B without storing it in the first JVOL 115A (S2112). The second DKC 111B reflects the JNL in the second JVOL 115B in the SVOL 113B (S2113). Steps S2112 and S2113 are performed for all updated blocks (blocks corresponding to bits that are turned on) in the PVOL 113A.
- the formation copy is performed for the second pair. That is, the first DKC 111A creates a JNL having data in the updated block in the PVOL 113A, and writes the created JNL in the third JVOL 115C without storing it in the first JVOL 115A (S2115).
- the third DKC 111C reflects the JNL in the third JVOL 115C in the SVOL 113C (S2116). S2115 and S2116 are performed for all updated blocks in the PVOL 113A.
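The formation copy loop above (S2112 through S2116) can be sketched as follows, with assumed data shapes for the PVOL, the difference bitmap, and the JVOLs; only blocks whose difference bit is ON are journaled, and the two pairs are processed one after the other:

```python
# Sketch of the formation copy: for each ON bit in the difference
# bitmap, create a JNL carrying that block's data and write it directly
# to the copy destination JVOL (bypassing the first JVOL 115A).
def formation_copy(pvol, diff_bits, dest_jvol):
    for block, on in enumerate(diff_bits):
        if on:
            jnl = {"write_dest": block, "data": pvol[block]}
            dest_jvol.append(jnl)       # written to JVOL 115B or 115C

pvol = ["a", "b", "c", "d"]
diff_bits = [0, 1, 0, 1]                # blocks 1 and 3 were updated
second_jvol, third_jvol = [], []
formation_copy(pvol, diff_bits, second_jvol)   # first pair (S2112-S2113)
formation_copy(pvol, diff_bits, third_jvol)    # then second pair (S2115-S2116)
```

Running the two copies sequentially, as the embodiment does, is what keeps access to the PVOL from concentrating.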
- the first DKC 111A receives a write request designating the PVOL 113A from the first host 103A (S2201), secures a CM area (cache memory area) in the CM 630A, and writes the data according to the write request (the write target data) to the secured CM area (S2202). At this time, the first DKC 111A may return a write completion response to the first host 103A.
- the first DKC 111A writes the write target data in the CM 630A to the PVOL 113A (S2203).
- the first DKC 111A updates the SEQ # corresponding to the JNL group 112A including the PVOL 113A (S2211).
- the first DKC 111A creates a JNL (S2212), and writes the created JNL to the first JVOL 115A.
- the JNL created in S2212 has meta information including SEQ # (or SEQ # before update) updated in S2211, and data written in the PVOL 113A in S2203.
- the data may be data read from the PVOL 113A or data remaining in the CM 630A.
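The write-side journaling flow (S2211 and S2212) might look like the following sketch (hypothetical structures; a real JNL's meta information also carries PVOL #, SVOL #, and the position of the data within the JVOL):

```python
# Sketch of the host-write path: on a write, the first DKC updates the
# SEQ# for the JNL group (S2211) and creates a JNL whose meta
# information carries the SEQ# and write destination, plus the data
# itself (S2212), then stores the JNL in the first JVOL.
def handle_write(state, jnl_group, pvol_block, data):
    state["seq"][jnl_group] = state["seq"].get(jnl_group, 0) + 1   # S2211
    jnl = {                                                        # S2212
        "meta": {"seq": state["seq"][jnl_group], "write_dest": pvol_block},
        "data": data,
    }
    state["jvol"].append(jnl)       # written to the first JVOL 115A
    return jnl

state = {"seq": {}, "jvol": []}
handle_write(state, 0, 7, "new-data")
```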
- the JNL read process will be described by taking an example in which the second DKC 111B reads JNL from the first JVOL 115A.
- the second DKC 111B calculates the read target SEQ # (S2301).
- the “read target SEQ #” is the SEQ # included in the JNL to be read. Specifically, the read target SEQ # is the value obtained by adding 1 to the value represented by the read SEQ # 1008B (information 1008B in the JNL group management table 704B) corresponding to mirror # 0 of the JNL group 112B (that is, the value following the value represented by the read SEQ # 1008B).
- the second DKC 111B transmits a read request to the first storage 105A (S2302).
- the read request includes the read target SEQ # calculated in S2301 and the number of the JVOL 115A that is the JNL read source (or the LUN (Logical Unit Number) corresponding thereto).
- the number of JVOL 115A is specified from, for example, control information stored in SM640B.
- the control information includes the number of the JVOL 112A included in the JNL group 112A corresponding to the JNL group 112B including the SVOL 113B.
- the first DKC 111A receives the read request from the second DKC 111B. Based on the read request, the first DKC 111A identifies the JNL having the read target SEQ # from the read source JVOL 115A (S2303). The first DKC 111A reads the identified JNL from the read source JVOL 115A, and transmits the read JNL to the second DKC 111B via the data transfer path between the first and second storages 105A and 105B (S2304).
- the second DKC 111B receives the JNL from the first DKC 111A, and writes the received JNL into the second JVOL 115B (S2305).
- the second DKC 111B changes the value represented by the read SEQ # 1008B corresponding to the mirror # 0 of the JNL group 112B having the SVOL 113B to the value represented by the read target SEQ # (S2306). That is, the second DKC 111B adds 1 to the value represented by the read SEQ # 1008B.
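The read sequence above (S2301 through S2306) can be illustrated with a small sketch (assumed JNL representation; the actual request transport and error handling are omitted):

```python
# Sketch of the JNL read process: the copy destination asks for
# read SEQ# + 1 (S2301), the copy source identifies the JNL by that
# SEQ# (S2303), and the destination stores it and advances its read
# SEQ# (S2305-S2306).
def read_one_jnl(src_jvol, read_seq):
    target = read_seq + 1                            # S2301
    jnl = next(j for j in src_jvol                   # S2303: identify by SEQ#
               if j["meta"]["seq"] == target)
    return jnl, target

first_jvol = [{"meta": {"seq": s}, "data": f"d{s}"} for s in (1, 2, 3)]
second_jvol, read_seq = [], 1                        # SEQ# 1 already read
jnl, read_seq = read_one_jnl(first_jvol, read_seq)   # S2302-S2304
second_jvol.append(jnl)                              # S2305
```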
- JNL reflection processing will be described.
- the second DKC 111B reflects the JNL having the oldest SEQ # among the one or more unreflected JNLs in the second JVOL 115B in the SVOL 113B (S2401). Specifically, the second DKC 111B reads from the second JVOL 115B the JNL including the SEQ # that is one greater than the value represented by the purgeable SEQ # 1007B (information 1007B in the JNL group management table 704B) corresponding to mirror # 0 of the JNL group 112B, and writes the data of the read JNL to the SVOL 113B.
- the second DKC 111B changes the value represented by the purgeable SEQ # 1007B corresponding to the mirror # 0 of the JNL group 112B (S2402). Specifically, the second DKC 111B adds 1 to the value represented by the purgeable SEQ # 1007B.
- the second DKC 111B notifies the first DKC 111A of the value represented by the purgeable SEQ # 1007B after the update (S2403).
- the first DKC 111A changes the value represented by the purgeable SEQ # 1007A corresponding to the mirror # 0 of the JNL group 112A to the value notified from the second DKC 111B (S2404).
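The reflection and purgeable-SEQ # propagation steps (S2401 through S2404) can be sketched as follows, using assumed structures:

```python
# Sketch of JNL reflection: the second DKC reflects the JNL whose SEQ#
# is purgeable SEQ# + 1 (S2401), bumps its purgeable SEQ# (S2402), and
# the new value is propagated back to the first DKC (S2403-S2404).
def reflect_next(jvol, svol, purgeable_seq):
    target = purgeable_seq + 1
    jnl = next(j for j in jvol if j["meta"]["seq"] == target)
    svol[jnl["meta"]["write_dest"]] = jnl["data"]    # S2401: write to SVOL
    return target                                    # S2402: new purgeable SEQ#

second_jvol = [{"meta": {"seq": 1, "write_dest": 0}, "data": "x"}]
svol = [None]
purgeable_seq_b = reflect_next(second_jvol, svol, 0)
purgeable_seq_a = purgeable_seq_b        # S2403-S2404: first DKC is notified
```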
- Each DKC checks the JVOL in the storage having the DKC periodically (or irregularly).
- this will be described by taking the first storage 105A as an example.
- the first DKC 111A performs the process shown in FIG. 25 periodically (or irregularly).
- the first DKC 111A determines whether there is a purgeable JNL in the JNL in the first JVOL 115A (S2502).
- the purgeable JNLs are the JNLs from the JNL following the JNL in (A) through the JNL in (B) (see (A) and (B) below).
- the first DKC 111A purges the purgeable JNL from the first JVOL 115A (S2503). Thereafter, the first DKC 111A performs S2501.
- the first DKC 111A suspends the neck pair (S2504).
- the “neck pair” is the pair, corresponding to either mirror # 0 or mirror # 1, whose purgeable SEQ # 1007A represents the smaller value.
- that pair is a bottleneck because its purgeable SEQ # 1007A value is much smaller than the other pair's, which reduces the number of JNLs that can be purged.
- once the neck pair is suspended, the number of JNLs that can be purged increases. Specifically, JNLs up to the JNL including as its SEQ # the value represented by the larger purgeable SEQ # 1007A become purgeable. Therefore, the first DKC 111A performs S2502 and S2503 after S2504.
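The purge-range logic implied above can be sketched as follows (a hypothetical function; it shows why the pair with the smaller purgeable SEQ # becomes the bottleneck):

```python
# Sketch: JNLs are purgeable from the one after the purged SEQ# up to
# the smaller of the two mirrors' purgeable SEQ#s, so the mirror whose
# purgeable SEQ# lags (the "neck pair") limits how many JNLs can be
# purged from the first JVOL.
def purgeable_range(purged_seq, purgeable_seq_mirror0, purgeable_seq_mirror1):
    upper = min(purgeable_seq_mirror0, purgeable_seq_mirror1)
    return range(purged_seq + 1, upper + 1)   # SEQ#s that may be purged

# Mirror #1 lags far behind: only 5 JNLs can be purged.
lagging = purgeable_range(100, 200, 105)
# After suspending the neck pair, the larger value governs the range.
after_suspend = purgeable_range(100, 200, 200)
```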
- the first DKC 111A performs difference management and resync for the pair (S2510). Specifically, the following is performed (in this description, the neck pair is a pair corresponding to mirror # 1).
- when the first DKC 111A writes data to the PVOL 113A, if the bit corresponding to the write destination block of the data (the bit in the difference bitmap 706A corresponding to the PVOL 113A) is off, the first DKC 111A turns that bit on.
- the first DKC 111A performs resync at a predetermined timing (for example, when a resync instruction is received from the first host 103A (or management terminal)). Specifically, the first DKC 111A writes JNL including data in the block corresponding to the ON bit to the third JVOL 115C.
- the second host 103B transmits a takeover command as a predetermined command to the second storage 105B (S2603).
- the second DKC 111B updates various statuses in response to the takeover command (S2604). Specifically, for example, the second DKC 111B performs the following update.
- the pair status corresponding to DVOL 113B and mirror # 0 is updated to “SSWS”.
- if the DVOL 113B is an SVOL, the second DKC 111B normally prohibits data writing from the second host 103B to the DVOL 113B. However, “SSWS” means that data writing is permitted even if the DVOL 113B is an SVOL.
- the pair status corresponding to DVOL 113B and mirror # 1 is updated to “SSUS”. “SSUS” means that the pair corresponding to the mirror # 1 is suspended.
- the second DKC 111B acquires the value represented by the purgeable SEQ # from the third storage 105C (S2605). Specifically, for example, the second DKC 111B requests the value represented by the purgeable SEQ # 1007C (information 1007C in the JNL group management table 704C) corresponding to the JNL group 112C and mirror # 2 from the third storage 105C via the control path between the second and third storages 105B and 105C. In response to the request, the third DKC 111C notifies the second DKC 111B of the value represented by the purgeable SEQ # 1007C corresponding to the JNL group 112C and mirror # 2 via the control path.
- the second DKC 111B compares the value represented by the purgeable SEQ # 1007B corresponding to the JNL group 112B and mirror # 2 with the value represented by the acquired purgeable SEQ # 1007C (S2606). That is, the second DKC 111B compares the SEQ # (hereinafter, SEQ # (2)) of the JNL most recently reflected in the second storage 105B with the SEQ # (hereinafter, SEQ # (3)) of the JNL most recently reflected in the third storage 105C.
- the second DKC 111B determines whether differential resynchronization is possible (S2607).
- the case where differential resync is possible is as shown in FIG. 27, that is, the case where the value represented by the purged SEQ # corresponding to the JNL group and mirror # 2 in the storage (large) is equal to or less than SEQ # (small). In other words, it is the case where the storage (large) holds the JNLs from the JNL including the SEQ # following SEQ # (small) (the SEQ # equal to SEQ # (small) + 1; this JNL is hereinafter JNL (X)) through the JNL including SEQ # (large) (hereinafter, JNL (Y)).
- the JNLs from JNL (X) to JNL (Y), that is, the one or more JNLs including the SEQ #s from the SEQ # equal to SEQ # (small) + 1 through SEQ # (large), are referred to as the one or more differential JNLs.
- the case where differential resync is impossible is as shown in FIG. 28, that is, the case where the value represented by the purged SEQ # corresponding to the JNL group and mirror # 2 in the storage (large) is larger than the SEQ # following SEQ # (small) (the SEQ # equal to SEQ # (small) + 1). This is because the SEQ #s are then not continuous.
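The feasibility test of S2607 reduces to a comparison like the following sketch (assumed inputs; `purged_seq_in_larger` stands for the purged SEQ # in the storage holding the larger of SEQ # (2) and SEQ # (3)):

```python
# Sketch of the differential-resync feasibility test: resync is
# possible only if the storage with the larger SEQ# still holds every
# JNL from SEQ#(small) + 1 up to SEQ#(large), i.e. its purged SEQ# has
# not passed SEQ#(small) (FIG. 27 vs FIG. 28).
def diff_resync_possible(seq_2, seq_3, purged_seq_in_larger):
    seq_small = min(seq_2, seq_3)
    # purged SEQ# <= SEQ#(small) -> the SEQ#s are continuous.
    return purged_seq_in_larger <= seq_small

ok = diff_resync_possible(seq_2=50, seq_3=80, purged_seq_in_larger=45)
gap = diff_resync_possible(seq_2=50, seq_3=80, purged_seq_in_larger=60)
```

When the test fails, the SEQ # sequence has a hole and only the full copy of S2608 can make the DVOLs match.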
- the second DKC 111B performs a full copy (S2608). That is, the second DKC 111B copies all data stored in the DVOL (for example, the DVOL 113B) in the storage having the larger of SEQ # (2) and SEQ # (3) to the DVOL (for example, the DVOL 113C) in the storage having the smaller of SEQ # (2) and SEQ # (3). Thereby, the contents of the DVOL 113B and the contents of the DVOL 113C match.
- if the result of the determination in S2607 is positive (S2607: YES) and SEQ # (2) is greater than SEQ # (3) (S2609: YES), differential resync from the second storage 105B to the third storage 105C is performed (S2610). That is, the differential resync shown in FIG. 9 is performed. Specifically, the second DKC 111B reads the one or more differential JNLs from the second JVOL 115B and transmits to the third storage 105C one or more write requests for the differential JNLs (write requests designating the third JVOL 115C), whereby the one or more differential JNLs are written to the third JVOL 115C.
- instead, the third DKC 111C may transmit a read request designating the second JVOL 115B (a read request including the SEQ #s from the SEQ # next to SEQ # (3)), receive the one or more differential JNLs from the second storage 105B, and write the one or more differential JNLs to the third JVOL 115C.
- the third DKC 111C reflects the one or more difference JNLs in the third JVOL 115C to the DVOL 113C in the ascending order of SEQ #, starting from the JNL having the SEQ # next to SEQ # (3). As a result, the contents of the DVOL 113C match the contents of the DVOL 113B.
- if the result of the determination in S2607 is positive (S2607: YES) and SEQ # (2) is smaller than SEQ # (3) (S2609: NO), differential resync from the third storage 105C to the second storage 105B is performed (S2611). That is, the differential resync shown in FIG. 8 is performed. Specifically, the second DKC 111B transmits a read request designating the third JVOL 115C (a read request including the SEQ #s from the SEQ # next to SEQ # (2) through SEQ # (3)), receives the one or more differential JNLs from the third storage 105C, and writes the one or more differential JNLs to the second JVOL 115B.
- instead, the third DKC 111C may read the one or more differential JNLs from the third JVOL 115C and transmit to the second storage 105B one or more write requests for the differential JNLs (write requests designating the second JVOL 115B), thereby writing the one or more differential JNLs to the second JVOL 115B.
- the second DKC 111B reflects the one or more difference JNLs in the second JVOL 115B to the DVOL 113B in the ascending order of SEQ #, starting from the JNL having the SEQ # next to SEQ # (2). Thereby, the contents of DVOL 113B match the contents of DVOL 113C.
- the second DKC 111B permits writing of data from the second host 103B to the second DVOL 113B (S2902). Specifically, the second DKC 111B updates the pair status corresponding to the DVOL 113B and mirror # 2 to “SSWS” (that is, write-permitted). As a result, not only the pair status corresponding to the DVOL 113B and mirror # 0 but also the pair status corresponding to mirror # 2 is “SSWS”, so data writing from the second host 103B to the second DVOL 113B becomes possible.
- the second DKC 111B sets the second DVOL 113B as a PVOL (the third DVOL 113C becomes an SVOL).
- the initial value of the SEQ # corresponding to the JNL group 112B is the SEQ # following the larger of SEQ # (2) and SEQ # (3) (that is, the larger of SEQ # (2) and SEQ # (3) plus 1).
- the second host 103B transmits a write request designating the PVOL 113B to the second storage 105B according to the business.
- the second DCK 111B writes data according to the write request to the CM 630B, and then writes to the PVOL 113B.
- the second DKC 111B updates SEQ #, creates a JNL including the updated SEQ # (or SEQ # before update) and the data written in the PVOL 113B, and writes the JNL in the second JVOL 115B.
- the third DKC 111C receives the JNL in the second JVOL 115B from the second storage 105B by transmitting a read request designating the second JVOL 115B to the second storage 105B.
- the third DKC 111C writes the received JNL to the third JVOL 115C.
- the third DKC 111C reflects the unreflected JNL in the third JVOL 115C to the SVOL 113C in the order of the SEQ #.
- data written to the PVOL 113B is copied to the SVOL 113C.
- the second storage 105B acquires the SEQ # (SEQ # (3)) of the JNL recently reflected in the third storage 105C at the site 101B that is the switching destination of the operation site. Then, SEQ # (3) is compared with the SEQ # (SEQ # (2)) of the JNL recently reflected in the second storage 105B. In addition, the second storage 105B determines whether differential resync is possible based on the relationship between the purged SEQ # in the storage having the larger of SEQ # (2) and SEQ # (3) and the smaller of SEQ # (2) and SEQ # (3).
- differential resync is performed based on the result of the determination and the relationship between SEQ # (2) and SEQ # (3). As a result, even if the first storage 105A is stopped due to a failure or the like, the business is appropriately continued.
- a JVOL included in one copy source JNL group is common to a plurality of copy destination JNL groups.
- the number of JVOLs included in one copy source JNL group does not depend on the number of copy destination JNL groups.
- Example 2 of the present invention will be described. In this case, differences from the first embodiment will be mainly described, and description of common points with the first embodiment will be omitted or simplified (this applies to the third and fourth embodiments).
- writing of data to the second DVOL 113B is permitted even during differential resync.
- data writing from the second host 103B to the second DVOL 113B is permitted (S3002).
- specifically, the second DKC 111B changes both the pair status corresponding to the second DVOL 113B and mirror # 0 and the pair status corresponding to the second DVOL 113B and mirror # 2 to “SSWS”.
- “when it is determined to be YES in S2607” is when it is determined that the differential resync of either S2610 or S2611 is performed.
- the following problem may occur if no measures are taken. That is, in the differential resync from the third storage 105C to the second storage 105B, old data in a differential JNL may be written over a block in the DVOL 113B that holds the latest data written from the second host 103B.
- when the second DKC 111B receives a write request designating the DVOL 113B from the second host 103B (S3101), the second DKC 111B writes the write target data to the write destination block in the DVOL 113B (S3102). At this time, the second DKC 111B manages the write destination block as a JNL non-reflection destination (S3103).
- the SM 640B stores non-reflection management information indicating write destinations to which JNLs need not be reflected
- the second DKC 111B registers the number of the DVOL 113B and the address of the block in the DVOL 113B (for example, the LBA (Logical Block Address)) in the non-reflection management information.
- whether a JNL reflection destination block is a non-reflection destination is determined by, for example, whether the address of that block is registered in the non-reflection management information.
- “not reflecting a JNL” means, for example, ignoring the JNL.
- the ignored JNL may be left in the second JVOL 115B or may be immediately deleted from the second JVOL 115B.
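The skip-on-reflection behavior of this embodiment can be sketched as follows (assumed shapes; blocks registered in the non-reflection management information are simply ignored during reflection):

```python
# Sketch of non-reflection management: blocks written by the second
# host during differential resync are registered (S3103), and
# differential JNLs targeting them are ignored during reflection so
# old data never overwrites the latest data.
def reflect_with_skip(dvol, diff_jnls, non_reflect_blocks):
    for jnl in sorted(diff_jnls, key=lambda j: j["meta"]["seq"]):
        dest = jnl["meta"]["write_dest"]
        if dest in non_reflect_blocks:
            continue                      # ignore the JNL (do not reflect)
        dvol[dest] = jnl["data"]

dvol = ["latest", None]                   # block 0 holds new host data
non_reflect = {0}                         # block 0 registered in S3103
jnls = [{"meta": {"seq": 1, "write_dest": 0}, "data": "old"},
        {"meta": {"seq": 2, "write_dest": 1}, "data": "b1"}]
reflect_with_skip(dvol, jnls, non_reflect)
```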
- the business can be started earlier than in the first embodiment, and the problem that may occur when the DVOL 113B is in the write-permitted state during differential resync (that is, old data in a differential JNL being written over a block in which the latest data has been written) can be avoided.
- the second host 103B is allowed to read data from the second DVOL 113B even during differential resync (in the first embodiment, reading is prohibited). Specifically, for example, when it is determined that S2607 in FIG. 26 is YES, the second DKC 111B is allowed to read data from the second DVOL 113B.
- without measures, the data read by the second host 103B from the DVOL 113B may not be the latest data (the data possessed by the differential JNL having the largest SEQ #) but old data.
- the second DKC 111B receives a read request designating the DVOL 113B before the differential resync is completed (S3301)
- the second DKC 111B secures a CM area in the CM 630B (S3302)
- it is determined whether or not the differential resync is a differential resync from the third storage 105C to the second storage 105B (S3303).
- the second DKC 111B reads data from the read source block (block in the DVOL 113B) designated by the read request, writes the read data to the CM area, The data in the CM area is transmitted to the second host 103B (S3307).
- the second DKC 111B determines whether or not the latest JNL whose reflection target is the read source block is in the second JVOL 115B (S3304).
- the “latest JNL” mentioned here is, among the one or more differential JNLs in the second and third JVOLs 115B and 115C, the newest of the one or more JNLs whose reflection target is the read source block.
- the second DKC 111B reflects the latest JNL from the second JVOL 115B in the read source block in the DVOL 113B (S3306), and then performs S3307.
- in the differential resync, the second DKC 111B does not reflect in the read source block any differential JNL whose reflection target is the read source block and which is older than the latest JNL (a JNL having a SEQ # smaller than the SEQ # included in the latest JNL) (for example, such a JNL is purged from the second JVOL 115B).
- the data transmitted to the second host 103B may be provided from the JNL instead of being provided from the second DVOL 113B.
- the second DKC 111B transmits a read request including the SEQ # of the latest JNL to the third storage 105C, thereby acquiring the latest JNL from the third storage 105C, and writes the latest JNL to the second JVOL 115B. That is, the second DKC 111B obtains the latest JNL from the third storage 105C in preference to other JNLs. Thereafter, the second DKC 111B performs the above-described S3306 and S3307. Note that, even in S3307 in the case of S3304: NO, the data transmitted to the second host 103B may be provided from the JNL instead of from the second DVOL 113B.
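The read path of this embodiment (S3304 through S3307) can be sketched like this (hypothetical helpers; the latest JNL targeting the read source block is reflected before the read is served):

```python
# Sketch of serving a read during differential resync: before
# returning data, reflect the latest differential JNL that targets the
# read source block, so the host never sees stale data; older JNLs for
# that block are skipped.
def read_block(dvol, diff_jnls, block):
    targeting = [j for j in diff_jnls if j["meta"]["write_dest"] == block]
    if targeting:                                    # S3304-S3306
        latest = max(targeting, key=lambda j: j["meta"]["seq"])
        dvol[block] = latest["data"]                 # older JNLs are ignored
    return dvol[block]                               # S3307

dvol = ["stale"]
jnls = [{"meta": {"seq": 5, "write_dest": 0}, "data": "v5"},
        {"meta": {"seq": 9, "write_dest": 0}, "data": "v9"}]
value = read_block(dvol, jnls, 0)
```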
- the business can be started earlier than in the first embodiment, and the problem that may occur when the DVOL 113B is in the read-permitted state during differential resync (that is, old data being provided to the second host 103B instead of the latest data) can also be avoided.
- the differential resync can be performed even if all the differential JNLs are not transferred between the storages 105B and 105C.
- the second DKC 111B determines whether or not an unreflected JNL corresponding to the difference JNL exists in the old storage (S3401).
- the “old storage” is the storage device, of the storage devices 105B and 105C, that has the smaller of SEQ # (2) and SEQ # (3) as its purgeable SEQ #.
- the storage device, of the storages 105B and 105C, that is not the old storage is hereinafter referred to as the “new storage”.
- “an unreflected JNL corresponding to the differential JNLs exists in the old storage” means that, as shown in FIG., the JVOL in the old storage contains an unreflected JNL having a SEQ # larger than the purgeable SEQ # in the old storage.
- if the old storage is the second storage 105B, for example, and there is at least one unreflected JNL in the second JVOL 115B, the result of the determination in S3401 is affirmative.
- the second DKC 111B inquires of the third DKC 111C about the value represented by the SEQ # included in the latest JNL among the one or more unreflected JNLs, so that the determination in S3401 can be made.
- the second DKC 111B can make the determination in S3401 by comparing the SEQ # (hereinafter, answer SEQ #) answered by the third DKC 111C with SEQ # (3) + 1. If the answer SEQ # is equal to or greater than SEQ # (3) + 1, the result of the determination in S3401 is affirmative.
- if the result of the determination in S3401 is negative (S3401: NO), S2607 in FIG. 26 is performed.
- the second DKC 111B determines whether there are sufficient unreflected JNLs in the old storage (S3402). If the SEQ # of the latest unreflected JNL in the old storage is greater than or equal to the SEQ # of the latest JNL among the one or more differential JNLs, the result of this determination is affirmative.
- the DKC in the old storage reflects, among the one or more unreflected JNLs in the old storage, all the JNLs corresponding to the one or more differential JNLs in the DVOL in the old storage (S3403). If any unreflected JNLs remain, the DKC in the old storage may purge the remaining JNLs.
- S3404 is performed. That is, the DKC in the old storage acquires from the new storage the differential JNLs covering the shortage (the JNLs corresponding to the difference between the unreflected JNLs in the old storage and the one or more differential JNLs), and stores those differential JNLs in the JVOL in the old storage. Then, the DKC in the old storage reflects all the unreflected JNLs and differential JNLs in the JVOL in the old storage in the DVOL in the old storage.
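The shortage-only transfer of this embodiment (S3401 through S3404) can be sketched as follows (assumed representation; only the differential JNLs missing from the old storage are fetched from the new storage):

```python
# Sketch: the old storage keeps the unreflected JNLs it already holds
# and fetches from the new storage only the differential JNLs it is
# missing, then reflects everything in SEQ# order.
def resync_with_local_jnls(local_unreflected, needed_seqs, fetch_from_new):
    have = {j["meta"]["seq"] for j in local_unreflected}
    missing = [s for s in needed_seqs if s not in have]
    fetched = [fetch_from_new(s) for s in missing]    # only the shortage
    return sorted(local_unreflected + fetched, key=lambda j: j["meta"]["seq"])

new_storage = {s: {"meta": {"seq": s}, "data": f"d{s}"} for s in range(1, 6)}
local = [new_storage[1], new_storage[2]]              # already held locally
to_reflect = resync_with_local_jnls(local, range(1, 6), new_storage.__getitem__)
```

Only JNLs 3 through 5 cross the link between the storages, which is the point of this embodiment.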
- Examples 1 to 4 may be combined.
- sequence number (SEQ #) is adopted as the update number, but other types of numbers that increase or decrease regularly may be adopted.
- the second storage 105B in the operation switching destination site determines various processes (for example, whether delta resync is possible). However, in at least one of the first to fourth embodiments, at least one of various processes may be performed by the third storage 105C.
- all JNLs have the updated SEQ #, but may have the pre-update SEQ # instead.
- At least one of the storages 105A to 105C may have a plurality of JNL groups.
- At least one of the storages 105A to 105C may have a logical volume other than the logical volumes included in the JNL group.
- JNL transfer between storage devices may be either a transfer in which the copy destination storage device reads JNLs from a JVOL in the copy source storage device, or a transfer in which the copy source storage device writes JNLs to a JVOL in the copy destination storage device.
Abstract
Description
(*) There are a first storage device, a second storage device, and a third storage device.
(*) The first storage device is the copy source, and the second and third storage devices are copy destinations from the first storage device.
(*) A first host device is connected to the first storage device, a second host device is connected to the second storage device, and a third host device is connected to the third storage device.
(*) The first storage device has a first DVOL, a first first JVOL, and a second first JVOL.
(*) The second storage device has a second DVOL and a second JVOL.
(*) The first and second DVOLs form a pair. The first DVOL is the primary DVOL, and the second DVOL is the secondary DVOL.
(*) The third storage device has a third DVOL and a third JVOL.
(*) The first and third DVOLs form a pair. The first DVOL is the primary DVOL, and the third DVOL is the secondary DVOL.
(*) The first host device writes data to the first DVOL.
(*) The first storage device writes a JNL of the data written to the first DVOL to both the first first JVOL and the second first JVOL. In addition to the host device's I/O target data, the JNL includes an update number.
(*) The JNL in the first first JVOL is copied to the second JVOL.
(*) The second storage device reflects, among the one or more unreflected JNLs in the second JVOL, the JNL having the oldest update number in the second DVOL (writes the data possessed by the JNL to the second DVOL).
(*) The JNL in the second first JVOL is copied to the third JVOL.
(*) The third storage device reflects, among the one or more unreflected JNLs in the third JVOL, the JNL having the oldest update number in the third DVOL (writes the data possessed by the JNL to the third DVOL).
(A1) A journal is copied from the first journal storage resource to the second journal storage resource.
(B1) The second storage device reflects one or more unreflected journals in the second journal storage resource in the second data volume in the order of update numbers.
(A2) A journal is copied from the first journal storage resource to the third journal storage resource.
(B2) The third storage device reflects one or more unreflected journals in the third journal storage resource in the third data volume in the order of update numbers.
(x1) It is determined which is newer: the update number of the journal most recently reflected in the second storage device or the update number of the journal most recently reflected in the third storage device.
(x2) It is determined whether the new storage device of the second and third storage devices (the storage device having the update number determined to be newer in (x1)) has one or more differential journals. The one or more differential journals are the one or more journals from the journal having the update number following the update number not determined to be newer in (x1) to the journal having the update number determined to be newer in (x1).
(x3) If the result of the determination in (x2) is affirmative, the one or more differential journals are copied from the new storage device to the old storage device (the storage device of the second and third storage devices that is not the new storage device).
(x4) The old storage device reflects the one or more differential journals in the data volume in the old storage device in the order of update numbers.
(*) SEQ #,
(*) write destination information (information indicating where in the DVOL the data is written),
(*) PVOL # (the number of the copy source DVOL),
(*) SVOL # (the number of the copy destination DVOL),
(*) information indicating the position in the JVOL of the data corresponding to this meta information (this information is included when the JNL is written to the JVOL),
are included.
(*) JNL group # 1001A representing the number of the JNL group,
(*) status 1002A representing the status of the JNL group,
(*) mirror # 1003A representing the number of a mirror existing in the remote copy system according to the present embodiment,
(*) partner JNL group # 1004A representing the number of the partner JNL group,
(*) partner storage # 1005A representing the number of the storage device having the partner JNL group,
(*) purged SEQ # 1006A representing the SEQ # of the recently purged JNL,
(*) purgeable SEQ # 1007A representing the SEQ # of a JNL that may be purged,
(*) read SEQ # 1008A representing the SEQ # of the recently read JNL,
are held.
(*) DVOL # 1101A representing the number of the DVOL 113A,
(*) JNL group # 1102A representing the number of the JNL group 112A including the DVOL 113A,
(*) copy destination VOL # 1103A representing the number of the copy destination DVOL of the DVOL 113A,
(*) status 1104A representing, for the pair of the DVOL 113A and the copy destination DVOL, the pair status of the DVOL 113A,
are held.
(A) The JNL including, as its SEQ #, the value represented by the purged SEQ # 1006A (see FIG. 19) corresponding to the JNL group 112A.
(B) The JNL including, as its SEQ #, the smaller of the value represented by the purgeable SEQ # 1007A corresponding to mirror # 0 of the JNL group 112A and the value represented by the purgeable SEQ # 1007A corresponding to mirror # 1 of the JNL group 112A.
(*) When the first DKC 111A writes data to the PVOL 113A, if the bit corresponding to the write-destination block of that data (the bit in the differential bitmap 706A corresponding to the PVOL 113A) is off, the first DKC 111A turns that bit on.
(*) The first DKC 111A performs a resync at a predetermined timing (for example, when it receives a resync instruction from the first host 103A (or a management terminal)). Specifically, the first DKC 111A writes a JNL containing the data in the blocks corresponding to the on bits to the third JVOL 115C.
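The differential-bitmap behavior above can be sketched minimally. This is illustrative only; the class shape and the use of a plain list for PVOL blocks are assumptions.

```python
# On each write, the bit for the write-destination block is turned on; on
# resync, only blocks whose bits are on are journaled, then the bitmap is
# cleared. (Illustrative sketch, not the disclosed implementation.)

class DifferentialBitmap:
    def __init__(self, nblocks):
        self.bits = [False] * nblocks

    def on_write(self, block):
        if not self.bits[block]:      # turn the bit on only if it was off
            self.bits[block] = True

    def resync(self, pvol, seq_start):
        journals, seq = [], seq_start
        for block, dirty in enumerate(self.bits):
            if dirty:                 # journal only the updated blocks
                seq += 1
                journals.append({"seq": seq, "addr": block,
                                 "data": pvol[block]})
        self.bits = [False] * len(self.bits)
        return journals
```

Tracking only dirty bits, rather than journaling every write while the pair is suspended, is what keeps the resync traffic proportional to the number of changed blocks.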
(*) Updates the pair status corresponding to the DVOL 113B and mirror #0 to "SSWS". When the DVOL 113B is an SVOL, the second DKC 111B normally prohibits data writes from the second host 103B to the DVOL 113B; "SSWS", however, means that data writes are permitted even though the DVOL 113B is an SVOL.
(*) Updates the pair status corresponding to the DVOL 113B and mirror #1 to "SSUS". "SSUS" means that the pair corresponding to mirror #1 is suspended. As a result, even if the pair status corresponding to mirror #0 is "SSWS", the prohibition on data writes from the second host 103B to the DVOL 113B is maintained.
(*) Changes the status of the DVOL 113B to "HOLD". "HOLD" means waiting for the difference resync to start.
(*) In the mirror bitmap 707B, turns off the bit corresponding to mirror #0 and turns on the bit corresponding to mirror #2.
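The status updates above amount to one small state change per mirror. The following sketch is illustrative only: the status codes come from the text, but the function name and data layout are assumptions.

```python
# Pair-status bookkeeping for DVOL 113B after the primary stops
# (status codes "SSWS"/"SSUS"/"HOLD" from the text; all names assumed).

def switch_to_difference_resync(pair_status, mirror_bitmap):
    pair_status[0] = "SSWS"   # mirror #0: writes allowed despite SVOL role
    pair_status[1] = "SSUS"   # mirror #1: suspended, write ban maintained
    vol_status = "HOLD"       # waiting for the difference resync to start
    mirror_bitmap[0] = False  # mirror #0 no longer active
    mirror_bitmap[2] = True   # mirror #2 becomes active
    return pair_status, vol_status, mirror_bitmap
```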
Claims (11)
An asynchronous remote copy system, which is a storage system that performs asynchronous remote copy, the system comprising:
a first storage device that has a first storage resource group and is coupled to a first host device;
a second storage device that has a second storage resource group and is coupled to a second host device; and
a third storage device that has a third storage resource group,
wherein the first storage resource group has a first data volume, which is a logical volume to which data is written, and a first journal storage resource, which is a storage resource to which journals of the data are written,
the second storage resource group has a second data volume, which is a logical volume to which data is written, and a second journal storage resource, which is a storage resource to which journals of the data are written,
the third storage resource group has a third data volume, which is a logical volume to which data is written, and a third journal storage resource, which is a storage resource to which journals of the data are written,
when data is written from the first host device to the first data volume, the first storage device updates an update number, which is a number updated each time data is written to a data volume in the first storage resource group, creates a journal containing the update number and the data, and writes the journal to the first journal storage resource,
multi-target asynchronous remote copy is performed, in which the journal is transferred from the first storage device to the second storage device and reflected in the second data volume, whereby the data in the first data volume is written to the second data volume, and the journal is transferred from the first storage device to the third storage device and reflected in the third data volume, whereby the data in the first data volume is written to the third data volume, and
(X) when the first storage device stops:
(x1) it is determined which is newer: the update number of the journal most recently reflected in the second storage device, or the update number of the journal most recently reflected in the third storage device;
(x2) it is determined whether the newer storage device, which is the one of the second and third storage devices holding the update number determined to be newer in (x1), has one or more difference journals, which are the one or more journals from the journal whose update number immediately follows the update number not determined to be newer in (x1) up to the journal having the update number determined to be newer in (x1);
(x3) if the result of the determination in (x2) is affirmative, the one or more difference journals are copied from the newer storage device to the older storage device, which is the one of the second and third storage devices that is not the newer storage device; and
(x4) the older storage device reflects the one or more difference journals, in order of update number, in the data volume in the older storage device. - The system according to claim 1, wherein
the second storage device is the older storage device and the third storage device is the newer storage device,
the second data volume is in a write-prohibited state in which data writes from the second host device are prohibited,
if the result of the determination in (x2) is affirmative, the second storage device places the second data volume in a write-permitted state in which data writes from the second host device are possible,
when, after placing the second data volume in the write-permitted state, the second storage device receives from the second host device a write request specifying a write destination in the second data volume, the second storage device:
(w1) writes the data according to the write request to the write destination, and
(w2) manages the write destination as a non-reflection destination for difference journals, and
in (x4), the second storage device:
(x41) determines whether the reflection destination of a difference journal in the second journal storage resource is a non-reflection destination,
(x42) reflects the difference journal if the result of the determination in (x41) is negative, and
(x43) does not reflect the difference journal if the result of the determination in (x41) is affirmative. - The system according to claim 1, wherein
the second storage device is the older storage device and the third storage device is the newer storage device,
the second data volume is in a read-prohibited state in which reading data from the second host device is prohibited,
if the result of the determination in (x2) is affirmative, the second storage device places the second data volume in a read-permitted state in which the second host device can read data, and
when, after placing the second data volume in the read-permitted state, the second storage device receives from the second host device a read request specifying a read source in the second data volume, the second storage device:
(R) sends, to the second host device, the data contained in a target journal, which is the difference journal having the newest update number among the difference journals whose reflection destination is the read source. - The system according to claim 3, wherein
in (R), the second storage device:
(r1) determines whether the target journal, which is the difference journal having the newest update number among the difference journals in the second and third journal storage resources whose reflection destination is the read source, is in the second journal storage resource,
(r2) if the result of the determination in (r1) is affirmative, reflects the target journal in the second data volume and sends the data contained in the target journal to the second host device, and
(r3) if the result of the determination in (r1) is negative, acquires the target journal from the third storage device with priority over other journals, writes the target journal to the second journal storage resource, reflects the target journal in the second data volume, and sends the data contained in the target journal to the second host device. - The system according to claim 1, wherein
(P) the second or third storage device determines whether the journal storage resource in the older storage device contains an unreflected journal corresponding to the difference journal,
(Q) if the result of the determination in (P) is affirmative, the older storage device reflects the unreflected journal corresponding to the difference journal in its own data volume, and
for a difference journal corresponding to an unreflected journal reflected in (Q), (Q) is performed instead of (x3) and (x4). - The system according to claim 1, wherein
the first storage device:
(H) purges, from the first journal storage resource, a journal that is purgeable in common for the second and third storage resource groups,
(I) determines whether the usage rate of the first journal storage resource exceeds a predetermined threshold,
(J) if the result of the determination in (I) is affirmative, cancels whichever of a first relationship, which is the relationship between the first storage resource group and the second storage resource group, and a second relationship, which is the relationship between the first storage resource group and the third storage resource group, allows the smaller number of journals to be purged from the first journal storage resource, and
(K) purges, from the first journal storage resource, a journal that is purgeable for the one of the second and third storage resource groups whose relationship has not been canceled,
wherein the usage rate of the first journal storage resource is the ratio of the total capacity of the one or more JNLs in the first journal storage resource to the capacity of the first journal storage resource. - The system according to claim 6, wherein
(L) for a suspended pair, when data is written to a write destination in the first data volume, the first storage device does not create a JNL containing that data and instead records that the write destination has been updated,
(M) in response to a predetermined command, the first storage device creates a journal containing the data at the updated write destination and sends that journal to the storage device having the data volume of the suspended pair, and
(N) that storage device writes the journal to its own journal storage resource and reflects the journals in that journal storage resource in its own data volume. - The system according to claim 1, wherein
in the case of (X), failover from the first host device to the second host device is performed, and the second host device sends a predetermined command to the second storage device, and
the second storage device receives the predetermined command and performs (x1) and (x2) in response to the predetermined command. - The system according to claim 1, wherein
the second data volume is in a write-prohibited state in which writing data from the second host device is prohibited, and
after completion of (x4), the second storage device places the second data volume in a write-permitted state in which the second host device can write data. - The system according to claim 1, wherein
the first journal storage resource is common to the second and third storage resource groups,
in (D1), the second storage device reads, from the first storage device, the journal having the update number next after the newest update number in the second journal storage resource, and writes the read journal to the second journal storage resource, and
in (D2), the third storage device reads, from the first storage device, the journal having the update number next after the newest update number in the third journal storage resource, and writes the read journal to the third journal storage resource. - A storage control method for an asynchronous remote copy system, which is a storage system that performs asynchronous remote copy, wherein
the asynchronous remote copy system comprises:
a first storage device that has a first storage resource group and is coupled to a first host device;
a second storage device that has a second storage resource group and is coupled to a second host device; and
a third storage device that has a third storage resource group and is coupled to a third host device,
wherein the first storage resource group has a first data volume, which is a logical volume to which data is written, and a first journal storage resource, which is a storage resource to which journals of the data are written,
the second storage resource group has a second data volume, which is a logical volume to which data is written, and a second journal storage resource, which is a storage resource to which journals of the data are written,
the third storage resource group has a third data volume, which is a logical volume to which data is written, and a third journal storage resource, which is a storage resource to which journals of the data are written,
when data is written from the first host device to the first data volume, the first storage device updates an update number, which is a number updated each time data is written to a data volume in the first storage resource group, creates a journal containing the update number and the data, and writes the journal to the first journal storage resource, and
multi-target asynchronous remote copy is performed, in which the journal is transferred from the first storage device to the second storage device and reflected in the second data volume, whereby the data in the first data volume is written to the second data volume, and the journal is transferred from the first storage device to the third storage device and reflected in the third data volume, whereby the data in the first data volume is written to the third data volume,
the storage control method comprising:
when the first storage device stops,
(x1) determining which is newer: the update number of the journal most recently reflected in the second storage device, or the update number of the journal most recently reflected in the third storage device;
(x2) determining whether the newer storage device, which is the one of the second and third storage devices holding the update number determined to be newer in (x1), has one or more difference journals, which are the one or more journals from the journal whose update number immediately follows the update number not determined to be newer in (x1) up to the journal having the update number determined to be newer in (x1);
(x3) if the result of the determination in (x2) is affirmative, copying the one or more difference journals from the journal storage resource in the newer storage device to the journal storage resource in the older storage device, which is the one of the second and third storage devices that is not the newer storage device; and
(x4) in the older storage device, reflecting the one or more difference journals, in order of update number, in the data volume in the older storage device.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201080008833.2A CN102326152B (zh) | 2010-04-07 | 2010-04-07 | Asynchronous remote copy system and storage control method |
PCT/JP2010/002540 WO2011125126A1 (ja) | 2010-04-07 | 2010-04-07 | Asynchronous remote copy system and storage control method |
US12/864,298 US8375004B2 (en) | 2010-04-07 | 2010-04-07 | Asynchronous remote copy system and storage control method |
JP2012509185A JP5270796B2 (ja) | 2010-04-07 | 2010-04-07 | Asynchronous remote copy system and storage control method |
EP10841823.7A EP2423818B1 (en) | 2010-04-07 | 2010-04-07 | Asynchronous remote copy system and storage control method |
US13/740,351 US9880910B2 (en) | 2010-04-07 | 2013-01-14 | Asynchronous remote copy system and storage control method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2010/002540 WO2011125126A1 (ja) | 2010-04-07 | 2010-04-07 | Asynchronous remote copy system and storage control method |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/864,298 A-371-Of-International US8375004B2 (en) | 2010-04-07 | 2010-04-07 | Asynchronous remote copy system and storage control method |
US13/740,351 Continuation US9880910B2 (en) | 2010-04-07 | 2013-01-14 | Asynchronous remote copy system and storage control method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011125126A1 true WO2011125126A1 (ja) | 2011-10-13 |
Family
ID=44761651
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/002540 WO2011125126A1 (ja) | 2010-04-07 | 2010-04-07 | Asynchronous remote copy system and storage control method |
Country Status (5)
Country | Link |
---|---|
US (2) | US8375004B2 (ja) |
EP (1) | EP2423818B1 (ja) |
JP (1) | JP5270796B2 (ja) |
CN (1) | CN102326152B (ja) |
WO (1) | WO2011125126A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013186557A (ja) * | 2012-03-06 | 2013-09-19 | Nec Corp | Asynchronous database replication system |
US9886359B2 (en) | 2014-06-20 | 2018-02-06 | Fujitsu Limited | Redundant system, redundancy method, and computer-readable recording medium |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6019940B2 (ja) * | 2012-08-30 | 2016-11-02 | 富士通株式会社 | Information processing apparatus, copy control program, and copy control method |
US20140164323A1 (en) * | 2012-12-10 | 2014-06-12 | Transparent Io, Inc. | Synchronous/Asynchronous Storage System |
WO2014091600A1 (ja) * | 2012-12-13 | 2014-06-19 | 株式会社日立製作所 | Storage apparatus and storage apparatus migration method |
US9317423B2 (en) * | 2013-01-07 | 2016-04-19 | Hitachi, Ltd. | Storage system which realizes asynchronous remote copy using cache memory composed of flash memory, and control method thereof |
US9514012B2 (en) | 2014-04-03 | 2016-12-06 | International Business Machines Corporation | Tertiary storage unit management in bidirectional data copying |
US9672124B2 (en) * | 2014-06-13 | 2017-06-06 | International Business Machines Corporation | Establishing copy pairs from primary volumes to secondary volumes in multiple secondary storage systems for a failover session |
JP6511738B2 (ja) * | 2014-06-20 | 2019-05-15 | 富士通株式会社 | Redundant system, redundancy method, and redundancy program |
CN105430063A (zh) * | 2015-11-05 | 2016-03-23 | 浪潮(北京)电子信息产业有限公司 | Remote replication method between multi-controller shared storage systems |
US10671493B1 (en) | 2017-08-29 | 2020-06-02 | Wells Fargo Bank, N.A. | Extended remote copy configurator of three-site data replication for disaster recovery |
CN107832014A (zh) * | 2017-11-07 | 2018-03-23 | 长沙曙通信息科技有限公司 | Method and apparatus for implementing remote data buffering and store-and-forward |
JP2021174392A (ja) * | 2020-04-28 | 2021-11-01 | 株式会社日立製作所 | Remote copy system and remote copy management method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005267216A (ja) * | 2004-03-18 | 2005-09-29 | Hitachi Ltd | Storage remote copy method and information processing system |
JP2006065629A (ja) | 2004-08-27 | 2006-03-09 | Hitachi Ltd | Data processing system and method |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6745303B2 (en) | 2002-01-03 | 2004-06-01 | Hitachi, Ltd. | Data synchronization of multiple remote storage |
JP4021823B2 (ja) * | 2003-09-01 | 2007-12-12 | 株式会社日立製作所 | Remote copy system and remote copy method |
JP4477370B2 (ja) * | 2004-01-30 | 2010-06-09 | 株式会社日立製作所 | Data processing system |
WO2005086001A1 (en) * | 2004-02-27 | 2005-09-15 | Incipient, Inc. | Distributed asynchronous ordered replication |
JP4452533B2 (ja) * | 2004-03-19 | 2010-04-21 | 株式会社日立製作所 | System and storage device system |
JP4476683B2 (ja) * | 2004-04-28 | 2010-06-09 | 株式会社日立製作所 | Data processing system |
US7395265B2 (en) | 2004-08-27 | 2008-07-01 | Hitachi, Ltd. | Data processing system and storage subsystem provided in data processing system |
US7412576B2 (en) * | 2004-12-08 | 2008-08-12 | Hitachi, Ltd. | Remote copy system having multiple data centers |
JP4738941B2 (ja) * | 2005-08-25 | 2011-08-03 | 株式会社日立製作所 | Storage system and storage system management method |
JP2007066162A (ja) * | 2005-09-01 | 2007-03-15 | Hitachi Ltd | Storage system and storage system management method |
JP5124989B2 (ja) * | 2006-05-26 | 2013-01-23 | 日本電気株式会社 | Storage system, data protection method, and program |
GB0615779D0 (en) * | 2006-08-09 | 2006-09-20 | Ibm | Storage management system with integrated continuous data protection and remote copy |
JP5244332B2 (ja) * | 2006-10-30 | 2013-07-24 | 株式会社日立製作所 | Information system, data transfer method, and data protection method |
JP5022773B2 (ja) * | 2007-05-17 | 2012-09-12 | 株式会社日立製作所 | Method and system for saving the power consumption of a storage system serving as a copy destination of remote copy using journals |
JP2009122873A (ja) * | 2007-11-13 | 2009-06-04 | Hitachi Ltd | Apparatus for managing remote copy between storage systems |
2010
- 2010-04-07 WO PCT/JP2010/002540 patent/WO2011125126A1/ja active Application Filing
- 2010-04-07 EP EP10841823.7A patent/EP2423818B1/en not_active Not-in-force
- 2010-04-07 CN CN201080008833.2A patent/CN102326152B/zh active Active
- 2010-04-07 US US12/864,298 patent/US8375004B2/en active Active
- 2010-04-07 JP JP2012509185A patent/JP5270796B2/ja active Active

2013
- 2013-01-14 US US13/740,351 patent/US9880910B2/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005267216A (ja) * | 2004-03-18 | 2005-09-29 | Hitachi Ltd | Storage remote copy method and information processing system |
JP2006065629A (ja) | 2004-08-27 | 2006-03-09 | Hitachi Ltd | Data processing system and method |
Non-Patent Citations (1)
Title |
---|
See also references of EP2423818A4 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013186557A (ja) * | 2012-03-06 | 2013-09-19 | Nec Corp | Asynchronous database replication system |
US9886359B2 (en) | 2014-06-20 | 2018-02-06 | Fujitsu Limited | Redundant system, redundancy method, and computer-readable recording medium |
Also Published As
Publication number | Publication date |
---|---|
EP2423818B1 (en) | 2016-01-13 |
EP2423818A4 (en) | 2014-04-30 |
US8375004B2 (en) | 2013-02-12 |
US20110251993A1 (en) | 2011-10-13 |
US9880910B2 (en) | 2018-01-30 |
CN102326152A (zh) | 2012-01-18 |
JP5270796B2 (ja) | 2013-08-21 |
US20130132693A1 (en) | 2013-05-23 |
EP2423818A1 (en) | 2012-02-29 |
CN102326152B (zh) | 2014-11-26 |
JPWO2011125126A1 (ja) | 2013-07-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5270796B2 (ja) | Asynchronous remote copy system and storage control method | |
EP1796004B1 (en) | Storage system and data processing system | |
WO2011125127A1 (ja) | Asynchronous remote copy system and storage control method | |
US7398367B2 (en) | Storage subsystem that connects fibre channel and supports online backup | |
EP0902923B1 (en) | Method for independent and simultaneous access to a common data set | |
US6981008B2 (en) | Method for duplicating data of storage subsystem and data duplicating system | |
EP2120147B1 (en) | Data mirroring system using journal data | |
US7328373B2 (en) | Data processing system | |
US8370590B2 (en) | Storage controller and data management method | |
JP3958757B2 (ja) | Failure recovery system using cascaded resynchronization | |
US6282610B1 (en) | Storage controller providing store-and-forward mechanism in distributed data storage system | |
JP4550717B2 (ja) | Virtual storage system control apparatus, virtual storage system control program, and virtual storage system control method | |
US6584473B2 (en) | Information storage system | |
US20070150677A1 (en) | Storage system and snapshot management method | |
US20050114741A1 (en) | Method, system, and program for transmitting input/output requests from a primary controller to a secondary controller | |
JP2005309793A (ja) | Data processing system | |
JP4311532B2 (ja) | Storage system and snapshot management method in the same system | |
US7334164B2 (en) | Cache control method in a storage system with multiple disk controllers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080008833.2 Country of ref document: CN |
WWE | Wipo information: entry into national phase |
Ref document number: 12864298 Country of ref document: US |
WWE | Wipo information: entry into national phase |
Ref document number: 2010841823 Country of ref document: EP |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10841823 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 2012509185 Country of ref document: JP |
WWE | Wipo information: entry into national phase |
Ref document number: 6356/DELNP/2012 Country of ref document: IN |
NENP | Non-entry into the national phase |
Ref country code: DE |