WO2015065399A1 - Datacenter replication - Google Patents

Datacenter replication

Info

Publication number: WO2015065399A1 (PCT/US2013/067629; US2013067629W)
Authority: WO (WIPO, PCT)
Prior art keywords: data center, data, volume, center, recovery
Application number: PCT/US2013/067629
Other languages: French (fr)
Inventors: Sudhakaran SUBRAMANIAN, Ramkumar GURUMOORTHY, Ashok RAJA, Vallinayagam PECHIAPPAN, Shajith THEKKAYIL
Original assignee: Hewlett-Packard Development Company, L.P.
Priority date: 2013-10-30 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2013-10-30
Publication date: 2015-05-07
Application filed by Hewlett-Packard Development Company, L.P.
Priority to PCT/US2013/067629: WO2015065399A1 (en)
Priority to EP13896501.7A: EP3063638A4 (en)
Priority to CN201380081317.6A: CN105980995A (en)
Publication of WO2015065399A1 (en)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16: Error detection or correction of the data by redundancy in hardware
    • G06F 11/20: Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/202: Error detection or correction of the data by redundancy in hardware using active fault-masking, where processing functionality is redundant
    • G06F 11/2041: Error detection or correction of the data by redundancy in hardware using active fault-masking, where processing functionality is redundant with more than one idle spare processing component
    • G06F 11/2048: Error detection or correction of the data by redundancy in hardware using active fault-masking, where processing functionality is redundant and the redundant components share neither address space nor persistent storage
    • G06F 11/2097: Error detection or correction of the data by redundancy in hardware using active fault-masking, maintaining the standby controller/processing unit updated


Abstract

An example system may include a first data center having a primary data volume; a second data center having a replication of the primary data volume from the first data center; a third data center having a replication of the primary data volume from the first data center, the third data center having a recovery data volume updated or synchronized at predetermined intervals, the recovery data volume being a copy of the primary data volume; and a fourth data center having a replication of the recovery data volume from the third data center. The first data center may include a replication of the recovery data volume from at least one of the third data center or the fourth data center.

Description

DATACENTER REPLICATION
BACKGROUND
[0001] Datacenters are widely used to store data for various entities and organizations. For example, large enterprises may store data at one or more datacenters that may be centrally located. The data stored at such datacenters may be vital to the operations of the enterprise and may include, for example, employee data, customer data, etc.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] For a more complete understanding of various examples, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:
[0003] Figure 1 illustrates a system in accordance with an example;
[0004] Figure 2 illustrates the example system of Figure 1 forming two three-datacenter (3DC) arrangements; and
[0005] Figure 3 is a flow chart illustrating an example method.
DETAILED DESCRIPTION
[0006] Various examples described herein provide for successful disaster recovery for datacenters with data currency, while also providing prevention of propagation of data corruption. In this regard, the present disclosure provides a solution which may be implemented as a four-data center arrangement which simultaneously forms a first three-datacenter (3DC) arrangement for replication of the primary data volume and a second 3DC arrangement for replication of a recovery data volume. The recovery data volume may lag in time, thereby preventing propagation of data corruption which may occur in the primary data volume.
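To make the arrangement above concrete, the following sketch models the four data centers and the two overlapping 3DC arrangements of Figures 1 and 2 as simple Python data structures. This is a minimal sketch for illustration only; the class and field names (DataCenter, ReplicationLink, and so on) are assumptions, not terminology from the disclosure.
```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    region: str
    has_primary_volume: bool = False
    has_recovery_volume: bool = False

@dataclass
class ReplicationLink:
    source: str
    target: str
    volume: str   # "primary" or "recovery"
    mode: str     # "synchronous" or "asynchronous"

# Four data centers in two regions (Figure 1): A and B nearby, C and D nearby
# but remote from A and B.
data_centers = [
    DataCenter("A", "region-1", has_primary_volume=True, has_recovery_volume=True),
    DataCenter("B", "region-1", has_primary_volume=True),
    DataCenter("C", "region-2", has_primary_volume=True, has_recovery_volume=True),
    DataCenter("D", "region-2", has_recovery_volume=True),
]

# First 3DC arrangement (primary data): A -> B synchronous, A -> C asynchronous.
# Second 3DC arrangement (recovery data): C -> D synchronous, C -> A asynchronous.
# Arrow "C" (refresh of the recovery volume within data center C) is local,
# so it is not modelled as a replication link here.
replication_links = [
    ReplicationLink("A", "B", "primary", "synchronous"),    # arrow "A" in Figure 1
    ReplicationLink("A", "C", "primary", "asynchronous"),   # arrow "B"
    ReplicationLink("C", "D", "recovery", "synchronous"),   # arrow "D"
    ReplicationLink("C", "A", "recovery", "asynchronous"),  # arrow "E"
]
```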
[0007] Referring now to Figure 1, a system in accordance with a first example is illustrated. The example system 100 of Figure 1 includes four data centers 110, 120, 130, 140. The first data center (Data Center A) 110 and the second data center (Data Center B) 120 may be located close to one another, while the third data center (Data Center C) 130 and fourth data center (Data Center D) 140 may be located close to one another, but away from the first data center 110 and the second data center 120. In this regard, in various examples, the first data center 110 and the second data center 120 may be located in a first region (e.g., same neighborhood, city, state, etc.), while the third data center (Data Center C) 130 and the fourth data center (Data Center D) 140 are located in a second region. Thus, if a disaster (e.g., natural, political, economic, etc.) were to strike the first region, operations of the enterprise may continue using the data centers located in the second region.
[0008] In the example illustrated in Figure 1, the first data center 110 includes a primary data volume 112 for storage of data. In various examples, the primary data volume 112 may include any of a variety of non-transitory storage media, including hard drives, flash drives, etc. Further, the data stored on the primary data volume 112 may include any type of data, including databases, program data, software programs, etc. The primary data volume 112 of the first data center 110 may be the primary storage of data for the enterprise. In this regard, the primary data volume 112 of the first data center 110 may be the storage source accessed for all data read and write requests.
[0009] In some examples, as illustrated in Figure 1, the first data center 110 may be provided with a second data volume, shown in Figure 1 as a recovery data volume 118. Those skilled in the art will appreciate that the primary data volume 112 and the recovery data volume 118 may be either separate storage media or separate portions (e.g., virtual) of a single storage medium.
[0010] The first data center 110 may also include various other components, such as a cache 114 and a server 116, which may include a processor and a memory. The cache 114 and the server 116 may facilitate the storage, writing and access of the data on the primary data volume 112.
[0011] The second data center 120, the third data center 130, and the fourth data center 140 may each be provided with components similar to those of the first data center 110. For example, the second data center 120 includes a primary data volume 122, a cache 124 and a server 126. The example third data center 130 includes a primary data volume 132, a cache 134, a server 136 and a recovery data volume 138. Further, the example fourth data center 140 includes a recovery data volume 148, a cache 144 and a server 146.
[0012] As noted above, in various examples, the first data center 110 and the second data center 120 are located close to each other. As illustrated in Figure 1, the primary data volume 112 of the first data center 110 and the primary data volume 122 of the second data center 120 replicate each other, as indicated by the arrow labeled "A". In various examples, when the first data center 110 and the second data center 120 are in relatively close proximity to each other, the replication indicated as A is performed synchronously. In this regard, the server 116 may write data to the primary data volume 112 through the cache 114. The primary data volume 112 of the first data center 110 may write to the primary data volume 122 of the second data center 120. Only after the primary data volume 112 of the first data center 110 receives an acknowledgement from the primary data volume 122 of the second data center 120 is the write acknowledged to the server 116 of the first data center 110. Thus, synchronous replication may assure that all data written to the primary data volume 112 of the first data center 110 is replicated to the primary data volume 122 of the second data center 120. Synchronous replication is effective when the replicated data volumes are located relatively close to one another (e.g., less than 100 miles).
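As a rough illustration of the synchronous write path in paragraph [0012], a minimal sketch follows; the Volume class and the synchronous_write helper are assumptions made for illustration, not an actual storage API.
```python
class Volume:
    """Toy in-memory stand-in for a data volume, used only for illustration."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data
        return True  # acknowledgement of the write


def synchronous_write(local: Volume, remote: Volume, block_id, data):
    """Write locally, replicate to the nearby remote volume, and acknowledge
    to the caller (the server) only after the remote acknowledgement arrives,
    mirroring arrow "A" of Figure 1."""
    local.write(block_id, data)                 # e.g. primary data volume 112
    remote_ack = remote.write(block_id, data)   # e.g. primary data volume 122
    if not remote_ack:
        raise IOError("remote replica did not acknowledge the write")
    return True  # only now is the write acknowledged to the server
```
In this sketch a write that fails to reach the remote replica is surfaced to the server rather than silently acknowledged, which is what gives synchronous replication its no-data-loss property over short distances.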
[0013] Referring again to Figure 1, the primary data volume 112 from the first data center 110 is replicated to the primary data volume 132 at the third data center 130, as indicated by the arrow labeled "B". In various examples, when the third data center 130 is located relatively far from the first data center 110 and the second data center 120, the replication indicated as B is performed asynchronously. In this regard, the server 116 may write data to the primary data volume 112 through the cache 114, and the write is immediately acknowledged to the server 116 of the first data center 110. The cached write data may then be pushed to the primary data volume 132 of the third data center 130. In other examples, the cached write data is indicated in a journal which may be stored within the first data center 110. The third data center 130 may periodically poll the first data center 110 to read journal information and, if needed, retrieve data for writing to the primary data volume 132 of the third data center 130. Thus, since the acknowledgment of the write is sent to the server 116 before any transfer of data to the third data center 130, asynchronous replication may not assure that all data written to the primary data volume 112 of the first data center 110 is replicated to the primary data volume 132 of the third data center 130. Asynchronous replication can be effectively implemented for data centers that are located far apart from one another (e.g., more than 100 miles).
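The journal-and-polling variant of asynchronous replication described in paragraph [0013] might be sketched as below, reusing the toy Volume class from the previous sketch; the JournalingPrimary class and its methods are hypothetical names, not part of the disclosure.
```python
from collections import deque

class JournalingPrimary:
    """Primary volume that acknowledges writes immediately and records them
    in a journal for later, asynchronous replication (arrow "B" of Figure 1)."""

    def __init__(self, volume):
        self.volume = volume      # e.g. the toy Volume from the sketch above
        self.journal = deque()    # pending (block_id, data) entries

    def write(self, block_id, data):
        self.volume.write(block_id, data)
        self.journal.append((block_id, data))
        return True  # acknowledged to the server before any remote transfer

    def read_journal(self, max_entries=100):
        """Return and consume a batch of journal entries; called when the
        third data center polls for updates."""
        batch = []
        while self.journal and len(batch) < max_entries:
            batch.append(self.journal.popleft())
        return batch


def poll_and_apply(primary: JournalingPrimary, remote_volume):
    """Polling step run on behalf of the remote (third) data center: read the
    journal and apply any outstanding writes to its copy of the volume."""
    for block_id, data in primary.read_journal():
        remote_volume.write(block_id, data)
```
Because the local acknowledgement precedes the transfer, any entries still sitting in the journal when a disaster strikes are exactly the writes that may be missing at the third data center.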
[0014] While Figure 1 illustrates that the primary data volume 132 of the third data center 130 is asynchronously replicated from the first data center 110 (line B), in other examples, the primary data volume 132 of the third data center 130 may be replicated from the second data center 120.
[0015] Referring again to Figure 1, at the third data center 130, the primary data volume 132 is used to generate, update or synchronize a recovery data volume 138, as indicated by the arrow labeled "C". In various examples, the recovery data volume 138 may be generated during an initial copy step. Thereafter, the recovery data volume 138 may be updated or synchronized with the primary data volume 132 at regular intervals, for example. While Figure 1 illustrates the recovery data volume 138 as being a separate storage device from the primary data volume 132, in various examples, the recovery data volume 138 may be provided on the same storage device in, for example, a virtually separated portion of the storage device.
[0016] In various examples, the generation of the recovery data volume 138 may be performed periodically based on, for example, a predetermined schedule. The frequency at which the recovery data volume 138 is generated may be determined based on the needs of the particular implementation. In one example, the recovery data volume 138 is generated every six hours.
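A fixed-interval refresh of the recovery volume, matching the six-hour example in paragraph [0016], could be checked with a small helper like the one below; the function and parameter names are assumptions, and the dict-backed volumes are toy stand-ins.
```python
import time
from typing import Optional

REFRESH_INTERVAL_SECONDS = 6 * 60 * 60   # the "every six hours" example above

def maybe_refresh_recovery_volume(primary_blocks: dict,
                                  recovery_blocks: dict,
                                  last_refresh: float,
                                  now: Optional[float] = None) -> float:
    """If the configured interval has elapsed, take a point-in-time copy of the
    primary volume into the recovery volume (arrow "C"); otherwise leave the
    time-lagging copy untouched. Returns the timestamp of the last refresh."""
    now = time.time() if now is None else now
    if now - last_refresh < REFRESH_INTERVAL_SECONDS:
        return last_refresh                      # not yet time to refresh
    recovery_blocks.clear()
    recovery_blocks.update(primary_blocks)       # e.g. volume 132 -> volume 138
    return now
```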
[0017] Referring again to Figure 1, the recovery data volume 138 from the third data center 130 is replicated to the recovery data volume 148 at the fourth data center 140, as indicated by the arrow labeled "D". In various examples, the third data center 130 and the fourth data center 140 are located relatively close to one another. Thus, the replication of the recovery data volume 138 to the recovery data volume 148 may be effectively performed synchronously.
[0018] In some examples, the recovery data volume 138 of the third data center 130 may be replicated to the recovery data volume 118 at the first data center 110, as indicated by the arrow labeled "E". In various examples, the third data center 130 and the first data center 110 are located relatively far from one another. Thus, the replication of the recovery data volume 138 to the recovery data volume 118 may be effectively performed asynchronously. While Figure 1 illustrates the replication of the recovery volume 118 of the first data center 110 from the third data center 130, in other examples, the recovery volume 118 may be replicated from the recovery volume 148 of the fourth data center 140.
[0019] The example of Figure 1 illustrates each data center 110, 120, 130, 140 provided with a server and a cache. Those skilled in the art will appreciate that, in some examples, the server may not be required in certain data centers. For example, the second data center 120 of Figure 1 may not require the server 126 under normal operation. The second data center 120 may require the server 126 only during a recovery mode, for example in the event of a disaster during which the second data center 120 may be required to serve as the primary data center. Similarly, the third data center 130 and the fourth data center 140 may also not require a server in normal operation.
[0020] Referring now to Figure 2, the example system 100 of Figure 1 is illustrated as forming two three-datacenter (3DC) arrangements. In this regard, the first 3DC arrangement 210 is illustrated by the solid lined triangle. The first 3DC arrangement 210 includes the first data center 110, more particularly, the primary data volume 112 of the first data center 110. The first 3DC arrangement 210 is further formed by the second data center 120 and the third data center (more particularly, the primary data volume 132 of the third data center 130). As described above with reference to Figure 1, the second data center 120 includes the primary data volume 122 having a synchronous replication of the primary data volume 112 from the first data center 110. Further, the third data center 130 includes an asynchronous replication of the primary data volume 112 from the first data center 110.
[0021] The example system 100 further includes a second 3DC arrangement 220, as illustrated by the dashed lined triangle. The second 3DC arrangement 220 includes the third data center 130, more particularly, the recovery data volume 138 of the third data center 130. The second 3DC arrangement 220 is further formed by the fourth data center 140 and the first data center (more particularly, the recovery data volume 118 of the first data center 110). As described above with reference to Figure 1, the fourth data center 140 includes the recovery data volume 148 having a synchronous replication of the recovery data volume 138 from the third data center 130. Further, the first data center 110 includes an asynchronous replication of the recovery data volume 138 from the third data center 130.
[0022] Thus, the first data center 110 may include the primary data volume 112 having current data, as well as a recovery data volume having time-lagging data. Further, the replicated recovery data volume is isolated from the primary data volumes. This provides protection against a regional disaster that may affect the entire region in which both the first data center 110 and the second data center 120 are located, as well as protection against propagation of data corruption.
[0023] Referring now to Figure 3, a flow chart illustrating an example method is provided. In accordance with the example method 300, the primary data volumes at the first data center 110 and the second data center 120 are synchronously replicated (block 310), as illustrated by the line labeled "A" in Figures 1 and 2 above. As described above, since the first and second data centers are located in relatively close proximity, synchronous replication is an effective mode.
[0024] The primary data volume at the first data center 110 is asynchronously replicated to the third data center (block 312), as illustrated by the line labeled "B" in Figures 1 and 2 above. Further, in examples where the third data center is located relatively far from the first data center, asynchronous replication is the most effective mode.
[0025] At block 314, a determination is made as to whether the time for periodic update or synchronization of the recovery volume has arrived. As noted above, the recovery volume may be initially generated as a copy of the primary volume and may be updated or synchronized with the primary volume on a periodic basis. If the determination is made that the time for updating or synchronization of the recovery volume has not yet arrived, the process returns to block 310 and continues synchronous replication of the first and second data centers and the asynchronous replication of the third data center. When the time for update or synchronization of the recovery volume has arrived, the process proceeds to block 316, and a recovery data volume is updated or synchronized with the primary volume at the third data center 130, as illustrated by the line labeled "C" in Figures 1 and 2. As noted above, the frequency at which the recovery data volume is updated or synchronized may be set for the particular implementation.
[0026] The recovery data volume at the third data center may then be synchronously replicated to the fourth data center (block 318), as illustrated by the line labeled "D" in Figures 1 and 2. Since the third and fourth data centers are located in close proximity to each other, synchronous replication can be effectively achieved.
[0027] The recovery data volume at the third data center 130 is asynchronously replicated to the first data center 110 (block 320), as illustrated by the line labeled "E" in Figures 1 and 2 above. Further, in examples where the third data center is located relatively far from the first data center, asynchronous replication is the most effective mode. The process 300 then returns to block 310.
[0028] Referring again to block 314, as noted above, when the determination is made that the time for update or synchronization of the recovery volume has arrived, the process proceeds to block 316 and updates or synchronizes the recovery data volume at the third data center. Those skilled in the art will understand that the updating or synchronization of the recovery data volume may be performed at the same time as the replication in blocks 310 and 312. Thus, the replication in blocks 310 and 312, which may be substantially continuous, need not be suspended during the updating or synchronization of the recovery data volume.
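Putting the pieces together, one pass through the example method of Figure 3 (blocks 310 through 320) might look like the sketch below. The replicate helpers are trivial stand-ins for the synchronous and asynchronous mechanisms sketched earlier, and all names are assumptions rather than parts of the disclosed method.
```python
import time

def synchronous_replicate(source: dict, target: dict) -> None:
    """Stand-in: copy source to target and treat completion as the acknowledgement."""
    target.clear()
    target.update(source)

def asynchronous_replicate(source: dict, target: dict) -> None:
    """Stand-in: a real system would journal and push in the background;
    here it simply copies so the sketch runs end to end."""
    target.clear()
    target.update(source)

def replication_cycle(volumes: dict, last_refresh: float,
                      refresh_interval: float = 6 * 60 * 60) -> float:
    """One pass through the example method 300; `volumes` maps names such as
    "primary_a" to dict-backed toy volumes. Returns the updated refresh time."""
    # Block 310: synchronously replicate the primary volume A -> B (arrow "A").
    synchronous_replicate(volumes["primary_a"], volumes["primary_b"])

    # Block 312: asynchronously replicate the primary volume A -> C (arrow "B").
    asynchronous_replicate(volumes["primary_a"], volumes["primary_c"])

    # Block 314: has the periodic recovery-volume update come due?
    now = time.time()
    if now - last_refresh >= refresh_interval:
        # Block 316: update the recovery volume at C from its primary copy (arrow "C").
        volumes["recovery_c"] = dict(volumes["primary_c"])
        # Block 318: synchronously replicate the recovery volume C -> D (arrow "D").
        synchronous_replicate(volumes["recovery_c"], volumes["recovery_d"])
        # Block 320: asynchronously replicate the recovery volume C -> A (arrow "E").
        asynchronous_replicate(volumes["recovery_c"], volumes["recovery_a"])
        last_refresh = now

    # The caller then loops back to block 310.
    return last_refresh
```
In practice, and as paragraph [0028] notes, the replication of blocks 310 and 312 runs substantially continuously and need not be suspended while the recovery volume is being refreshed.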
[0029] Thus, in accordance with various examples described herein, four data centers may be used to form two separate 3DC arrangements. One of the 3DC arrangements provides replication of the primary data of a data center at a nearby data center (synchronously) and a distant data center (asynchronously), while the other 3DC arrangement provides similar replication for a recovery data volume which lags in time. Thus, data protection is provided against regional disasters, as well as propagation of data corruption.
[0030] Various examples described herein are described in the general context of method steps or processes, which may be implemented in one example by a software program product or component, embodied in a machine-readable medium, including executable instructions, such as program code, executed by entities in networked environments. Generally, program modules may include routines, programs, objects, components, data structures, etc. which perform particular tasks or implement particular abstract data types. Executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.
[0031] Software implementations of various examples can be accomplished with standard programming techniques with rule-based logic and other logic to accomplish various database searching steps or processes, correlation steps or processes, comparison steps or processes and decision steps or processes.
[0032] The foregoing description of various examples has been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or limiting to the examples disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of various examples. The examples discussed herein were chosen and described in order to explain the principles and the nature of various examples of the present disclosure and its practical application to enable one skilled in the art to utilize the present disclosure in various examples and with various modifications as are suited to the particular use contemplated. The features of the examples described herein may be combined in all possible combinations of methods, apparatus, modules, systems, and computer program products.
[0033] It is also noted herein that while the above describes examples, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope as defined in the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A system, comprising:
a first data center having a primary data volume;
a second data center having a replication of the primary data volume from the first data center;
a third data center having a replication of the primary data volume from the first data center, the third data center having a recovery data volume updated or synchronized at predetermined intervals, the recovery data volume being a copy of the primary data volume; and a fourth data center having a replication of the recovery data volume from the third data center,
wherein the first data center includes a replication of the recovery data volume from at least one of the third data center or the fourth data center.
2. The system of claim 1, wherein the first data center is in a same region as the second data center.
3. The system of claim 1, wherein the third data center is in a same region as the fourth data center.
4. The system of claim 1, wherein the replication of the primary data volume at the second data center is a synchronous replication, and
wherein the replication of the primary data volume at the third data center is an asynchronous replication.
5. The system of claim 1, wherein the replication of the recovery data volume at the fourth data center is a synchronous replication.
6. The system of claim 1, wherein the replication of the recovery data volume at the first data center is an asynchronous replication.
7. A method, comprising:
replicating a data volume at a first data center to a second data center; replicating the data volume to a third data center;
periodically updating or synchronizing, at the third data center, a copy of the data volume; and
replicating the copy of the data volume at the third data center to at least one of the first data center or a fourth data center.
8. The method of claim 7, wherein the first data center is in a same region as the second data center.
9. The method of claim 7, wherein the third data center is in a same region as the fourth data center.
10. The method of claim 7, wherein the replicating the data volume at the first data center to the second data center is performed synchronously, and
wherein the replicating the data volume to the third data center is performed asynchronously.
11. The method of claim 7, wherein the replicating the copy of the data volume at the third data center to at least one of the first data center or a fourth data center includes replicating the copy of the data volume at the third data center to the first data center asynchronously.
12. The method of claim 7, wherein the replicating the copy of the data volume at the third data center to at least one of the first data center or a fourth data center includes replicating the copy of the data volume at the third data center to the fourth data center synchronously, wherein the fourth data center is in a same region as the third data center.
13. The method of claim 7, wherein the replicating the copy of the data volume at the third data center to at least one of the first data center or a fourth data center includes replicating the copy of the data volume at the third data center to the first data center asynchronously and the fourth data center synchronously.
14. A system, comprising:
a first three-data-center (3DC) arrangement formed by:
a first data center having a primary data volume; a second data center having a synchronous replication of the primary data volume from the first data center; and
a third data center having an asynchronous replication of the primary data volume from the first data center; and
a second 3DC arrangement formed by:
the first data center;
the third data center; and
a fourth data center,
wherein the third data center includes a recovery data volume updated or synchronized at predetermined intervals, the recovery data volume being a copy of the primary data volume,
wherein the fourth data center includes a synchronous replication of the recovery data volume from the third data center, and
wherein the first data center includes an asynchronous replication of the recovery data volume from at least one of the third data center or the fourth data center.
15. The system of claim 14, wherein the first data center and the second data center are in a first region, and the third data center and the fourth data center are in a second region.
PCT/US2013/067629 2013-10-30 2013-10-30 Datacenter replication WO2015065399A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/US2013/067629 WO2015065399A1 (en) 2013-10-30 2013-10-30 Datacenter replication
EP13896501.7A EP3063638A4 (en) 2013-10-30 2013-10-30 Datacenter replication
CN201380081317.6A CN105980995A (en) 2013-10-30 2013-10-30 Datacenter replication

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2013/067629 WO2015065399A1 (en) 2013-10-30 2013-10-30 Datacenter replication

Publications (1)

Publication Number Publication Date
WO2015065399A1 (en) 2015-05-07

Family

ID=53004812

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/067629 WO2015065399A1 (en) 2013-10-30 2013-10-30 Datacenter replication

Country Status (3)

Country Link
EP (1) EP3063638A4 (en)
CN (1) CN105980995A (en)
WO (1) WO2015065399A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7117386B2 (en) * 2002-08-21 2006-10-03 Emc Corporation SAR restart and going home procedures
US7979651B1 (en) * 2006-07-07 2011-07-12 Symantec Operating Corporation Method, system, and computer readable medium for asynchronously processing write operations for a data storage volume having a copy-on-write snapshot
US8745006B2 (en) * 2009-04-23 2014-06-03 Hitachi, Ltd. Computing system and backup method using the same
US8281094B2 (en) * 2009-08-26 2012-10-02 Hitachi, Ltd. Remote copy system
CN103197988A (en) * 2012-01-05 2013-07-10 中国移动通信集团湖南有限公司 Data backup and recovery method, device and database system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040230756A1 (en) * 2001-02-28 2004-11-18 Hitachi. Ltd. Three data center adaptive remote copy
US20030126388A1 (en) * 2001-12-27 2003-07-03 Hitachi, Ltd. Method and apparatus for managing storage based replication
US20040230859A1 (en) * 2003-05-15 2004-11-18 Hewlett-Packard Development Company, L.P. Disaster recovery system with cascaded resynchronization
US20120290787A1 (en) * 2003-06-27 2012-11-15 Hitachi, Ltd. Remote copy method and remote copy system
US8359491B1 (en) * 2004-03-30 2013-01-22 Symantec Operating Corporation Disaster recovery rehearsal using copy on write

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3063638A4 *

Also Published As

Publication number Publication date
EP3063638A4 (en) 2017-07-26
EP3063638A1 (en) 2016-09-07
CN105980995A (en) 2016-09-28

Similar Documents

Publication Publication Date Title
CN108376109B (en) Apparatus and method for copying volume of source array to target array, storage medium
US9251008B2 (en) Client object replication between a first backup server and a second backup server
US7406487B1 (en) Method and system for performing periodic replication using a log
US10565071B2 (en) Smart data replication recoverer
US9772789B1 (en) Alignment fixing on a data protection system during continuous data replication to deduplicated storage
US7299378B2 (en) Geographically distributed clusters
US7987158B2 (en) Method, system and article of manufacture for metadata replication and restoration
US9672117B1 (en) Method and system for star replication using multiple replication technologies
US7536523B2 (en) Point in time remote copy for multiple sites
US7330859B2 (en) Database backup system using data and user-defined routines replicators for maintaining a copy of database on a secondary server
KR101662212B1 (en) Database Management System providing partial synchronization and method for partial synchronization thereof
US10229056B1 (en) Alignment fixing on a storage system during continuous data replication to deduplicated storage
US11080148B2 (en) Method and system for star replication using multiple replication technologies
US20150213100A1 (en) Data synchronization method and system
US9891849B2 (en) Accelerated recovery in data replication environments
US9251230B2 (en) Exchanging locations of an out of synchronization indicator and a change recording indicator via pointers
CN105574187B (en) A kind of Heterogeneous Database Replication transaction consistency support method and system
US20140108349A1 (en) Merging an out of synchronization indicator and a change recording indicator in response to a failure in consistency group formation
US9229970B2 (en) Methods to minimize communication in a cluster database system
CN102368267A (en) Method for keeping consistency of copies in distributed system
US20140156595A1 (en) Synchronisation system and method
US7979396B1 (en) System and method for performing consistent resynchronization between synchronized copies
US6859811B1 (en) Cluster database with remote data mirroring
US10339010B1 (en) Systems and methods for synchronization of backup copies
EP3063638A1 (en) Datacenter replication

Legal Events

Code | Description
121 | Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13896501; Country of ref document: EP; Kind code of ref document: A1)
NENP | Non-entry into the national phase (Ref country code: DE)
REEP | Request for entry into the european phase (Ref document number: 2013896501; Country of ref document: EP)
WWE | Wipo information: entry into national phase (Ref document number: 2013896501; Country of ref document: EP)