WO2015085529A1 - Data replication method, data replication device and storage device - Google Patents

Data replication method, data replication device and storage device

Info

Publication number
WO2015085529A1
Authority
WO
WIPO (PCT)
Prior art keywords
storage device
data
copied
copy
information
Prior art date
Application number
PCT/CN2013/089173
Other languages
English (en)
Chinese (zh)
Inventor
陈怡佳
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN201380002386.3A (published as CN104363977A)
Priority to PCT/CN2013/089173
Publication of WO2015085529A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16 Error detection or correction of the data by redundancy in hardware
    • G06F11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2058 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using more than 2 mirrored copies
    • G06F11/2071 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
    • G06F11/2074 Asynchronous techniques
    • G06F11/2076 Synchronous techniques
    • G06F11/2082 Data synchronisation

Definitions

  • the present invention relates to storage technologies, and more particularly to a data replication method, a data replication device, and a storage device.
  • The traditional "two sites, three data centers" disaster recovery solution consists of three data centers deployed across two sites.
  • The three data centers are Data Center A, Data Center B, and Data Center C.
  • Data Center A and Data Center B are located at two different locations in the same city, and the distance between them is often within 100 km.
  • Data Center C is located in another city that is far away (e.g., 1000 km).
  • Because Data Center A and Data Center B are close to each other, synchronous remote replication can be used between them for data disaster recovery, so the data stored in Data Center A and Data Center B are consistent. Because of the long distance between Data Center A and Data Center C, asynchronous remote replication is usually used between them for data disaster recovery.
  • If Data Center A experiences a disaster, the replication link between Data Center A and Data Center C is interrupted, and Data Center B must re-copy all of the data to Data Center C in an asynchronous remote replication manner.
  • Embodiments of the present invention provide a data replication method, a data replication device, and a storage device to improve data replication efficiency.
  • a first aspect of the embodiments of the present invention provides a data replication method, where the method is applied to a storage system, the storage system includes at least three storage devices, and the first storage device and the second storage device store the same data; the method includes:
  • the second storage device receives the replication progress information sent by the first storage device, where the replication progress information is used to indicate the data that the first storage device has copied to the third storage device; when the first storage device is faulty, determining, according to the replication progress information and the data to be copied saved by the second storage device, the data that has not been copied in the data to be copied, where the data to be copied is the data to be copied by the first storage device to the third storage device; and sending the unreplicated data to the third storage device;
  • the receiving, by the second storage device, the replication progress information sent by the first storage device includes:
  • the second storage device receives a message that is sent by the first storage device and includes a progress address field, where the progress address field is used to indicate the copy progress information.
  • the second storage device saves copy data information, where the copy data information is used to obtain the data to be copied.
  • the method further includes: when the replication startup information sent by the first storage device is received, generating the copy data information according to the replication startup information.
  • the method further includes: receiving the copy data information that is sent by the first storage device.
  • the method further includes: receiving detection information sent by the third storage device, where the detection information is used to indicate that the first storage device is faulty.
  • a second aspect of the embodiments of the present invention provides a data replication method, where the method is applied to a storage system, the storage system includes at least three storage devices, and the first storage device and the second storage device store the same data; the method includes:
  • the second storage device receives the replication progress information sent by the first storage device, where the replication progress information is used to indicate the data that the first storage device has copied to the third storage device; modifying the first copy data information into the second copy data information according to the replication progress information; when the first storage device is faulty, determining, according to the second copy data information and the data to be copied saved by the second storage device, the data that has not been copied in the data to be copied, where the data to be copied is the data to be copied by the first storage device to the third storage device; and sending the unreplicated data to the third storage device.
  • the receiving, by the second storage device, the replication progress information sent by the first storage device includes:
  • the second storage device receives a message that is sent by the first storage device and includes a progress address field, where the progress address field is used to indicate the copy progress information.
  • the first copy data information is used to obtain the data that has not been copied to the third storage device before the copy progress information is received.
  • the method further includes: when the replication startup information sent by the first storage device is received, generating the first copy data information according to the replication startup information.
  • the method further includes: receiving the copy data information sent by the first storage device.
  • the method further includes: receiving detection information sent by the third storage device, where the detection information is used to indicate that the first storage device is faulty.
  • a third aspect of the embodiments of the present invention provides a data replication apparatus, where the data replication apparatus is located in a second storage device, and includes:
  • a receiving module configured to receive the copy progress information sent by the first storage device, where the copy progress information is used to indicate that the first storage device has copied data to the third storage device;
  • a determining module, configured to determine, when the first storage device is faulty, according to the copy progress information and the data to be copied saved by the second storage device, the data that has not been copied in the data to be copied, where the data to be copied is the data to be copied by the first storage device to the third storage device;
  • a sending module configured to send the unreplicated data to the third storage device.
  • the receiving module is configured to receive a message that is sent by the first storage device and includes a progress address field, where the progress address field is used to indicate the copy progress information.
  • the second storage device saves copy data information, where the copy data information is used to obtain the data to be copied.
  • the apparatus further includes: a generating module, configured to generate the copy data information according to the copy startup information when the copy startup information sent by the first storage device is received.
  • the receiving module is further configured to receive the copy data information that is sent by the first storage device.
  • the receiving module is further configured to receive, when the first storage device is faulty, detection information sent by the third storage device, where the detection information is used to indicate that the first storage device is faulty.
  • a fourth aspect of the embodiments of the present invention provides a data replication device, where the data replication device is located in a second storage device, and includes:
  • a receiving module configured to receive the copy progress information sent by the first storage device, where the copy progress information is used to indicate that the first storage device has copied data to the third storage device;
  • a modifying module, configured to modify the first copy data information into the second copy data information according to the copy progress information;
  • a determining module, configured to determine, when the first storage device is faulty, according to the second copy data information and the data to be copied saved by the second storage device, the data that has not been copied in the data to be copied, where the data to be copied is the data to be copied by the first storage device to the third storage device;
  • a sending module, configured to send the unreplicated data to the third storage device.
  • the receiving module is configured to receive a message that is sent by the first storage device and includes a progress address field, where the progress address field is used to indicate the copy progress information.
  • the first copy data information is used to obtain data that has not been copied to the third storage device last time before the copy progress information is received.
  • the apparatus further includes: a generating module, configured to generate the first copy data information according to the copy startup information when the copy startup information sent by the first storage device is received.
  • the receiving module is further configured to receive the first copy data information that is sent by the first storage device.
  • the receiving module is further configured to receive, when the first storage device is faulty, detection information sent by the third storage device, where the detection information is used to indicate that the first storage device is faulty.
  • a fifth aspect of the embodiments of the present invention provides a storage device, including: a processor, a memory, and a communication bus;
  • the processor and the memory communicate via the communication bus;
  • the memory is used to save a program;
  • the processor is configured to execute the program to implement the data copying method according to the first aspect of the embodiments of the present invention.
  • a sixth aspect of the embodiments of the present invention provides a storage device, including: a processor, a memory, and a communication bus;
  • the processor and the memory communicate via the communication bus;
  • the memory is used to save a program;
  • the processor is configured to execute the program to implement the data copying method according to the second aspect of the embodiments of the present invention.
  • the second storage device receives the replication progress information sent by the first storage device, where the replication progress information is used to indicate that the first storage device has copied data to the third storage device;
  • when the first storage device is faulty, the second storage device may determine, according to the replication progress information and the data to be copied saved by the second storage device, the data that has not been copied in the data to be copied, and send the data that has not been copied to the third storage device.
  • Since the second storage device can determine the data that has not been copied, resending all of the data to be copied is avoided, which improves the replication efficiency.
  • FIG. 1 is a schematic diagram of an application network architecture of a data replication method according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a data replication method according to an embodiment of the present invention
  • FIG. 3 is a difference bitmap provided by an embodiment of the present invention
  • FIG. 4 is a schematic flowchart of another data replication method according to an embodiment of the present invention.
  • FIG. 5 is a signaling interaction diagram of another data replication method according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of a data replication apparatus according to an embodiment of the present invention;
  • FIG. 7 is a schematic structural diagram of another data replication apparatus according to an embodiment of the present invention;
  • FIG. 8 is a schematic structural diagram of a storage device according to an embodiment of the present invention;
  • FIG. 9 is a schematic structural diagram of another storage device according to an embodiment of the present invention.
  • the data replication method provided by the embodiment of the present invention can be implemented in a storage system.
  • FIG. 1 is a schematic structural diagram of a storage system according to an embodiment of the present invention.
  • the storage system includes at least one application server 11 (also referred to as a host), at least one production center, and at least two disaster recovery centers.
  • the application server 11 may comprise any computing device known in the art, such as a server, desktop computer, and the like.
  • the production center includes a production array 22.
  • the disaster recovery center includes a primary site and a secondary site.
  • the production center and the primary site and the secondary site can be connected in a star network or other networking mode, which is not limited here.
  • the primary site includes a disaster recovery array 33 and the secondary site includes a disaster recovery array 44.
  • the production center can receive the write data request sent by the host 11 and save the data carried in the write data request in the production array 22. After the production center receives the write data request sent by the host, it needs to synchronize the data carried by the write data request to the primary site.
  • the primary site may also receive a write data request sent by the host (not shown), and save the data carried in the write data request in the disaster recovery array 33. Similarly, when the primary site receives the write data request sent by the host, it also needs to synchronize the data carried by the write data request to the production array.
  • the production array 22 may be a storage device known in the prior art, such as a Redundant Array of Inexpensive Disks (RAID), Just a Bunch Of Disks (JBOD), a Direct Access Storage Device (DASD) consisting of one or more interconnected disk drives, or a tape library or tape storage device with one or more storage units.
  • the production array 22 can include a controller and a memory (not shown), where the controller contains a processor and a cache.
  • The cache is a memory that sits between the controller and the hard disk; its capacity is smaller than that of the hard disk, but its speed is much higher than that of the hard disk.
  • The memory is the main storage of the production array 22 and generally refers to a non-volatile storage medium.
  • the disaster recovery array 33 and the disaster recovery array 44 are similar in system architecture to the production array 22.
  • the storage space of the production array 22 may include multiple data volumes.
  • the data volume is a logical storage space mapped by physical storage space.
  • the data volume may be a Logical Unit Number (LUN) or a file system.
  • the storage space of the disaster recovery array 33 and the disaster recovery array 44 may include multiple data volumes.
  • the production center and the primary site may be deployed at two different locations in the same city, and the distance may be within 100 km.
  • Data transmission can be performed between the production center and the primary site via IP (Internet Protocol) or FC (Fibre Channel).
  • Synchronous remote replication can be used to implement data disaster recovery between the production array 22 and the disaster recovery array 33.
  • When the production array 22 receives a write data request sent by the host 11, the data carried by the write data request may be written into the production array 22, and the data carried by the write data request is sent to the disaster recovery array 33.
  • the disaster recovery array 33 stores the data. After the data is successfully written into the disaster recovery array 33, the production array 22 returns a write completion response (also referred to as a response message of the write data request) of the write data request to the host 11.
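  • To make the synchronous remote replication flow concrete, the following is a minimal sketch (not the patent's implementation; all names are illustrative) of a write path in which the write completion response is returned to the host only after both the production array and the disaster recovery array have stored the data.

```python
# Hedged sketch of a synchronous remote replication write path: the host's
# write completes only after the production array 22 and the disaster
# recovery array 33 have both stored the data. Illustrative names only.

class Array:
    def __init__(self, name):
        self.name = name
        self.volume = {}                 # address -> data

    def write(self, address, data):
        self.volume[address] = data
        return True                      # write acknowledgement

def synchronous_write(production, dr_primary, address, data):
    """Handle one host write under synchronous remote replication."""
    production.write(address, data)      # store in the production array
    ok = dr_primary.write(address, data) # forward to the primary site
    if not ok:
        raise IOError("replication to the primary site failed")
    return "write completion response"   # returned to the host last

production = Array("production array 22")
dr33 = Array("disaster recovery array 33")
print(synchronous_write(production, dr33, address=0x100, data=b"abc"))
```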
  • Since both the production center and the primary site can receive data from the host, and the data stored in the production center and the primary site are consistent, the roles of the production center and the primary site are interchangeable.
  • the secondary site can be deployed in another city a long distance away (e.g., 1000 km).
  • Data transmission can be performed between the production center and the secondary site through IP (Internet Protocol) or FC (Fibre Channel).
  • Remote data disaster recovery can be implemented between the production array 22 and the disaster recovery array 44 through asynchronous remote replication.
  • When the production array 22 receives a write data request sent by the host 11, the data carried by the write data request can be written into the production array 22, and a response can then be returned to the host 11.
  • the production array 22 sends the data carried by the write data request to the disaster recovery array 44 for the disaster recovery array 44 to store the data.
  • Therefore, the data stored in the disaster recovery array 44 will have a certain time delay relative to the data stored in the production array 22.
  • The production array 22 will receive multiple write data requests over a period of time, and while the production array 22 is performing asynchronous remote replication to the disaster recovery array 44, the host 11 may still send write data requests to the production array 22. It is therefore necessary to distinguish the data that the production array 22 sends to the disaster recovery array 44 from newly received data that has not yet been replicated to the disaster recovery array 44. One way to do this is to use a snapshot.
  • a snapshot is an image of data at a point in time.
  • The purpose of the snapshot is to create a state view of the data volume at a specific point in time. Only the data of the data volume as it existed at the creation time can be seen through this view; modifications to the data volume (newly written data) after this time point will not be reflected in the snapshot view. With this snapshot view, the data can be copied.
  • Since the snapshot data is "static", the production array 22 can copy the snapshot data of each point in time to the disaster recovery array 44 to complete remote data replication, without affecting the production array 22 continuing to receive write data requests sent by the host 11.
  • the production array 22 can snapshot the data in a data volume at intervals (e.g., one hour) to form a data copy of the data volume at that time, and send the data copy to the disaster recovery array 44.
  • the data copy is the data to be copied to the disaster recovery array 44 corresponding to the current replication task.
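  • The following is a minimal sketch of the snapshot-based approach just described, assuming a simple in-memory volume; it shows how the "static" point-in-time copy is replicated while host writes continue on the live volume. All structures are illustrative, not the patent's implementation.

```python
# Hedged sketch: take a point-in-time snapshot of a data volume and copy
# that frozen view to the disaster recovery array 44, while the host keeps
# writing to the live volume. Illustrative in-memory structures only.

import copy

class Volume:
    def __init__(self):
        self.blocks = {}                    # address -> data

    def write(self, address, data):
        self.blocks[address] = data

    def snapshot(self):
        # state view of the volume at this point in time
        return copy.deepcopy(self.blocks)

def replicate_snapshot(snapshot_blocks, dr_volume):
    """Send the frozen snapshot data to the disaster recovery array."""
    for address, data in snapshot_blocks.items():
        dr_volume[address] = data

live = Volume()
live.write(0, b"v1")
snap = live.snapshot()       # the current replication task uses this copy
live.write(0, b"v2")         # later host write; not visible in the snapshot
dr44 = {}
replicate_snapshot(snap, dr44)
assert dr44[0] == b"v1"      # the snapshot view, not the later write
```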
  • the embodiment of the present invention may also solve the above problem by using a method of adding a time slice number to each write data request received by the production array 22.
  • For example, the production array 22 may include a current time slice number manager, where the current time slice number manager stores the current time slice number. The current time slice number may be represented by a numerical value, such as 0, 1, or 2, or by letters, such as a, b, or c; this is not limited here.
  • When a write data request is received, a first number is added to the data and the address of the data carried by the write data request, where the first number is the value of the current time slice number.
  • When the current replication task is triggered, the current time slice number is modified so as to identify subsequent write data requests, and the data and data addresses corresponding to the first number are sent to the disaster recovery array 44.
  • the data corresponding to the first number is the data to be copied to the disaster recovery array 44 corresponding to the current replication task.
  • the data to be copied to the disaster recovery array 44 corresponding to the current replication task may be obtained in other manners, which is not limited herein.
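  • A minimal sketch of the time-slice-number alternative follows, assuming a simple in-memory manager: each write is tagged with the current time slice number, and triggering a replication task advances the number so that the data tagged with the old number becomes the data to be copied. The names are illustrative.

```python
# Hedged sketch of time-slice tagging: writes received before the trigger
# carry the first number; the trigger advances the current time slice
# number, and the first number's data is what the task must copy.

from collections import defaultdict

class TimeSliceManager:
    def __init__(self):
        self.current = 0                        # current time slice number
        self.by_slice = defaultdict(dict)       # slice number -> {addr: data}

    def on_write(self, address, data):
        # tag the write with the current time slice number
        self.by_slice[self.current][address] = data

    def trigger_replication(self):
        first_number = self.current
        self.current += 1                       # later writes get the new tag
        return self.by_slice.pop(first_number)  # data for the current task

mgr = TimeSliceManager()
mgr.on_write(0x10, b"a")
to_copy = mgr.trigger_replication()             # {0x10: b"a"}
mgr.on_write(0x20, b"b")                        # belongs to the next task
print(to_copy)
```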
  • FIG. 2 is a flowchart of a data replication method according to an embodiment of the present invention.
  • For ease of description, the foregoing production array 22 is referred to as a first storage device, the disaster recovery array 33 is referred to as a second storage device, and the disaster recovery array 44 is referred to as a third storage device.
  • Each storage device may include a controller and a memory, wherein the controller may include a processor and a cache, and the following steps may be performed by a processor in the second storage device.
  • the method is applied to a storage system, where the storage system includes at least three storage devices, wherein the first storage device and the second storage device store the same data, and the method includes:
  • Step 21: The second storage device receives the replication progress information sent by the first storage device, where the replication progress information is used to indicate that the first storage device has copied data to the third storage device.
  • a replication task (also called an asynchronous remote replication task) means that the first storage device sends the data carried by the write data requests received by one data volume over a period of time to the third storage device. Specifically, the first storage device can send all the data received by one data volume during this period to the third storage device, or can send only the difference data (also called incremental data) received during this period relative to the last replication task; this is not limited here.
  • the first storage device receives multiple write data requests sent by the host before the replication task is triggered, where each write data request carries data and an address of the data.
  • the first storage device may update the pre-set differential data information (e.g., a difference bitmap) based on the plurality of write data requests.
  • The difference data information is used to record the data written to the first storage device after the last completed replication task was triggered and before the current replication task is triggered.
  • each grid of the difference bitmap corresponds to an address, and a flag is stored in each grid, which may be 1 or 0.
  • "1" means that data is written in the address segment of the grid during this time; "0" means that no data is written in the address segment during this time. Therefore, when the address carried by a received write data request belongs to the address range of a certain grid, the flag of the grid is set to 1.
  • In an optional implementation, each grid of the difference bitmap corresponds to one address, and when the address carried in a received write data request is the same as the address of a certain grid, the flag of the grid is set to 1.
  • the difference bitmap may be saved in the cache of the first storage device or may be saved in the memory of the first storage device. It can be understood that the embodiment of the present invention can also use other flag bits to indicate whether data is written, which is not limited herein.
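  • The grid-and-flag scheme above can be sketched as follows; the grid size is an assumption for illustration, not a value given in the patent.

```python
# Hedged sketch of a difference bitmap: the volume is divided into
# fixed-size grids, each holding a 0/1 flag; a write sets the flag of
# every grid whose address range it touches.

GRID_SIZE = 64 * 1024            # bytes covered by one grid (assumed)

class DifferenceBitmap:
    def __init__(self, volume_size):
        self.flags = [0] * ((volume_size + GRID_SIZE - 1) // GRID_SIZE)

    def record_write(self, address, length):
        first = address // GRID_SIZE
        last = (address + length - 1) // GRID_SIZE
        for grid in range(first, last + 1):
            self.flags[grid] = 1             # "1": data written in this grid

    def clear(self):
        self.flags = [0] * len(self.flags)   # "0": no data written

bitmap = DifferenceBitmap(volume_size=1024 * 1024)
bitmap.record_write(address=70 * 1024, length=8)            # falls in grid 1
print([g for g, flag in enumerate(bitmap.flags) if flag])   # -> [1]
```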
  • the difference data information may also be a difference binary tree, a difference B+ tree, or another tree, used to record the data written to the first storage device after the last completed replication task was triggered and before the current replication task is triggered.
  • Each leaf node corresponds to an address, and each leaf node stores a flag, which may be 1 or 0. "1" means that data is written in the address segment during this time; "0" means that no data is written in the address segment during this time. Therefore, when the address carried by a received write data request belongs to the address range of a leaf node, the flag of the leaf node is set to 1.
  • In an optional implementation, each leaf node of the tree corresponds to one address, and when the address carried in a received write data request is the same as the address of a leaf node, the flag of the leaf node is set to 1.
  • the tree may be saved in the cache of the first storage device, or may be saved in the memory of the first storage device. It is to be understood that the embodiment of the present invention may also use other flag bits to indicate whether data is written, which is not limited herein.
  • the difference data information may also be a differential linked list or other table, configured to record data written to the first storage device after the trigger of the last completed replication task before the current replication task is triggered.
  • Each entry corresponds to an address, and each entry retains a flag, which may be 1 or 0.
  • "1" means that data is written in the segment address during this time; "0" means that no data is written in the segment address during this time. Therefore, when the address carried by the received write data request belongs to an address range of an entry, the flag position of the leaf node is set to 1.
  • An optional implementation is that each entry of the table corresponds to one address, and when the address carried in a received write data request is the same as the address of an entry, the flag of the entry is set to 1.
  • the table may be saved in the cache of the first storage device, or may be saved in the memory of the first storage device. It can be understood that the embodiment of the present invention can also use other flag bits to indicate whether there is data writing, which is not limited herein.
  • the difference data information may also be in the form of a log or the like, used to record the data written to the first storage device after the last completed replication task was triggered and before the current replication task is triggered; details are not described herein again.
  • the first storage device may generate the copy data information according to the difference bitmap (the following steps take the copy bitmap as an example), and the copy data information is used to indicate the address information of the data to be copied to the third storage device. Therefore, the data to be copied to the third storage device corresponding to the current replication task can be obtained by using the duplicate data information.
  • One implementation of generating the copy bitmap is to copy the difference bitmap at the time the replication task is triggered and use the copy as the copy bitmap, and then clear the difference bitmap to continue recording the data carried by write data requests received afterwards, or delete the difference bitmap and generate a new difference bitmap for that purpose.
  • Another implementation is to directly use the difference bitmap at the time the replication task is triggered as the copy bitmap, and then clear the copy bitmap corresponding to the last replication task and use it as the difference bitmap to continue recording the data carried by write data requests received afterwards.
  • In the latter implementation, the first storage device includes at least one difference bitmap and one copy bitmap, and the roles of the difference bitmap and the copy bitmap are interchanged each time a replication task is triggered.
  • the embodiments of the present invention do not limit other implementations for generating a duplicate bitmap according to the difference bitmap.
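  • Both generation schemes can be sketched as follows, under the assumption that a bitmap is a plain list of flags; this is an illustration of the two schemes described above, not the patent's code.

```python
# Hedged sketch of the two copy-bitmap generation schemes. Scheme A copies
# the difference bitmap and clears the original; scheme B reuses the two
# bitmaps, swapping their roles each time a replication task is triggered.

def trigger_scheme_a(device):
    device["copy"] = list(device["diff"])        # snapshot of the differences
    device["diff"] = [0] * len(device["diff"])   # keep recording new writes

def trigger_scheme_b(device):
    # The copy bitmap of the last task becomes the new difference bitmap,
    # and the current difference bitmap becomes the copy bitmap that
    # drives this replication task.
    device["diff"], device["copy"] = device["copy"], device["diff"]
    device["diff"] = [0] * len(device["diff"])   # cleared for new writes

device = {"diff": [1, 0, 1, 0], "copy": [0, 0, 0, 0]}
trigger_scheme_b(device)
print(device["copy"])   # [1, 0, 1, 0] -> the data to copy in this task
print(device["diff"])   # [0, 0, 0, 0] -> records writes after the trigger
```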
  • Since the second storage device and the first storage device perform data disaster recovery for each other, the data stored in the second storage device is consistent with that in the first storage device.
  • the first storage device may send a copy start message to the second storage device, where the copy start message includes copy start information for Notifying the second storage device that the current replication task is triggered.
  • Optionally, the second storage device also maintains copy data information (for example, a copy bitmap) and difference data information (for example, a difference bitmap).
  • The content, usage, and generation manner of the difference bitmap of the second storage device are the same as those of the difference bitmap of the first storage device, and the content and usage of the copy bitmap of the second storage device are the same as those of the copy bitmap of the first storage device.
  • However, as for the manner of generating the copy bitmap of the second storage device, it may be generated in the same manner as the copy bitmap of the first storage device, or the copy bitmap of the first storage device may be carried in the copy initiation message sent by the first storage device to the second storage device.
  • the second storage device may also record the difference data information in the form of a tree, a linked list, and a log, and is not limited herein.
  • the first storage device can obtain the data to be copied corresponding to the current replication task according to the copy bitmap. Specifically, the first storage device may determine, according to the copy bitmap, the addresses where the data to be copied is located, obtain the data to be copied from the memory (or directly from the cache) according to these addresses, and generate a plurality of write data commands in address order and send them to the third storage device.
  • During the replication, the first storage device may send the copy progress information to the second storage device, where the copy progress information is used to indicate the data that the first storage device has copied to the third storage device.
  • In this way, when the first storage device is faulty, the second storage device can take over the first storage device and continue to copy the data that has not been copied to the third storage device.
  • Step 22: When the first storage device is faulty, determine, according to the replication progress information and the data to be copied saved by the second storage device, the data that has not been copied in the data to be copied, where the data to be copied is the data of the first storage device to be copied to the third storage device.
  • One optional implementation is that the second storage device and the first storage device communicate with each other through heartbeats; when the second storage device no longer receives the heartbeat of the first storage device, the first storage device may be known to be faulty.
  • Another optional implementation is that when the third storage device finds that the replication between it and the first storage device is interrupted, it may send detection information to the second storage device to indicate that the first storage device is faulty.
  • the copy progress information may be one address or a segment of addresses, indicating the address of the data that the first storage device has copied.
  • Optionally, the replication progress information may also be a copy bitmap.
  • Specifically, the first storage device can modify the copy bitmap according to the address of the copied data. For example, it can delete the grid in which the address of the copied data is located in the copy bitmap, or set the flag of that grid to 0. Then, the modified copy bitmap is sent to the second storage device.
  • Because the second storage device saves in advance the data to be copied to the third storage device, the second storage device can determine, from the data to be copied, the data that has not been copied. It should be noted that the data to be copied refers to all the data that needs to be copied to the third storage device in the current replication task, and is the sum of the data that the first storage device has copied to the third storage device and the unreplicated data.
  • Optionally, the determining manner may be: the second storage device determines, according to the copy progress information and the copy bitmap, the address of the data that has not been copied in the data to be copied.
  • The copy bitmap here is the copy bitmap saved in advance by the second storage device; it may be the copy bitmap generated by the second storage device according to its difference bitmap when the current replication task was triggered, or may be the copy bitmap sent by the first storage device to the second storage device; this is not limited herein.
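  • As an illustration of step 22, the following sketch assumes sequential, address-ordered copying and a grid-based copy bitmap (as in the earlier sketch): the progress address clears the grids already copied, and the remaining flagged grids give the addresses of the unreplicated data.

```python
# Hedged sketch of determining the unreplicated data: apply the reported
# progress address to the saved copy bitmap, then enumerate the addresses
# whose grids are still flagged. Grid size and ordering are assumptions.

GRID_SIZE = 64 * 1024    # illustrative grid size, as in the earlier sketch

def apply_progress(copy_bitmap, progress_lba):
    """Clear every grid fully covered by the copied range [0, progress_lba]."""
    copied_grids = (progress_lba + 1) // GRID_SIZE
    for grid in range(min(copied_grids, len(copy_bitmap))):
        copy_bitmap[grid] = 0

def unreplicated_addresses(copy_bitmap):
    """Addresses of the data that has not been copied yet."""
    return [grid * GRID_SIZE for grid, flag in enumerate(copy_bitmap) if flag]

copy_bitmap = [1, 1, 1, 1]     # the data to be copied covers four grids
apply_progress(copy_bitmap, progress_lba=2 * GRID_SIZE - 1)   # two copied
print(unreplicated_addresses(copy_bitmap))   # grids 2 and 3 remain
```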
  • Step 23: Send the unreplicated data to the third storage device.
  • The unreplicated data can be obtained from the memory or the cache of the second storage device based on the address. For example, the unreplicated data may be obtained from a snapshot volume in the memory according to the address of the unreplicated data, or obtained from the cache according to the address of the unreplicated data.
  • After the unreplicated data is obtained, one or more write data commands may be generated in address order and sent to the third storage device. It should be noted that, in the embodiment of the present invention, the copied data and the unreplicated data are stored in the same data volume of the third storage device after being sent to the third storage device.
  • the second storage device receives the replication progress information sent by the first storage device, where the replication progress information is used to indicate that the first storage device has copied data to the third storage device;
  • when the first storage device is faulty, the second storage device can determine, according to the copy progress information and the data to be copied saved by the second storage device, the data that has not been copied in the data to be copied, and send the data that has not been copied to the third storage device. Since the second storage device can determine the data that has not been copied, resending all of the data to be copied is avoided, and the copying efficiency is improved.
  • the receiving, by the second storage device, the replication progress information sent by the first storage device may be:
  • the first storage device sends a replication progress message to the second storage device, where the replication progress message carries a progress address field, where the progress address field is used to indicate the replication progress information.
  • The progress address field can contain one address or a segment of addresses.
  • When the progress address field contains one address, the address is used to indicate the address of the last data that the first storage device has successfully copied; when the progress address field contains a segment of addresses, the segment of addresses is used to indicate the addresses of all the data that the first storage device has currently copied.
  • the name of the message is not limited in the embodiment of the present invention.
  • the replication progress message may further include an identifier field of the replication task, where the identifier field of the replication task includes an identifier of the replication task.
  • As described above, a copy task means that the first storage device sends the data carried by the write data requests received by one data volume over a period of time to the third storage device.
  • If the first storage device contains multiple data volumes, multiple replication tasks may be generated, one for each data volume, and each replication task has a unique identifier (such as an ID).
  • the second storage device may determine the current replication task according to the identifier of the replication task, and obtain a replication bitmap corresponding to the current replication task.
  • a replication relationship between each LUN in the first storage device, the second storage device, and the third storage device may be pre-configured, and a replication relationship may include multiple replication tasks, as described above.
  • The current replication task (also called this replication task) refers to one of the replication tasks in the replication relationship.
  • The replication relationship may include the identifier of the replication relationship, and may also include the identifiers of the LUNs of the first storage device, the second storage device, and the third storage device.
  • the identifier of the replication relationship may also be represented by other identifiers, which is not limited herein, as long as the storage device can identify the replication relationship to which the current replication task belongs.
  • the replication relationship may be saved in the first storage device, the second storage device, and the third storage device.
  • The replication task is a replication task between LUNs in two different storage devices initiated according to the replication relationship, and at any given time, only one replication task may exist between two data volumes having a replication relationship.
  • The embodiment of the present invention mainly relates to the interaction process between devices within one replication task; therefore, in the embodiment of the present invention, a replication task identifier may be used to represent a replication task.
  • The identifier of the replication task may be the same as the identifier of the replication relationship, or may be different from the identifier of the replication relationship. For example, the identifier of the replication task may add a time stamp on the basis of the identifier of the replication relationship, to indicate that the replication task is one of the replication tasks initiated at different times between two data volumes having the same replication relationship.
  • The replication progress message may further carry the identifier of the data volume of the first storage device, the identifier of the data volume of the second storage device, and the identifier of the data volume of the third storage device in the current replication process.
  • For example, "A LUN001 B LUN002 C LUN003" is used to indicate that the replication task copies data from the LUN identified as 001 in storage device A and/or the LUN identified as 002 in storage device B to the LUN identified as 003 in storage device C.
  • The copy progress message may further include an operation code field, where the operation code field includes an operation code of the copy progress message, used to indicate that the message is a message for sending the copy progress information to the second storage device.
  • The header of the replication progress message may include an opcode (e.g., opCode), a source device ID (e.g., srcAppId), and a target device ID (e.g., dstAppId).
  • the opcode field is used to indicate that the type of the message is a replication progress message.
  • The source device ID field is used to identify the originator of the replication progress message (in this embodiment, the first storage device), and the target device ID field is used to identify the recipient of the replication progress message (in this embodiment, the second storage device). Both the source device ID field and the target device ID field can be identified by an IP address.
  • the format of the content portion of the replication progress message (for example, the data field in the replication progress message) can be as follows:
  • The LUN ID is a unique identifier of the LUN to which the data to be copied belongs.
  • The ReplicationObject LUN ID, which may be located in the 4th to 8th bytes, is the ID of the target LUN in the third storage device. In other words, this field is used to indicate the target LUN in the target storage device pointed to by the current replication task.
  • The replication progress information may be one logical block address (LBA) or a segment of LBAs, or may be an updated copy bitmap, for example, the updated first copy sub-bitmap.
  • When the replication progress information is one LBA, it is used to represent the LBA of the last data that has been copied; when the replication progress information is a segment of LBAs, it is used to indicate the addresses of all the data that has been copied.
  • The specific form of the copy progress information is not limited here, as long as the progress of the copy can be indicated.
  • the "source device ID” field may be the IP address of the first storage device, and the “target device ID” field may be The IP address of the second storage device.
  • the "LUN ID” field may be the data to be copied belonging to the first storage device. ID of the LUN; the "ReplicationObject LUN ID” field can be the ID of the target LUN in the third storage device; the "Address” field can be the LBA of the last data that has been copied. It should be noted that, in the embodiment of the present invention, the replication relationship between each LUN in the first storage device, the second storage device, and the third storage device is pre-configured.
  • the second storage device may also determine the LUN in the second storage device corresponding to the current replication task according to the LUN ID field in the replication progress message. It can be understood that the LUN correspondence relationship is stored in both the first storage device and the second storage device.
  • the IDs of the LUNs corresponding to the first storage device and the second storage device may be the same or different, as long as the corresponding LUNs storing the same data can be determined according to the LUN correspondence relationship.
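  • Because the patent only partially specifies the byte layout of the data field (the ReplicationObject LUN ID is said to occupy the 4th to 8th bytes), the following encoding sketch uses assumed field widths and is not the normative message format.

```python
# Hedged sketch of packing/unpacking the replication progress message's
# data field. Assumed layout: 4-byte LUN ID | 4-byte ReplicationObject
# LUN ID | 8-byte Address (LBA of the last copied data).

import struct

def pack_progress(lun_id, replication_object_lun_id, last_copied_lba):
    return struct.pack(">IIQ", lun_id, replication_object_lun_id,
                       last_copied_lba)

def unpack_progress(payload):
    lun_id, target_lun_id, lba = struct.unpack(">IIQ", payload)
    return {"LUN ID": lun_id,
            "ReplicationObject LUN ID": target_lun_id,
            "Address": lba}

msg = pack_progress(lun_id=1, replication_object_lun_id=3,
                    last_copied_lba=0x1FFFF)
print(unpack_progress(msg))
```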
  • the copy initiation message may carry a copy start information field, and the copy start information field includes copy start information for notifying the second storage device of the current copy task trigger. It should be noted that, as long as the message carrying the copy start information field can be regarded as the copy start message in the embodiment of the present invention, the name of the message is not limited in the embodiment of the present invention.
  • the replication initiation message may further include an identifier field of the replication task, where the identifier field of the replication task includes an identifier of the replication task.
  • the second storage device may determine the current replication task according to the identifier of the replication task.
  • the copy start message may further include an operation code field, where the operation code field includes an operation code of the copy start message, and is used to indicate that the message is used to notify the second storage device of the current copy task. Triggered message.
  • FIG. 4 is a flowchart of another data replication method according to the present invention. As shown in FIG. 4, the method is applied to a storage system, where the storage system includes at least three storage devices, and the first storage device and the second storage device store the same data; the method includes:
  • Step 31: The second storage device receives the replication progress information sent by the first storage device, where the replication progress information is used to indicate the data that the first storage device has copied to the third storage device.
  • Step 32: Modify the first copy data information into the second copy data information according to the copy progress information.
  • Before step 31, the first storage device may send a replication start message to the second storage device, where the replication start message includes replication startup information used to notify the second storage device that the replication task is triggered.
  • When the replication start message is generated, the first copy data information may be carried in the replication start message and sent to the second storage device.
  • Optionally, the second storage device may also generate the first copy data information after receiving the copy startup information. The copy start message further carries the identifier of the copy task, so the second storage device may determine the current replication task according to the identifier of the replication task, and generate the first copy data information corresponding to the current replication task.
  • the first copy data information at this time is used to indicate address information of all data to be copied to the third storage device.
  • The first copy data information may also refer to previously modified copy data information, that is, the copy data information that the second storage device modified according to the last received copy progress information. In this case, the first copy data information is used to obtain the data that has not been copied to the third storage device before the current copy progress information is received.
  • For example, if the current copy progress information is the copy progress information sent for the third time, the first copy data information refers to the copy bitmap modified by the second storage device according to the second copy progress information, and is used to obtain the unreplicated data before the third copy progress information is received.
  • For the manner of modifying the first copy data information into the second copy data information, refer to the description of the related parts in step 22 of the foregoing embodiment; details are not described herein again.
  • Step 33: When the first storage device is faulty, determine, according to the second copy data information and the data to be copied saved by the second storage device, the data that has not been copied in the data to be copied, where the data to be copied is the data of the first storage device to be copied to the third storage device.
  • The data to be copied refers to all the data that needs to be copied to the third storage device in the current replication task, and is the sum of the data that the first storage device has copied to the third storage device and the data that has not been copied.
  • the second copy data information is used to indicate the address information of the unreplicated data, so the unreplicated data can be obtained according to the second copy data information and the to-be-copied data saved by the second storage device.
  • the address of the unreplicated data may be determined according to the second copy data information, and the unreplicated data is obtained from the cache or the memory of the second storage device according to the address.
  • Step 34: Send the data that has not been copied in the data to be copied to the third storage device.
  • the copied data and the unreplicated data are stored in the same data volume of the third storage device after being sent to the third storage device.
  • In this embodiment, the second storage device receives the copy progress information sent by the first storage device, where the copy progress information is used to indicate the data that the first storage device has copied, and modifies the copy data information according to the copy progress information. Since the second storage device stores in advance the data that the first storage device is to copy to the third storage device, when the first storage device fails, the second storage device may determine the data that has not been copied according to the modified copy data information and the data to be copied, and send the unreplicated data to the third storage device. This avoids resending all of the data to be copied after the first storage device fails, thereby improving the replication efficiency.
  • the following takes the signaling interaction diagram shown in FIG. 5 as an example to describe the detailed process of completing the asynchronous remote replication task by the first storage device, the second storage device, and the third storage device.
  • the first storage device refers to the production array 22 in FIG. 1
  • the second storage device refers to the disaster recovery array 33 in FIG. 1
  • the third storage device refers to the disaster recovery array 44 in FIG. 1 .
  • the method includes: Step 401: A timer triggers an asynchronous remote replication task.
  • The replication task can also be triggered manually; this is not limited here.
  • Step 402 The first storage device blocks receiving a write data request sent by the host.
  • Step 403 The first storage device processes the write data request received before blocking.
  • the write data request may be one or multiple.
  • Processing the write data request means: writing the data and the address of the data to the cache according to the data carried by the write data request and the address of the data, and recording the data in the difference bitmap; For the implementation, reference may be made to the description of FIG. 3 and step 21.
  • Step 404 The first storage device sends a copy start message to the second storage device, where the copy start message includes copy start information, and is used to notify the second storage device that the copy task is triggered.
  • It should be noted that there is no fixed order between step 404 and step 403.
  • Step 405 The second storage device blocks receiving the write data request.
  • Step 406: The second storage device processes the write data requests received before blocking.
  • The write data request may be one or multiple.
  • Processing the write data request means writing the data and the address of the data to the cache according to the data carried by the write data request and the address of the data, and recording the data in the difference bitmap; for the specific implementation, refer to the description of FIG. 3 and step 21.
  • Step 407 The second storage device creates a copy of the data volume.
  • a replication task is the sending of data received by a data volume over a period of time to a third storage device. Therefore, a copy of the data volume needs to be created when the replication task is triggered.
  • the copy of the data volume is also the data to be copied to the third storage device in the copy task.
  • a copy of the data volume may be created by means of a snapshot or by adding a time slice number to the received write data request.
  • Step 408 The second storage device generates a copy bitmap according to the difference bitmap.
  • For a specific implementation, refer to the description of the relevant part of step 21.
  • Step 409 The second storage device sends a response message of the copy initiation message to the first storage device.
  • Step 410: The first storage device creates a copy of the current data volume.
  • The copy of the data volume is the data to be copied to the third storage device in the current replication task.
  • A copy of the data volume may be created by means of a snapshot, or by adding a time slice number to each received write data request.
  • Step 411: The first storage device generates a copy bitmap according to the difference bitmap.
  • For the specific implementation, refer to the description of the relevant part of step 21.
  • Steps 410 and 411 may be performed after steps 407 and 408, or before them.
  • Step 412: The first storage device sends an unblocking message to the second storage device.
  • The unblocking message notifies the second storage device that it may resume receiving write data requests.
  • Step 413: The first storage device unblocks receipt of write data requests sent by the host. After unblocking, the first storage device may continue to receive and process write data requests sent by the host or the second storage device, and may modify the difference bitmap accordingly.
  • Step 414a: The second storage device unblocks receipt of write data requests.
  • After unblocking, the second storage device may continue to receive and process write data requests sent by the host or the first storage device, and may modify the difference bitmap accordingly.
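  • Steps 402 through 414a together form a brief block/drain/copy/unblock handshake whose purpose is that both arrays fix the same consistency point. The toy `Device` class below models only the blocking state; the drain, copy, and bitmap steps are indicated by comments, and all names are invented for the sketch.

```python
class Device:
    """Toy model of an array's blocking state; everything else is elided."""
    def __init__(self, name: str):
        self.name = name
        self.blocked = False

    def block_writes(self) -> None:
        self.blocked = True
        print(f"{self.name}: host writes blocked")

    def unblock_writes(self) -> None:
        self.blocked = False
        print(f"{self.name}: host writes unblocked")

first, second = Device("first"), Device("second")
first.block_writes()     # step 402
second.block_writes()    # step 405
# steps 403/406: drain the write requests received before blocking
# steps 407-411: both devices create volume copies and copy bitmaps
first.unblock_writes()   # steps 412-413 (unblocking message, then unblock)
second.unblock_writes()  # step 414a
```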
  • Step 414b: The first storage device obtains the data to be copied to the third storage device according to the copy bitmap.
  • Specifically, the first storage device may obtain the address of the data to be copied according to the flag bits in the copy bitmap, and then read the data to be copied to the third storage device according to that address.
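  • A sketch of step 414b, under the same assumed grain-indexed bitmap as the earlier fragments: each set flag bit is mapped back to an address, and the data at that address is read out for transmission. The `read_block` callback is a stand-in for the array's actual read path.

```python
GRAIN = 64 * 1024  # hypothetical bytes per bitmap cell, as in the earlier sketch

def data_to_copy(copy_bitmap: dict, read_block) -> list:
    """Walk the flag bits, turn each set bit into an address, and read the
    data to be copied to the third storage device from that address."""
    chunks = []
    for grain, flag in sorted(copy_bitmap.items()):
        if flag:
            address = grain * GRAIN
            chunks.append((address, read_block(address)))
    return chunks

volume = {0: b"blockA", GRAIN: b"blockB", 2 * GRAIN: b"blockC"}
chunks = data_to_copy({0: 1, 2: 1}, lambda addr: volume[addr])
print([addr for addr, _ in chunks])  # [0, 131072]
```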
  • Step 415: The first storage device sends a write data command to the third storage device, where the write data command carries part of the data to be copied in the current replication task.
  • Step 416: The third storage device sends a response message for the write data command to the first storage device.
  • Step 417: The first storage device sends a copy progress message to the second storage device, where the copy progress message carries copy progress information. The copy progress information may be an address or an address segment, indicating the address of the data that the first storage device has finished copying.
  • Step 418: The first storage device modifies its saved copy bitmap. For example, it may delete the cell in which the address of the copied data is located, or set the flag bit of that cell to 0.
  • Alternatively, step 418 may be performed first, and then step 417.
  • In addition, the copy progress information sent in step 417 may be an address or an address segment, or may be the copy bitmap as modified in step 418.
  • Step 419: The second storage device modifies its saved copy bitmap according to the copy progress information. For example, when the copy progress information is an address or an address segment, the second storage device may delete the cell in which the address of the copied data is located, or set the flag bit of that cell to 0.
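  • Steps 418 and 419 apply the same modification on both arrays, which is what keeps the second device's copy bitmap in step with the first device's progress. A minimal sketch, again assuming the hypothetical grain-indexed bitmap:

```python
GRAIN = 64 * 1024  # hypothetical grain size, as in the earlier sketches

def apply_progress(copy_bitmap: dict, copied_addresses) -> None:
    """Steps 418/419 in miniature: for each address reported in the copy
    progress information, clear (here: delete) the matching bitmap cell.
    Setting the flag bit to 0 instead would work equally well."""
    for address in copied_addresses:
        copy_bitmap.pop(address // GRAIN, None)

bitmap = {0: 1, 1: 1, 2: 1}
apply_progress(bitmap, [0, GRAIN])  # the first device reports grains 0 and 1 done
print(bitmap)                       # {2: 1} -- only grain 2 is still unsent
```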
  • Step 420: The first storage device fails.
  • Step 421: The second storage device obtains the data that has not yet been copied to the third storage device according to the modified copy bitmap.
  • Step 422: The second storage device sends write data commands to the third storage device. There may be one or more such commands, used to send the data that has not yet been copied.
  • Step 423: The third storage device sends a response message for the write data command to the second storage device.
  • After all the data has been written, the second storage device may determine from the response messages that the current replication task has been completed.
  • Alternatively, the third storage device may send an indication message to the second storage device to indicate that all data corresponding to the current replication task has been copied.
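  • Putting steps 420 through 422 together: after the failure, the second device's progress-updated copy bitmap identifies exactly which grains were never acknowledged, and its saved copy of the volume supplies the bytes. A sketch under the same assumptions as the earlier fragments:

```python
def take_over(copy_bitmap: dict, saved_copy: dict, send_to_third) -> None:
    """Steps 421-422 in miniature: after the first device fails, the second
    device derives the unsent grains from its progress-updated copy bitmap and
    pushes the corresponding data from its own saved copy of the volume."""
    for grain in sorted(g for g, flag in copy_bitmap.items() if flag):
        send_to_third(grain, saved_copy[grain])

saved = {0: b"A", 1: b"B", 2: b"C"}  # second device's copy of the task data
remaining = {2: 1}                   # copy bitmap after applying the progress info
take_over(remaining, saved, lambda g, d: print(f"resend grain {g}: {d!r}"))
```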
  • Step 424: The first storage device recovers from the failure.
  • The second storage device may determine that the first storage device has recovered from the failure by means of a heartbeat signal or a notification from the third storage device.
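  • A possible shape for the heartbeat-based detection mentioned above, with an assumed timeout value; the document itself fixes no mechanism beyond the heartbeat signal or the third-device notification.

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds; an assumed threshold, not from this document

last_heartbeat = {"first": time.monotonic()}

def on_heartbeat(device: str) -> None:
    last_heartbeat[device] = time.monotonic()

def first_device_recovered(notified_by_third: bool = False) -> bool:
    """The second device treats the first as alive again either when
    heartbeats resume within the timeout or when the third device says so."""
    fresh = time.monotonic() - last_heartbeat["first"] < HEARTBEAT_TIMEOUT
    return fresh or notified_by_third

on_heartbeat("first")
print(first_device_recovered())  # True while heartbeats are fresh
```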
  • Step 425: The second storage device sends the copy progress information to the first storage device to indicate that the current replication task has been completed.
  • This copy progress information is similar to the copy progress information in step 417.
  • Here, the copy progress information is used to indicate that the current replication task has been completed.
  • Alternatively, the second storage device may send an indication message to the first storage device to indicate that the current replication task is completed.
  • Step 426: The first storage device deletes the copy bitmap corresponding to the current replication task.
  • After the replication task is completed, the first storage device may delete the copy bitmap corresponding to the task to save storage space.
  • Step 427: The second storage device deletes the copy bitmap corresponding to the current replication task.
  • Likewise, the second storage device may delete the copy bitmap corresponding to the task to save storage space.
  • In this way, when the first storage device fails, the second storage device can take its place and send the uncopied data of the current replication task to the third storage device, avoiding the resending of all the data corresponding to the task and improving replication efficiency.
  • FIG. 6 is a schematic structural diagram of a data replication apparatus 60 according to an embodiment of the present invention.
  • The data replication apparatus 60 includes: a receiving module 601, a determining module 602, and a sending module 603.
  • The receiving module 601 is configured to receive the copy progress information sent by the first storage device, where the copy progress information is used to indicate the data that the first storage device has copied to the third storage device.
  • The determining module 602 is configured to: when the first storage device is faulty, determine, according to the copy progress information and the data to be copied that is saved by the second storage device, the data that has not been copied among the data to be copied, where the data to be copied is the data that the first storage device is to copy to the third storage device.
  • The sending module 603 is configured to send the uncopied data to the third storage device.
  • Optionally, the receiving module 601 is configured to receive, from the first storage device, a message that includes a progress address field, where the progress address field is used to indicate the copy progress information.
  • Optionally, the second storage device saves copy data information, where the copy data information is used to obtain the data to be copied.
  • Optionally, the data replication apparatus 60 may further include a generating module 604, configured to generate the copy data information according to the copy start information when the copy start information sent by the first storage device is received.
  • Optionally, the receiving module 601 is further configured to receive the copy data information sent by the first storage device.
  • Optionally, the receiving module 601 is further configured to receive detection information from the third storage device, where the detection information is used to indicate that the first storage device is faulty.
  • the device provided by the embodiment of the present invention may be disposed in the controller of the second storage device described in the previous embodiment, and is used to execute the data replication method described in the foregoing embodiments.
  • For the function of each module, refer to the description in the method embodiments; details are not repeated here.
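  • As a structural illustration only, the three mandatory modules of apparatus 60 can be pictured as methods of one class. The message formats and helper names below are invented for the sketch; the document fixes only the module responsibilities.

```python
class DataReplicationApparatus:
    """Structural sketch of apparatus 60: one method per described module."""
    def __init__(self, to_copy: dict):
        self.to_copy = to_copy  # data the first device was to copy
        self.progress = set()   # addresses reported as already copied

    def receive(self, progress_addresses) -> None:
        """Receiving module 601: record the copy progress information."""
        self.progress.update(progress_addresses)

    def determine(self) -> dict:
        """Determining module 602: data to be copied minus data already copied."""
        return {a: d for a, d in self.to_copy.items() if a not in self.progress}

    def send(self, transport) -> None:
        """Sending module 603: push the uncopied data to the third device."""
        for address, data in sorted(self.determine().items()):
            transport(address, data)

dev = DataReplicationApparatus({0: b"A", 1: b"B"})
dev.receive([0])
dev.send(lambda a, d: print(f"send {a}: {d!r}"))  # only address 1 is resent
```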
  • In the embodiment of the present invention, the second storage device receives the copy progress information sent by the first storage device, where the copy progress information is used to indicate the data that the first storage device has copied to the third storage device. When the first storage device fails, the second storage device can determine, according to the copy progress information and the data to be copied that it saves, the data that has not been copied among the data to be copied, and send that uncopied data to the third storage device.
  • Because the second storage device can determine the data that has not been copied, resending all the data to be copied is avoided, which improves replication efficiency.
  • FIG. 7 is a schematic structural diagram of a data replication apparatus 70 according to an embodiment of the present invention.
  • The data replication apparatus 70 includes: a receiving module 701, a modifying module 702, a determining module 703, and a sending module 704.
  • The receiving module 701 is configured to receive the copy progress information sent by the first storage device, where the copy progress information is used to indicate the data that the first storage device has copied to the third storage device.
  • the modifying module 702 is configured to modify the first copy data information into the second copy data information according to the copy progress information.
  • The determining module 703 is configured to: when the first storage device fails, determine, according to the second copy data information and the data to be copied that is saved by the second storage device, the data that has not been copied among the data to be copied, where the data to be copied is the data that the first storage device is to copy to the third storage device.
  • The sending module 704 is configured to send the uncopied data to the third storage device.
  • Optionally, the receiving module 701 is configured to receive, from the first storage device, a message that includes a progress address field, where the progress address field is used to indicate the copy progress information.
  • The first copy data information is used to obtain the data that had not yet been copied to the third storage device before the copy progress information was received.
  • Optionally, the data replication apparatus 70 may further include a generating module 705, configured to generate the first copy data information according to the copy start information when the copy start information sent by the first storage device is received.
  • Optionally, the receiving module 701 is further configured to receive the first copy data information sent by the first storage device.
  • Optionally, the receiving module 701 is further configured to receive detection information from the third storage device, where the detection information is used to indicate that the first storage device is faulty.
  • the device provided by the embodiment of the present invention may be disposed in the controller of the second storage device described in the previous embodiment, and is used to execute the data replication method described in the foregoing embodiments.
  • For the function of each module, refer to the description in the method embodiments; details are not repeated here.
  • In the embodiment of the present invention, the second storage device receives the copy progress information sent by the first storage device, where the copy progress information is used to indicate the data that the first storage device has copied, and modifies the copy data information according to the copy progress information. Because the second storage device saves, in advance, the data that the first storage device is to copy to the third storage device, when the first storage device fails the second storage device can determine, according to the modified copy data information and that saved data, the data that has not been copied, and send it to the third storage device.
  • This prevents the first storage device from resending all the data to be copied after a failure, thereby improving replication efficiency.
  • FIG. 8 shows a storage device according to an embodiment of the present invention, including:
  • a processor 801, a memory 802, and a communication interface 803, which are connected through a system bus 805 and communicate with each other.
  • The processor 801 may be a single-core or multi-core central processing unit, or an application-specific integrated circuit, or one or more integrated circuits configured to implement the embodiments of the present invention.
  • The memory 802 may be a high-speed RAM memory or a non-volatile memory, for example, at least one hard disk memory.
  • The communication interface 803 is used to communicate with other storage devices.
  • The memory 802 is used to store computer execution instructions 8021. Specifically, the computer execution instructions 8021 may include program code.
  • When the processor 801 runs the computer execution instructions 8021, the method flow described in FIG. 2 can be performed.
  • FIG. 9 is a storage device according to an embodiment of the present invention, including:
  • a processor 901, a memory 902, and a communication interface 903, which are connected through a system bus 905 and communicate with each other.
  • The processor 901 may be a single-core or multi-core central processing unit, or an application-specific integrated circuit, or one or more integrated circuits configured to implement the embodiments of the present invention.
  • The memory 902 may be a high-speed RAM memory or a non-volatile memory, for example, at least one hard disk memory.
  • The communication interface 903 is used to communicate with other storage devices.
  • The memory 902 is used to store computer execution instructions 9021. Specifically, the computer execution instructions 9021 may include program code.
  • When the processor 901 runs the computer execution instructions 9021, the method flow described in the corresponding figure can be performed.
  • the disclosed apparatus and method may be implemented in other manners.
  • The apparatus embodiments described above are merely illustrative.
  • The division into modules is only a division by logical function; in actual implementation there may be other division manners.
  • For example, multiple modules or components may be combined or integrated into another device, or some features may be ignored or not executed.
  • The mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, devices, or modules, and may be in electrical, mechanical, or other forms.
  • The modules described as separate components may or may not be physically separate.
  • The components displayed as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network nodes. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • In addition, each functional module in the embodiments of the present invention may be integrated into one processing module, each module may exist physically separately, or two or more modules may be integrated into one module.
  • A person skilled in the art may understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium.
  • The storage medium mentioned may be a read-only memory, a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention concerns a data replication method, a data replication apparatus, and a storage device. The method comprises: receiving, by a second storage device, replication progress information sent by a first storage device, the replication progress information being used to indicate data that has already been replicated by the first storage device to a third storage device; when the first storage device fails, determining, according to the replication progress information and the data to be replicated that is saved by the second storage device, the data that has not been replicated among the data to be replicated, the data to be replicated being the data to be replicated by the first storage device to the third storage device; and sending the data that has not been replicated to the third storage device. The efficiency of data replication can thereby be improved.
PCT/CN2013/089173 2013-12-12 2013-12-12 Procédé de réplication de données, dispositif de réplication de données et dispositif de stockage WO2015085529A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201380002386.3A CN104363977A (zh) 2013-12-12 2013-12-12 数据复制方法、数据复制装置和存储设备
PCT/CN2013/089173 WO2015085529A1 (fr) 2013-12-12 2013-12-12 Procédé de réplication de données, dispositif de réplication de données et dispositif de stockage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2013/089173 WO2015085529A1 (fr) 2013-12-12 2013-12-12 Procédé de réplication de données, dispositif de réplication de données et dispositif de stockage

Publications (1)

Publication Number Publication Date
WO2015085529A1 true WO2015085529A1 (fr) 2015-06-18

Family

ID=52530953

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/089173 WO2015085529A1 (fr) 2013-12-12 2013-12-12 Procédé de réplication de données, dispositif de réplication de données et dispositif de stockage

Country Status (2)

Country Link
CN (1) CN104363977A (fr)
WO (1) WO2015085529A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110119329A (zh) * 2019-02-27 2019-08-13 咪咕音乐有限公司 数据复制容灾方法及容灾系统

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105159794A (zh) * 2015-08-18 2015-12-16 浪潮(北京)电子信息产业有限公司 镜像实现系统和方法
CN106855834B (zh) * 2015-12-08 2020-11-10 华为技术有限公司 一种数据备份方法、装置和系统
EP3427157B1 (fr) * 2016-03-09 2023-10-11 Alibaba Group Holding Limited Transmission inter-régionale de données
CN110019097B (zh) * 2017-12-29 2021-09-28 中国移动通信集团四川有限公司 虚拟逻辑副本管理方法、装置、设备及介质
CN108733513A (zh) * 2018-05-07 2018-11-02 杭州宏杉科技股份有限公司 一种数据更新方法及装置
CN108762984B (zh) * 2018-05-23 2021-05-25 杭州宏杉科技股份有限公司 一种连续性数据备份的方法及装置
CN109062735B (zh) * 2018-08-02 2022-04-26 郑州云海信息技术有限公司 一种存储系统的容灾方法、存储系统和相关装置
CN111143115A (zh) * 2018-11-05 2020-05-12 中国移动通信集团云南有限公司 基于备份数据的远程容灾方法及装置
CN109725855B (zh) * 2018-12-29 2023-09-01 杭州宏杉科技股份有限公司 一种连跳复制的方法及装置

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1642030A (zh) * 2004-01-05 2005-07-20 华为技术有限公司 一种网管双机容灾备份的实现方法
CN101635638A (zh) * 2008-07-25 2010-01-27 中兴通讯股份有限公司 一种容灾系统及其容灾方法
CN102938778A (zh) * 2012-10-19 2013-02-20 浪潮电子信息产业股份有限公司 一种在云存储中实现多节点容灾的方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100370761C (zh) * 2005-10-26 2008-02-20 华为技术有限公司 一种智能网业务控制设备容灾系统
CN101414946B (zh) * 2008-11-21 2011-11-16 上海爱数软件有限公司 一种远程数据备份方法及介质服务器
CN101808137B (zh) * 2010-03-29 2014-09-03 成都市华为赛门铁克科技有限公司 数据传输方法、装置和系统

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1642030A (zh) * 2004-01-05 2005-07-20 华为技术有限公司 一种网管双机容灾备份的实现方法
CN101635638A (zh) * 2008-07-25 2010-01-27 中兴通讯股份有限公司 一种容灾系统及其容灾方法
CN102938778A (zh) * 2012-10-19 2013-02-20 浪潮电子信息产业股份有限公司 一种在云存储中实现多节点容灾的方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110119329A (zh) * 2019-02-27 2019-08-13 咪咕音乐有限公司 数据复制容灾方法及容灾系统
CN110119329B (zh) * 2019-02-27 2024-02-23 咪咕音乐有限公司 数据复制容灾方法及容灾系统

Also Published As

Publication number Publication date
CN104363977A (zh) 2015-02-18

Similar Documents

Publication Publication Date Title
US11734306B2 (en) Data replication method and storage system
US11829607B2 (en) Enabling data integrity checking and faster application recovery in synchronous replicated datasets
JP6344798B2 (ja) データ送信方法、データ受信方法、及びストレージデバイス
WO2015085529A1 (fr) Procédé de réplication de données, dispositif de réplication de données et dispositif de stockage
US20090070528A1 (en) Apparatus, system, and method for incremental resynchronization in a data storage system
WO2023046042A1 (fr) Procédé de sauvegarde de données et groupement de bases de données
WO2015054897A1 (fr) Procédé de stockage de données, appareil de stockage de données, et dispositif de stockage
US20190317872A1 (en) Database cluster architecture based on dual port solid state disk
US11768624B2 (en) Resilient implementation of client file operations and replication
WO2019080370A1 (fr) Procédé et appareil de lecture et d'écriture de données, et serveur de stockage
JP2017208113A (ja) データ格納方法、データストレージ装置、及びストレージデバイス

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13899225; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 13899225; Country of ref document: EP; Kind code of ref document: A1)