CN107608623B - Asynchronous remote copying method, device and system - Google Patents

Publication number
CN107608623B
Authority
CN
China
Prior art keywords
data
volume
incremental
disk array
new
Legal status
Active
Application number
CN201610544024.2A
Other languages
Chinese (zh)
Other versions
CN107608623A (en)
Inventor
牛克强
戴庆军
秦长鹏
梁庆永
刘卯银
张翼
舒坦
Current Assignee
ZTE Corp
Original Assignee
ZTE Corp
Application filed by ZTE Corp
Priority to CN201610544024.2A
Publication of CN107608623A
Application granted
Publication of CN107608623B

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An embodiment of the invention discloses an asynchronous remote copy method, device, and system. The method is applied to a disk array of a local device, where the disk array is provided with an incremental data area in addition to a primary volume, and comprises the following steps: the primary volume receives a write request sent by a host; when the primary volume determines that the target data block is currently in an asynchronous remote copy process, the disk array sets the primary volume to an internal read-only state and writes the new data into a data block in the incremental data area; the disk array establishes an address mapping between the data block of the new data in the incremental data area and the target data block in the primary volume; during the asynchronous remote copy process, the disk array combines the data blocks in the primary volume and the data blocks in the incremental data area into difference data according to this address mapping, and sends the difference data to the slave volume.

Description

Asynchronous remote copying method, device and system
Technical Field
The present invention relates to data protection technologies, and in particular, to a method, an apparatus, and a system for asynchronous remote copy.
Background
Remote copy is a commonly used data protection method in the storage field: by performing remote copy operations between a local device and a remote device, multiple copies of the same data can be maintained. Remote copy comes in two types, synchronous and asynchronous; the local volume created on the local device is called the master (primary) volume, and the remote volume created on the remote device is called the slave volume.
Synchronous remote copy copies local data to the remote device in a fully synchronous manner: each Input/Output (I/O) operation on the local device must wait for a completion confirmation returned by the remote device. Sending data to the remote device therefore takes a certain time, which affects the performance of upper-layer applications on the local device, and the round-trip propagation introduces significant latency. As a result, synchronous remote copy is limited to applications where the local and remote devices are relatively close to each other.
Asynchronous remote copy only guarantees that a write completes in the local storage system before the data stored on the remote device is updated: as soon as the data of a write request is successfully written to the primary volume of the local device, a success confirmation can be returned to the upper-layer application, and the remote data copy proceeds in a background synchronization mode. It therefore has little impact on the system performance of the local device, supports long transmission distances, and places low demands on network bandwidth.
Current asynchronous remote copy is implemented as snapshot-based block-level replication. The principle is as follows: when a remote copy synchronization cycle starts, the master volume and the slave volume each create a snapshot. The local device reads data from the primary-volume snapshot and sends it to the slave volume; the snapshot guarantees that the primary-volume data read during synchronization is consistent, so synchronization can proceed concurrently with writes from the local device's host (hereafter simply "the host") to the primary volume. The slave-volume snapshot backs up the slave volume's data before synchronization starts, preventing the slave volume's data from becoming unusable if the synchronization process fails.
When the primary volume receives write IO data, Copy-On-Write (COW) processing is sometimes required, which causes the processing latency of write IO data to vary and leads to a certain amount of performance fluctuation.
Disclosure of Invention
To solve the foregoing technical problems, embodiments of the present invention provide a method, an apparatus, and a system for asynchronous remote copy that enable a fast response to host write data.
The technical scheme of the invention is realized as follows:
In a first aspect, an embodiment of the present invention provides an asynchronous remote copy method, which is applied to a disk array of a local device, where the disk array is provided with an incremental data area in addition to a primary volume, and the method includes:
the primary volume receives a write request sent by the host; the write request comprises new data to be written and a target data block of the primary volume into which the new data is written.
When the primary volume determines that the target data block is currently in the asynchronous remote copy process, the disk array sets the primary volume to an internal read-only state and writes the new data into a data block in the incremental data area;
the disk array establishes an address mapping between the data block of the new data in the incremental data area and the target data block in the primary volume;
during the asynchronous remote copy process, the disk array combines the data blocks in the primary volume and the data blocks in the incremental data area into difference data according to the address mapping of the new data between the incremental data area and the primary volume, and sends the difference data to the slave volume.
In the above scheme, the incremental data area specifically uses a high-speed storage medium; the high-speed storage medium includes a Solid State Disk (SSD).
In the foregoing solution, the address mapping of the new data between the data block in the incremental data area and the data block in the primary volume specifically includes:
a correspondence among the identifier of the target data block in the primary volume into which the new data is written, the start address of the data to be updated within the target data block, and the length of the new data.
In the foregoing solution, the writing, by the disk array, of the new data into the data block in the incremental data area specifically includes:
the disk array sorting the newly written data together with the data already in the incremental data area into write order.
In the above solution, the disk array may further include a history data area, and accordingly, the method further includes:
when a new data block in the incremental data area is written to the primary volume, saving the historical data in the primary volume to the history data area, and then copying the new data block to the corresponding data block in the primary volume.
In the above scheme, saving the historical data in the primary volume to the history data area and then copying the new data block to the corresponding data block in the primary volume includes:
writing the new data to the incremental data area;
transferring the new data block in which the new data is stored in the incremental data area to the primary volume;
the primary volume copying the original data block replaced by the new data block to the history data area;
and the primary volume storing the new data block to the corresponding data block.
In the above aspect, the method further includes:
modifying the difference identifier of the data block to sector level, wherein the corresponding difference data is the new data in the incremental data area.
In a second aspect, an embodiment of the present invention provides a disk array, where the disk array is disposed in a local device, and the disk array includes: the device comprises a main volume, an incremental data area, a receiving module, a writing module, a mapping module and a sending module; wherein,
the receiving module is configured to receive a write request sent by the host to the primary volume; the write request comprises new data to be written and a target data block of the primary volume into which the new data is written.
The write-in module is configured to set the primary volume to an internal read-only state when it is determined that the current target data block is in the asynchronous remote copy process, and write the new data into the data block in the incremental data area;
the mapping module is used for establishing an address mapping between the data block of the new data in the incremental data area and the target data block in the primary volume;
and the sending module is used for combining the data blocks in the primary volume and the data blocks in the incremental data area into difference data according to the address mapping of the new data between the incremental data area and the primary volume, and sending the difference data to the slave volume.
In the above scheme, the incremental data area specifically uses a high-speed storage medium; the high-speed storage medium includes a Solid State Disk (SSD).
In the foregoing solution, the address mapping of the new data between the data block in the incremental data area and the data block in the primary volume specifically includes:
a correspondence among the identifier of the target data block in the primary volume into which the new data is written, the start address of the data to be updated within the target data block, and the length of the new data.
In the foregoing scheme, the writing module is specifically configured to sort the newly written data together with the data already in the incremental data area into write order.
In the above solution, the disk array further includes a history data area, configured to save the historical data of the primary volume when a new data block in the incremental data area is written to the primary volume;
the writing module is further configured to copy the new data block to the corresponding data block in the primary volume.
In the foregoing solution, the writing module is configured to:
write the new data to the incremental data area;
and transfer the new data block in which the new data is stored in the incremental data area to the primary volume;
the history data area is configured to receive the original data block of the primary volume that is to be replaced by the new data block;
and the primary volume is configured to store the new data block to the corresponding data block.
In the above solution, the incremental data area is configured to modify the difference identifier of the data block to a sector level;
accordingly, the difference data is the new data in the incremental data area.
In a third aspect, an embodiment of the present invention provides an asynchronous remote copy system, where the system includes a local device and a slave volume of a remote device; the local device includes a host and a disk array; the disk array is provided with an incremental data area in addition to a primary volume;
the primary volume is used for receiving a write request sent by the host; the write request comprises new data to be written and a target data block of the primary volume into which the new data is written.
The primary volume is used for determining that the target data block is in an asynchronous remote copy process currently;
the disk array is used for setting the main volume to be in an internal read-only state and writing the new data into a data block in the incremental data area;
the disk array is used for establishing an address mapping between the data block of the new data in the incremental data area and the target data block in the primary volume;
and the disk array is used for combining, during the asynchronous remote copy process, the data blocks in the primary volume and the data blocks in the incremental data area into difference data according to the address mapping of the new data between the incremental data area and the primary volume, and sending the difference data to the slave volume.
By the above schemes, during asynchronous remote copy, when the primary volume receives a write request, the new data is written into the incremental data area, and an address mapping is established between the data blocks in the incremental data area and the data blocks in the primary volume. Copy-On-Write (COW) is therefore not needed during asynchronous remote copy, which avoids the host write latency caused by COW, improves host write performance, and allows the host write data to be responded to quickly.
Drawings
FIG. 1 is a schematic diagram of a current block-level asynchronous remote replication based on snapshots;
fig. 2 is a schematic flowchart of a method for asynchronous remote copy according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a mapping table according to an embodiment of the present invention;
FIG. 4 is a flow chart illustrating a sequential write operation according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a historical data saving process according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of asynchronous remote copy according to an embodiment of the present invention;
FIG. 7 is a flow chart illustrating an asynchronous remote copy according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a disk array structure of a local device according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a disk array structure located in a local device according to another embodiment of the present invention;
fig. 10 is a schematic structural diagram of a system for asynchronous remote copy according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Referring to FIG. 1, the current snapshot-based block-level asynchronous remote copy flow may include:
S11: the host issues a write request to the primary volume;
S12: the primary volume returns the write result to the host;
when the remote copy synchronization cycle starts, the snapshots of the master volume and the slave volume are activated;
at this time, step S13 is executed: the primary-volume snapshot copies data to the slave volume;
S14: after the slave volume finishes writing the data, a result is returned to the master-volume snapshot.
Note that when the host writes data block A to the primary volume, no COW is performed for data block A at that point; however, if a new data block B is subsequently written over data block A, a COW process is triggered. As shown in fig. 1, the flow further includes:
S15: the host writes data block A to the primary volume; note that after data block A is written, no COW is performed for it;
then, if a new data block B is to be written at the location of data block A, e.g., LBAx, step S16 must be executed: data block A is copied into the snapshot;
S17: data block B is written to the location of data block A.
Thus, although updating data block A to data block B is, from the host's perspective, a single write, the triggered COW turns it into: synchronously read data block A, write data block A into the snapshot data area, and finally write the new data block B into the primary volume. In other words, one host write becomes one read plus two writes. In the current snapshot-based block-level asynchronous remote copy mechanism, the latency of the COW copy performed on the written data block is therefore added to the data write path, degrading host write performance and increasing host write latency.
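The read-once, write-twice cost described above can be illustrated with a small sketch (hypothetical Python, not part of the patent; the dictionaries stand in for the primary volume, the snapshot area, and the incremental data area):

```python
# Illustrative IO-count comparison; data structures are simplified stand-ins.

def cow_write(primary, snapshot, block_id, new_block):
    """Snapshot-based path (fig. 1): one host write becomes
    1 read + 2 writes performed inline before the host is answered."""
    old = primary[block_id]        # synchronously read data block A
    snapshot[block_id] = old       # write A into the snapshot data area
    primary[block_id] = new_block  # write the new data block B
    return 3                       # inline IO operations on the write path

def incremental_write(incremental, block_id, new_block):
    """Incremental-data-area path (this scheme): a single inline write;
    flushing to the primary volume happens asynchronously later."""
    incremental[block_id] = new_block
    return 1
```

Under this sketch, the snapshot path performs three inline IOs per host write while the incremental-area path performs one, which is the latency gap the embodiments aim to remove.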
In view of the scheme shown in fig. 1, the basic idea of the embodiments of the present invention is as follows: an incremental data area is added on the local device; during asynchronous remote copy, the primary volume is set to read-only, data written by the host goes directly into the incremental data area, and a mapping is established between the written data and the primary-volume address. The host write latency caused by COW during asynchronous remote copy is thereby avoided.
Based on the above basic idea, the following embodiments of the present invention are proposed.
Example one
Referring to fig. 2, a method for asynchronous remote copy provided by an embodiment of the present invention is shown; it is applied to a disk array of a local device, where the disk array is provided with an incremental data area in addition to a primary volume, and the method includes:
S201: the primary volume receives a write request sent by a host;
the write request includes new data to be written and a target data block of the primary volume into which the new data is written.
S202: when the primary volume determines that the target data block is currently in the asynchronous remote copy process, the disk array sets the primary volume to an internal read-only state and writes the new data into a data block in the incremental data area;
S203: the disk array establishes an address mapping between the data block of the new data in the incremental data area and the target data block in the primary volume;
S204: during the asynchronous remote copy process, the disk array combines the data blocks in the primary volume and the data blocks in the incremental data area into difference data according to the address mapping of the new data between the incremental data area and the primary volume, and sends the difference data to the slave volume.
For the above scheme, it should be noted that after the disk array writes the new data into a data block in the incremental data area, a write-success response can be returned to the host immediately, and the data in the incremental data area can later be flushed to the primary volume asynchronously; the two-step operation that must complete synchronously in the flow of fig. 1 thus becomes an asynchronous operation, improving host write performance and reducing host write latency.
It should be further noted that, after the primary volume receives the write request sent by the host, if it is determined that the current target data block is not in the asynchronous remote copy process, the new data is directly written into the primary volume.
Through the scheme shown in fig. 2, during asynchronous remote copy, when the primary volume receives a write request, the new data is written into the incremental data area, and an address mapping is established between the data block of the new data in the incremental data area and the target data block in the primary volume; COW is therefore not needed during asynchronous remote copy, which avoids the host write latency caused by COW and improves host write performance.
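Steps S201-S204 can be sketched as follows; this is a hypothetical Python illustration (class and attribute names are ours, not from the patent), with dictionaries standing in for the volumes:

```python
class DiskArray:
    """Minimal sketch of the S201-S204 write path of the embodiment."""

    def __init__(self):
        self.primary = {}               # block_id -> data (primary volume)
        self.incremental = {}           # block_id -> data (incremental data area)
        self.mapping = {}               # incremental block -> primary target block
        self.copy_in_progress = set()   # blocks currently being replicated

    def write(self, block_id, new_data):
        """S201/S202: route a host write received during asynchronous remote copy."""
        if block_id in self.copy_in_progress:
            # Primary volume is internally read-only for this block, so the
            # write lands in the incremental data area -- no COW is needed.
            self.incremental[block_id] = new_data
            self.mapping[block_id] = block_id   # S203: address mapping
        else:
            self.primary[block_id] = new_data   # block not under replication
        return "write-success"                  # fast response to the host

    def difference_data(self):
        """S204: form the difference data sent to the slave volume by resolving
        each mapped incremental block to its primary-volume target."""
        return {target: self.incremental[src]
                for src, target in self.mapping.items()}
```

The key property is that `write` never reads or copies the old block inline; the old contents stay in the primary volume until the asynchronous flush.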
Based on the scheme shown in fig. 2, the incremental data area may use a high-speed storage medium such as a Solid State Drive (SSD) instead of a conventional mechanical hard disk. Although the per-GB cost of an SSD is much higher than that of a mechanical disk, the incremental data area occupies relatively little space, so the substitution is affordable; the SSD receives written data quickly and the write operation returns as fast as possible, further improving host write performance during asynchronous remote copy. To the user, host write performance is then effectively that of the SSD.
For the technical solution shown in fig. 2, the address mapping of the new data between the data block in the incremental data area and the data block in the primary volume may specifically include, as shown in fig. 3:
a correspondence among the identifier of the target data block in the primary volume into which the new data is written, the start address of the data to be updated within the target data block, and the length (i.e., the offset) of the new data.
As an example, in a specific implementation the correspondence may be stored in the form of the mapping table shown in fig. 3; this embodiment does not specifically limit this.
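A mapping-table row of the kind shown in fig. 3 might be represented as follows (a hypothetical sketch; the field names and values are ours, chosen only to illustrate the three-part correspondence):

```python
from dataclasses import dataclass

@dataclass
class MappingEntry:
    """One row of a fig. 3-style mapping table."""
    target_block_id: int  # identifier of the target data block in the primary volume
    start_address: int    # start address of the data to be updated in that block
    length: int           # length (offset) of the new data

# Keyed by the data block in the incremental data area that holds the new data.
mapping_table = {0: MappingEntry(target_block_id=12, start_address=1024, length=512)}
```

Looking up an incremental-area block in this table yields exactly where its contents belong in the primary volume, which is all that steps S203/S204 require.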
Preferably, after a host write request completes, to facilitate data queries and the next data write request, the technical solution may further include: sorting the newly written data together with the data already in the incremental data area into write order, which makes data storage more orderly and data operations simpler.
Specifically, this achieves a fast response to host data writes while allowing data written randomly by the host to be written out as sequentially as possible after reordering in the incremental data area; especially in database scenarios with random small IO, this effectively improves the efficiency of flushing the incremental data area. The detailed process, with reference to fig. 4:
S41: the incremental data area receives a write request from the host;
S42: the incremental data area responds to the write request quickly;
S43: after a preset time, the incremental data area sequentially flushes the data randomly written by the host to the data area of the primary volume.
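Steps S41-S43 can be sketched as follows (hypothetical Python; the `pending` dictionary stands in for the incremental data area, and sorting by address models the sequential refresh):

```python
class IncrementalArea:
    """Buffers random host writes, acknowledges quickly, flushes sequentially."""

    def __init__(self, primary):
        self.primary = primary   # address -> data (primary volume data area)
        self.pending = {}        # address -> data (buffered random writes)

    def write(self, address, data):
        """S41/S42: accept the write and respond to the host immediately."""
        self.pending[address] = data
        return "ack"

    def flush(self):
        """S43: after the preset time, replay the buffered writes to the
        primary volume in ascending address order, turning random small IO
        into a sequential refresh."""
        for address in sorted(self.pending):
            self.primary[address] = self.pending[address]
        self.pending.clear()
```

Because the flush iterates addresses in sorted order, writes that arrived randomly reach the primary volume as one ascending sweep.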
For the technical solution shown in fig. 2, the disk array may further include a history data area, and accordingly, the technical solution of this embodiment may further include:
when a new data block in the incremental data area is written to the primary volume, saving the historical data in the primary volume to the history data area, and then copying the new data block to the corresponding data block in the primary volume.
It will be appreciated that the historical data are the original data blocks replaced by new data blocks. By merging the incremental data areas of different time periods with the slave volume, a replica volume at a particular point in time can be generated; this enables restoration to any point in time without a replica having been generated from the master node in advance.
In a specific implementation process, as shown in fig. 5, the foregoing may include:
S51: the new data is written into the incremental data area;
S52: the new data block in which the new data is stored in the incremental data area is transferred to the primary volume;
S53: the primary volume copies the original data block replaced by the new data block to the history data area;
S54: the primary volume stores the new data block to the corresponding data block.
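Steps S51-S54 can be sketched as follows (hypothetical Python; the dictionaries stand in for the incremental data area, the primary volume, and the history data area):

```python
def flush_with_history(incremental, primary, history):
    """S51-S54 sketch: before a new data block from the incremental area
    replaces a primary-volume block, the original block is preserved in
    the history data area, enabling later point-in-time reads."""
    for block_id, new_block in incremental.items():
        if block_id in primary:
            # S53: copy the original data block to the history data area.
            history.setdefault(block_id, []).append(primary[block_id])
        primary[block_id] = new_block  # S54: store the new data block
    incremental.clear()
```

Keeping a list per block in `history` models successive generations of replaced data, which is what makes restoration to earlier points in time possible.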
Correspondingly, when data at a certain point in time needs to be accessed, unchanged data blocks are read directly from the primary volume, while changed blocks that were copied out are read from the history data area. When the latest data needs to be accessed, the metadata is consulted: if the incremental data area holds a corresponding data block, it is read from there preferentially; otherwise it is read from the primary volume. The SSD thus speeds up access to hot-spot data.
With respect to step S204 in the technical solution shown in fig. 2, it should be noted that when remote copy starts, the master volume sends difference data, composed of incremental-data-area data and primary-volume data, to the slave volume; this involves the choice of data-block granularity. As shown in the schematic diagram of fig. 6, with one 512-byte sector as the unit, assume the data-block size is set to 4K, occupying 8 sectors. The host has newly written only 2 sectors of actual data, so only two sectors have actually changed; however, snapshot-based block-level asynchronous remote copy marks differences at the 4K data-block granularity when sending difference data, so a completing read is required: in this example, the six sectors 1, 2, 3, 4, 5 and 7 are read from the data volume and combined with sectors 6 and 8 from the incremental data area into one data block that is sent to the slave volume. The extra completing read hurts asynchronous copy performance, and transmitting the whole data block, old data included, over the link increases its load. Therefore, in the embodiment of the present invention, step S203 may further include:
modifying the difference identifier of the data block to sector level, so that the corresponding difference data is the new data in the incremental data area. Specifically, taking fig. 6 as an example, the flash translation layer of the SSD is modified in the SSD acceleration system to keep the metadata small, so that the metadata difference identifier can be at sector level; only the changed data then needs to be transmitted. In fig. 6, for instance, only 2 sectors need to be transmitted, reducing the number of bytes actually sent and thus the amount of data transmitted on the actual network, i.e., the bandwidth occupied.
Example two
As a specific illustration of the above embodiment, as shown in fig. 7, suppose the data block in the primary volume contains data A and data B, and a host write request updates data B to data C.
At this time, as shown in the left diagram of fig. 7, the conventional snapshot-based block-level asynchronous remote copy flow may include:
S71: when the primary volume has established a snapshot, data AB in snapshot state 1 is a pointer to AB in the primary volume at this point;
S72: when the primary volume receives its first write request after the snapshot is established, the entire data block containing data A and data B is first copied to the storage area of the snapshot area, i.e., snapshot state 2;
S73: data C is written to the position in the data block where data B was originally stored.
Compared with the left diagram of fig. 7, under the technical solution of the above embodiment (right diagram of fig. 7), data C is written directly into the incremental data area, and then an address mapping is established between data A in the primary volume and data C in the incremental data area, forming the updated data index for the host. When the host needs to access the updated data, data A in the primary volume and data C in the incremental data area are read and returned; from the host's perspective, the data has been updated. Meanwhile, the data block in the primary volume has not actually been modified, and the historical data B is fully preserved.
EXAMPLE III
Based on the same technical concept of the foregoing embodiment, referring to fig. 8, a disk array 80 located in a local device according to an embodiment of the present invention is shown, where the disk array 80 may include: a primary volume 801, an incremental data area 802, a receiving module 803, a writing module 804, a mapping module 805, and a sending module 806; wherein,
the receiving module 803 is configured to receive a write request sent by the host to the primary volume 801; the write request includes new data to be written and a target data block of the primary volume 801 to which the new data is written.
The write module 804 is configured to, when it is determined that the current target data block is in the asynchronous remote copy process, set the primary volume 801 to an internal read-only state, and write the new data into the data block in the incremental data area 802;
the mapping module 805 is configured to establish an address mapping between a data block in the incremental data area 802 and a data block in the primary volume 801 for the new data;
the sending module 806 is configured to combine the data blocks in the primary volume 801 and the data blocks in the incremental data area 802 into difference data according to the address mapping of the new data between the incremental data area 802 and the primary volume 801, and send the difference data to the slave volume.
In the above solution, the incremental data area 802 specifically uses a high-speed storage medium; the high-speed storage medium comprises a Solid State Disk (SSD).
In the above scheme, the address mapping of the new data between the data block in the incremental data area 802 and the data block in the primary volume 801 specifically includes:
a correspondence among the identifier of the target data block in the primary volume 801 into which the new data is written, the start address of the data to be updated within the target data block, and the length of the new data.
In the above scheme, the writing module 804 is specifically configured to sort the newly written data together with the data already in the incremental data area 802 into write order.
In the above solution, referring to fig. 9, the disk array 80 further includes a history data area 807 for saving the history data in the main volume 801 when writing the new data blocks of the incremental data area 802 into the main volume 801,
the writing module 804 is further configured to copy the new data block to a corresponding data block in the primary volume 801.
In the foregoing scheme, the writing module 804 is configured to:
write the new data to the incremental data area 802;
and transfer the new data block in which the new data is stored in the incremental data area 802 to the primary volume 801;
the history data area 807 is configured to receive the original data block of the primary volume 801 that is to be replaced by the new data block;
and the primary volume 801 is configured to store the new data block to the corresponding data block.
In the above solution, the incremental data area 802 is configured to record the difference identifiers of the data blocks at sector granularity;
accordingly, the difference data is exactly the new data in the incremental data area 802.
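A minimal sketch of the history-preserving write-back flow described above, under the simplifying assumption that the volumes and areas are modeled as block dictionaries (all names hypothetical): before a new data block from the incremental data area overwrites the primary volume, the original block is first copied into the history data area.

```python
def write_back(delta_area: dict, primary: dict, history: dict, block_id: int) -> None:
    """Move one new data block from the incremental data area into the
    primary volume, saving the original block in the history data area first.
    """
    new_block = delta_area.pop(block_id)       # new data block to transfer
    if block_id in primary:
        history[block_id] = primary[block_id]  # save the historical data first
    primary[block_id] = new_block              # then store the new data block

# Example: block 7 currently holds b"old" in the primary volume.
delta = {7: b"new"}
primary = {7: b"old"}
history = {}
write_back(delta, primary, history, 7)
# primary[7] == b"new", history[7] == b"old", delta is now empty
```

The ordering matters: copying the original block to the history area before overwriting it is what keeps a consistent prior version available if the write-back is interrupted.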
Example four
Based on the same technical concept as the foregoing embodiments, referring to fig. 10, a system 100 for asynchronous remote replication provided by an embodiment of the present invention is shown, where the system 100 includes a local device 110 and a slave volume 120 of a remote device; the local device 110 includes a host 1101 and a disk array 1102; in addition to the primary volume 11021, the disk array 1102 is provided with an incremental data area 11022;
the primary volume 11021 is configured to receive a write request sent by the host 1101; the write request includes the new data to be written and the target data block in the primary volume 11021 into which the new data is to be written.
The primary volume 11021 is used to determine that the target data block is currently in an asynchronous remote copy process;
the disk array 1102 is configured to set the primary volume 11021 to an internal read-only state, and write the new data to a data block in the incremental data area 11022;
the disk array 1102 is configured to establish an address mapping between data blocks of the new data in the incremental data area 11022 and data blocks of the primary volume 11021;
the disk array 1102 is configured to, during the asynchronous remote copy process, combine the data blocks in the primary volume 11021 and the data blocks in the incremental data area 11022 into difference data according to the address mapping of the new data between the data blocks in the incremental data area 11022 and the data blocks in the primary volume 11021, and send the difference data to the slave volume 120.
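Purely as an illustrative sketch (all names hypothetical, and the mapping simplified to whole-block granularity), the end-to-end handling of a host write during a copy cycle can be summarized as: redirect the write to the incremental data area, record the mapping, and later assemble the difference data from the mapped blocks.

```python
def handle_write(primary, delta_area, mapping, block_id, data, in_copy_cycle):
    """Handle a host write to the primary volume (volumes modeled as dicts).

    During an asynchronous remote copy cycle the primary volume is treated as
    internally read-only: the new data is redirected to the incremental data
    area and an address mapping back to the target block is recorded.
    """
    if in_copy_cycle:
        delta_area[block_id] = data   # redirect the write to the incremental data area
        mapping[block_id] = block_id  # map incremental block -> primary target block
    else:
        primary[block_id] = data      # normal write path

def build_difference_data(primary, delta_area, mapping):
    """Combine the mapped incremental blocks into the difference data to send
    to the slave volume (at whole-block granularity, this is the new data)."""
    return {target: delta_area[source] for source, target in mapping.items()}

# Example cycle: block 2 is written while a copy cycle is in progress.
primary, delta, mapping = {2: b"old"}, {}, {}
handle_write(primary, delta, mapping, 2, b"new", in_copy_cycle=True)
diff = build_difference_data(primary, delta, mapping)
# diff == {2: b"new"}; the primary volume still holds the copy-consistent b"old"
```

Note how the primary volume remains unchanged for the duration of the cycle, which is what keeps the data being replicated to the slave volume self-consistent.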
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (15)

1. A method for asynchronous remote copy, applied to a disk array of a local device, wherein the disk array is provided with an incremental data area in addition to a primary volume, the method comprising:
the primary volume receives a write request sent by a host; the write request comprises the new data to be written and the target data block in the primary volume into which the new data is to be written;
when the primary volume determines that the current target data block is in the asynchronous remote copy process, the disk array sets the primary volume to an internal read-only state and writes the new data into a data block in the incremental data area;
the disk array establishes an address mapping between the data block of the new data in the incremental data area and the data block in the primary volume;
in the asynchronous remote copy process, the disk array combines the data blocks in the primary volume and the data blocks in the incremental data area into difference data according to the address mapping of the new data between the data blocks in the incremental data area and the data blocks in the primary volume, and sends the difference data to a slave volume.
2. The method according to claim 1, wherein the incremental data area is a high-speed storage medium; the high-speed storage medium comprises a Solid State Disk (SSD).
3. The method according to claim 1, wherein the address mapping of the new data between the data block in the incremental data area and the data block in the primary volume specifically comprises:
a correspondence among the identifier of the target data block in the primary volume into which the new data is written, the start address of the data to be updated within the target data block, and the length of the new data.
4. The method according to claim 1, wherein the writing, by the disk array, of the new data into the data block in the incremental data area specifically comprises:
the disk array arranging the new data in the incremental data area according to the order in which the writes arrive.
5. The method of claim 1, wherein the disk array further comprises a history data area, and accordingly the method further comprises:
when a new data block in the incremental data area is written into the primary volume, saving the historical data of the primary volume into the history data area, and then copying the new data block to the corresponding data block in the primary volume.
6. The method according to claim 5, wherein the saving the historical data of the primary volume into the history data area and then copying the new data block to the corresponding data block in the primary volume comprises:
writing the new data into the incremental data area;
transferring the new data block stored in the incremental data area to the primary volume;
the primary volume copying the original data block to be replaced by the new data block into the history data area;
and the primary volume storing the new data block into the corresponding data block.
7. The method of claim 1, further comprising:
recording the difference identifiers of the data blocks at sector granularity, in which case the corresponding difference data is the new data in the incremental data area.
8. A disk array, wherein the disk array is provided in a local device, the disk array comprising: the device comprises a main volume, an incremental data area, a receiving module, a writing module, a mapping module and a sending module; wherein,
the receiving module is configured to receive a write request sent by a host to the primary volume; the write request comprises the new data to be written and the target data block in the primary volume into which the new data is to be written;
the write module is configured to, when it is determined that the current target data block is in the asynchronous remote copy process, set the primary volume to an internal read-only state and write the new data into a data block in the incremental data area;
the mapping module is configured to establish an address mapping between the data block of the new data in the incremental data area and the data block in the primary volume;
and the sending module is configured to combine the data blocks in the primary volume and the data blocks in the incremental data area into difference data according to the address mapping of the new data between the data blocks in the incremental data area and the data blocks in the primary volume, and send the difference data to a slave volume.
9. The disk array of claim 8, wherein the incremental data area is a high-speed storage medium; the high-speed storage medium comprises a Solid State Disk (SSD).
10. The disk array of claim 8, wherein the address mapping of the new data between the data block in the incremental data area and the data block in the primary volume specifically comprises:
a correspondence among the identifier of the target data block in the primary volume into which the new data is written, the start address of the data to be updated within the target data block, and the length of the new data.
11. The disk array of claim 8, wherein the write module is specifically configured to arrange the new data in the incremental data area according to the order in which the writes arrive.
12. The disk array of claim 8, wherein the disk array further comprises a history data area for saving the historical data of the primary volume when a new data block in the incremental data area is written into the primary volume,
the write module being further configured to copy the new data block to the corresponding data block in the primary volume.
13. The disk array of claim 12, wherein the write module is configured to:
write the new data into the incremental data area;
and transfer the new data block stored in the incremental data area to the primary volume;
the history data area is configured to receive the original data block of the primary volume that is to be replaced by the new data block;
and the primary volume is configured to store the new data block into the corresponding data block.
14. The disk array of claim 8, wherein the incremental data area is configured to record the difference identifiers of the data blocks at sector granularity;
accordingly, the difference data is the new data in the incremental data area.
15. A system for asynchronous remote replication, the system comprising a local device and a slave volume of a remote device; the local device comprises a host and a disk array; the disk array is provided with an incremental data area in addition to a primary volume;
the primary volume is used for receiving a write request sent by the host; the write request comprises new data to be written and a target data block of the primary volume into which the new data is written.
The primary volume is used for determining that the target data block is in an asynchronous remote copy process currently;
the disk array is used for setting the main volume to be in an internal read-only state and writing the new data into a data block in the incremental data area;
the disk array is used for establishing address mapping between data blocks of the new data in the incremental data area and data blocks in the main volume;
and the disk array is used for forming difference data by the data blocks in the main volume and the data blocks in the incremental data area according to the address mapping between the data blocks in the incremental data area and the data blocks in the main volume of the new data in the asynchronous remote copying process, and sending the difference data to the slave volume.
CN201610544024.2A 2016-07-11 2016-07-11 Asynchronous remote copying method, device and system Active CN107608623B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610544024.2A CN107608623B (en) 2016-07-11 2016-07-11 Asynchronous remote copying method, device and system


Publications (2)

Publication Number Publication Date
CN107608623A CN107608623A (en) 2018-01-19
CN107608623B true CN107608623B (en) 2021-08-31

Family

ID=61054912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610544024.2A Active CN107608623B (en) 2016-07-11 2016-07-11 Asynchronous remote copying method, device and system

Country Status (1)

Country Link
CN (1) CN107608623B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111858162A (en) * 2020-06-29 2020-10-30 苏州浪潮智能科技有限公司 Data processing method, system and device in remote copy
CN115098044B (en) * 2022-07-29 2024-10-22 苏州浪潮智能科技有限公司 Lifting period asynchronous remote copying method, system and host

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060069888A1 (en) * 2004-09-29 2006-03-30 International Business Machines (Ibm) Corporation Method, system and program for managing asynchronous cache scans
CN103729301A (en) * 2013-12-31 2014-04-16 深圳市安云信息科技有限公司 Data processing method and device
CN104102554A (en) * 2013-04-07 2014-10-15 杭州信核数据科技有限公司 Data backup method and data backup device
CN104407936A (en) * 2014-11-18 2015-03-11 华为数字技术(成都)有限公司 Data snapshot method and device
CN104407933A (en) * 2014-10-31 2015-03-11 华为技术有限公司 Data backup method and device
CN105302665A (en) * 2015-10-12 2016-02-03 深圳市安云信息科技有限公司 Improved copy-on-write snapshot method and system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant