WO2015010327A1 - Data sending method, data receiving method and storage device - Google Patents
Data sending method, data receiving method and storage device
- Publication number
- WO2015010327A1 (PCT/CN2013/080203)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- storage device
- data
- written
- address information
- time slice
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2094—Redundant storage or storage space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/065—Replication mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2064—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring while ensuring consistency
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1448—Management of the data involved in backup or backup restore
- G06F11/1451—Management of the data involved in backup or backup restore by selection of backup contents
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1458—Management of the backup or restore process
- G06F11/1464—Management of the backup or restore process for networked environments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2071—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
- G06F11/2074—Asynchronous techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2097—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements maintaining the standby controller/processing unit updated
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/82—Solving problems relating to consistency
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/84—Using snapshots, i.e. a logical point-in-time copy of the data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/885—Monitoring specific for caches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1032—Reliability improvement, data loss prevention, degraded operation etc
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/46—Caching storage objects of specific type in disk cache
- G06F2212/466—Metadata, control data
Definitions
- The present invention relates to storage technologies, and in particular, to a data sending method, a data receiving method, and a storage device.
- BACKGROUND Data disaster tolerance, also known as remote data replication technology, refers to establishing an off-site data system that holds an available copy of the local data. In the event of a disaster that destroys the local data or even the entire application system, at least one usable copy of the critical business data is preserved off-site.
- A typical data disaster recovery system includes a production center and a disaster recovery center.
- In the production center, hosts and storage arrays are deployed for normal service operations.
- In the disaster recovery center, hosts and storage arrays are deployed to take over the services after a disaster occurs in the production center.
- The storage array of the production center or the disaster recovery center includes multiple data volumes; a data volume is a logical storage space mapped from physical storage space. After data generated by the production center's services is written to the production array, it can be copied to the disaster recovery center through the DR link and written to the disaster recovery array. To ensure that the disaster recovery center's data can support service takeover after a disaster occurs, the data copied to the disaster recovery array must be consistent.
- Ensuring data consistency essentially means that some write requests depend on others, and those dependencies must be preserved.
- Applications, operating systems, and databases all rely on this write-request dependency logic to run their services. For example: write data request 1 is issued first and then write data request 2; the order is fixed. That is, the system ensures that write data request 2 is sent only after write data request 1 has completely returned successfully. This makes it possible to rely on a known method to recover the service when a failure interrupts execution. Otherwise, a situation may occur in which, when reading data, the data stored by write data request 2 can be read but the data stored by write data request 1 cannot, which makes the service unrecoverable.
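- For illustration, the dependency described above can be sketched as follows (a hypothetical Python sketch, not part of the claimed method; `OrderedWriter` and the dict-backed volume are illustrative stand-ins):

```python
# Hypothetical sketch: a dependent write is issued only after the
# previous write has returned successfully, so a failure can never
# leave request 2's data persisted without request 1's.
class OrderedWriter:
    def __init__(self, storage):
        self.storage = storage  # dict used as a stand-in for a data volume

    def write(self, address, data):
        self.storage[address] = data
        return True  # acknowledgement: the write has completed

volume = {}
writer = OrderedWriter(volume)
ack1 = writer.write("lba_100", "data of request 1")
if ack1:  # request 2 is sent only after request 1 acknowledges success
    writer.write("lba_200", "data of request 2")
assert volume == {"lba_100": "data of request 1",
                  "lba_200": "data of request 2"}
```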
- a snapshot is an image of data at a point in time (the point in time when the copy begins).
- The purpose of a snapshot is to create a state view of the data volume at a specific point in time. Through this view, only the data of the data volume as it existed at creation time can be seen; modifications made to the data volume after that point (newly written data) are not reflected in the snapshot view. Using this snapshot view, the data can be copied.
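- The snapshot view can be illustrated with a minimal sketch (a plain dict copy stands in for the snapshot mechanism; a real implementation would typically use copy-on-write, which is elided here):

```python
# Minimal sketch of a snapshot view: the view pins the volume's state
# at the point in time of creation; writes made afterwards are not
# reflected in it.
volume = {"lba_0": "A", "lba_1": "B"}
snapshot_view = dict(volume)     # state view at the snapshot point in time
volume["lba_1"] = "B-modified"   # new data written after the snapshot
volume["lba_2"] = "C"
assert snapshot_view == {"lba_0": "A", "lba_1": "B"}
```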
- Since the snapshot data is "stationary", the production center can copy the snapshot data taken at that point in time to the disaster recovery center, completing remote data replication while continuing to execute write data requests at the production center.
- Data consistency requirements can also be met in this way. For example, if the data of write data request 2 is successfully copied to the disaster recovery center but the data of write data request 1 is not, the disaster recovery center's data can be restored to its previous state using the snapshot taken before write data request 2.
- When the production center takes a snapshot while executing write data requests, the generated snapshot data is saved in a data volume dedicated to storing snapshot data. Therefore, when the production center copies the snapshot data to the disaster recovery center, it must first read the snapshot data stored in that data volume into the cache and then send it to the disaster recovery center. However, the data used to generate the snapshot data may still exist in the cache, and this portion of the data cannot be reasonably utilized: each copy must first read the snapshot data from the data volume, making data replication take longer and lowering its efficiency. Summary of the Invention
- The embodiments of the invention provide a data sending method, which can send the information carried by a write data request directly from the cache of the first storage device to the second storage device, thereby improving the efficiency of data copying.
- a first aspect of the embodiments of the present invention provides a data sending method, including:
- the first number is used to identify a current replication task, and the method further includes:
- the second number is recorded, and the second number is the number corresponding to the most recently completed copy task before the current copy task.
- A second possible implementation manner of the first aspect further includes:
- reading, from the cache, the data to be written and the address information corresponding to the numbers after the second number and before the first number;
- sending the data to be written and the address information corresponding to those numbers to the second storage device.
- a third possible implementation manner of the first aspect of the embodiments of the present invention further includes: recording a current time slice number, where the current time slice number is used to generate the first number.
- a second aspect of the embodiments of the present invention provides a data receiving method, including:
- the second storage device receives the address information sent by the first storage device;
- when it is determined that the first storage device is faulty, the second storage device acquires the data to be written corresponding to the first number according to the address information, where the address information corresponding to the first number is the same as the received address information, and the first number is a number before the current time slice number; and
- the second number is added to the data to be written and the address information corresponding to the first number, and they are written into the cache.
- the method further includes: recording the current time slice number, where the current time slice number is used to generate the second number.
- the method further includes: receiving a read data request sent by the host, where the read data request includes the received address information; determining that the latest number corresponding to the received address information is the second number; and sending the data to be written corresponding to the second number to the host.
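- The lookup described above can be sketched as follows (an illustrative Python model; the `(number, address)`-keyed dict is an assumed stand-in for the second storage device's cache, not the patent's data structure):

```python
# Hedged sketch: cache entries are keyed by (number, address); a read
# returns the data tagged with the latest (highest) number recorded
# for the requested address.
cache = {
    (1, "lba_7"): "old data",
    (2, "lba_7"): "recovered data",  # entry added with the second number
}

def read(cache, address):
    numbers = [n for (n, a) in cache if a == address]
    latest = max(numbers)            # the latest number for this address
    return cache[(latest, address)]

assert read(cache, "lba_7") == "recovered data"
```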
- a third aspect of the embodiments of the present invention provides a storage device, including:
- a receiving module configured to receive a first write data request sent by the host, where the first write data request carries data to be written and address information;
- a read/write module configured to add the first number to the data to be written and the address information and write them into the cache, where the first number is the current time slice number; and to read, from the cache, the data to be written and the address information corresponding to the first number;
- a current time slice number manager configured to modify the current time slice number to identify the information carried by subsequent write data requests;
- a sending module configured to send the to-be-written data and address information to the second storage device.
- the first number is used to identify a current replication task.
- the current time slice number manager is further configured to record a second number, where the second number is a number corresponding to the most recently completed copy task before the current copy task.
- the read/write module is further configured to read, from the cache, the data to be written and the address information corresponding to the numbers after the second number and before the first number;
- the sending module is further configured to send the data to be written and the address information corresponding to those numbers to the second storage device.
- the current time slice number manager is further configured to record a current time slice number, where the current time slice number is used to generate the first number.
- a fourth aspect of the embodiments of the present invention provides a storage device, including: a receiving module, configured to receive address information sent by the first storage device;
- a locating module configured to: when it is determined that the first storage device is faulty, acquire the data to be written corresponding to the first number according to the address information, where the address information corresponding to the first number is the same as the received address information, and the first number is a number before the current time slice number;
- a write module configured to add a second number to the data to be written and the address information corresponding to the first number, and write them into the cache.
- the storage device further includes: a current time slice number manager, configured to record the current time slice number, where the current time slice number is used to generate the second number.
- the receiving module is further configured to receive a read data request sent by a host, where the read data request includes the received address information;
- the locating module is further configured to determine that the latest number corresponding to the received address information is the second number;
- the storage device further includes a sending module, where the sending module is configured to send data to be written corresponding to the second number to the host.
- a fifth aspect of the embodiments of the present invention provides a storage device, including: a processor, a memory, and a communication bus;
- the processor and the memory communicate via the communication bus;
- the memory is used to save a program
- the processor is configured to execute the program to:
- receive a first write data request carrying data to be written and address information; add the first number to the data to be written and the address information, and write them into the cache, where the first number is the current time slice number; read the data to be written and the address information corresponding to the first number from the cache; modify the current time slice number to identify the information carried by subsequent write data requests; and send the data to be written and the address information to the second storage device.
- the first number is used to identify a current replication task, and the processor is further configured to:
- the second number is recorded, and the second number is the number corresponding to the most recently completed copy task before the current copy task.
- the processor is further configured to: read, from the cache, the data to be written and the address information corresponding to the numbers after the second number and before the first number; and send that data to be written and address information to the second storage device.
- the processor is further configured to: record a current time slice number, where the current time slice number is used to generate the first number.
- a sixth aspect of the embodiments of the present invention provides a storage device, including: a processor, a memory, and a communication bus;
- the processor and the memory communicate via the communication bus;
- the memory is used to save a program
- the processor is configured to execute the program to:
- receive the address information sent by the first storage device; when it is determined that the first storage device is faulty, acquire the data to be written corresponding to the first number according to the address information, where the address information corresponding to the first number is the same as the received address information, and the first number is a number before the current time slice number; and add the second number to the data to be written and the address information corresponding to the first number, and write them into the cache.
- the processor is further configured to record the current time slice number, where the current time slice number is used to generate the second number.
- the processor is further configured to: receive a read data request sent by the host, where the read data request includes the received address information; determine that the latest number corresponding to the received address information is the second number; and send the data to be written corresponding to the second number to the host.
- In the embodiments of the present invention, the information carried by a write data request includes data to be written and address information. The first number is added to the data to be written and the address information, and they are written into the cache, where the first number is the current time slice number. When a copy task is triggered, the data to be written and the address information corresponding to the first number are read from the cache and sent to the second storage device. In addition, when the copy task is triggered, the current time slice number is modified, so that when the first storage device subsequently receives write data requests, it adds a number equal to the modified current time slice number to the information they carry. In this way, the information carried by the write data requests that need to be sent to the second storage device is separated, in the cache, from the information carried by the write data requests that the first storage device is currently receiving, and the former can be sent directly from the cache to the second storage device. Since the information is sent directly from the cache, there is no need to read the data from the data volume, so data replication takes less time and its efficiency is improved.
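- The numbering scheme summarized above can be sketched as follows (an illustrative Python model under assumed names such as `FirstStorageDevice`; the real devices operate on storage hardware and caches, not in-memory lists):

```python
# Sketch of the time-slice numbering scheme: incoming writes are
# tagged with the current time slice number in the cache; a copy task
# reads exactly the entries carrying that number, and the number is
# then advanced so replication data stays separate from new writes.
class FirstStorageDevice:
    def __init__(self):
        self.cache = []            # list of (number, address, data) entries
        self.current_number = 0    # current time slice number

    def write(self, address, data):
        # add the current time slice number to the write's information
        self.cache.append((self.current_number, address, data))

    def start_copy_task(self):
        n = self.current_number
        self.current_number += 1   # subsequent writes get the new number
        # read directly from the cache: entries tagged with the old number
        return [(addr, data) for (num, addr, data) in self.cache if num == n]

dev = FirstStorageDevice()
dev.write("lba_1", "x")
dev.write("lba_2", "y")
to_send = dev.start_copy_task()    # sent to the second device, no volume I/O
dev.write("lba_3", "z")            # tagged with the modified number
assert to_send == [("lba_1", "x"), ("lba_2", "y")]
assert dev.cache[-1] == (1, "lba_3", "z")
```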
- FIG. 1 is a schematic diagram of an application network architecture of a data sending method according to an embodiment of the present invention
- FIG. 2 is a flowchart of a data sending method according to an embodiment of the present invention
- FIG. 3 is a flowchart of a data receiving method according to an embodiment of the present invention.
- FIG. 4 is a signaling diagram of a data sending method according to an embodiment of the present invention.
- FIG. 5 is a schematic structural diagram of a storage device according to an embodiment of the present disclosure
- FIG. 6 is a schematic structural diagram of another storage device according to an embodiment of the present invention
- FIG. 7 is a schematic structural diagram of still another storage device according to an embodiment of the present invention.
- FIG. 8 is a schematic structural diagram of still another storage device according to an embodiment of the present invention. DETAILED DESCRIPTION
- a production center includes a production host, a connection device, and a production array (corresponding to the first storage device in the following embodiment);
- the system architecture is similar to that of the production center, including the disaster recovery host, the connection device, and the disaster recovery array (corresponding to the second storage device in the following embodiment).
- the production center and the disaster recovery center can transmit data over IP (Internet Protocol) or FC (Fibre Channel).
- the control center can be deployed on the production center or on the disaster recovery center.
- Third-party devices can be deployed between the production center and the disaster recovery center.
- the control center is configured to signal the disaster recovery array to take over the production array to handle the host service when the production array fails.
- Both the production host and the disaster recovery host can be any computing device known in the art, such as servers, desktop computers, and the like. Inside the host, an operating system and other applications are installed.
- The connection device can include any interface known in the prior art between a storage device and a host, such as a Fibre Channel switch or other existing switch.
- Both the production array and the disaster recovery array can be storage devices known in the prior art, such as a Redundant Array of Inexpensive Disks (RAID), Just a Bunch Of Disks (JBOD), one or more interconnected disk drives such as a Direct Access Storage Device (DASD), or a tape storage device such as a tape library, having one or more storage units.
- the storage space of the production array may include multiple data volumes.
- the data volume is a logical storage space mapped by physical storage space.
- the data volume may be a Logical Unit Number (LUN) or a file system.
- the structure of the disaster recovery array is similar to the production array.
- Referring to FIG. 2, an embodiment of a data sending method according to the present invention is described.
- The embodiment of the present invention is applied to a first storage device, where the first storage device includes a controller, a cache memory (hereinafter referred to as a cache or Cache), and a storage medium.
- The controller is the processor of the first storage device, configured to execute IO commands and other data services;
- the cache is a memory that sits between the controller and the hard disk; its capacity is smaller than the hard disk's, but its speed is much higher than the hard disk's;
- the storage medium is the main storage of the first storage device, generally a non-volatile storage medium, for example, a magnetic disk.
- In the embodiments of the present invention, the physical storage space included in the first storage device is referred to as the storage medium. The following steps may specifically be performed by the controller in the first storage device.
- Step S101 The first storage device receives a first write data request sent by the host, where the first write data request carries data to be written and address information.
- the address information may include a logical block address (LBA).
- the address information may further include an ID of the data volume of the first storage device.
- Step S102 Add the first number to the data to be written and the address information, and write them into the cache, where the first number is the current time slice number.
- The first storage device may include a current time slice number manager, and the current time slice number may be represented by a numerical value, such as 0, 1, or 2, or by letters, such as a, b, or c, which is not limited here.
- The information carried in the modified first write data request is written into the cache, so that the data to be written, the address information, and the first number carried by the first write data request are saved in the cache.
- Other write data requests may be received over a period of time; the first number must also be added to the information they carry before it is written into the cache. It should be noted that as long as the current time slice number has not been modified, the first number is added to the information carried by each incoming write data request.
- Step S103 Read the to-be-written data and address information corresponding to the first number from the cache.
- When the copy task is triggered, the first storage device may read the data to be written and the address information corresponding to the first number from the cache. It may be understood that there may be more than one set of data to be written and address information corresponding to the first number.
- The copy task is that the first storage device sends, to the second storage device, the information carried by the write data requests received by one data volume over a period of time, where that information has been tagged with a number equal to the current time slice number.
- The replication task may be triggered by a timer or manually, which is not limited herein.
- the purpose of the replication is to send the data to be written carried by the write data request received by the first storage device to the second storage device for storage, so that when the first storage device fails, the second storage device can take over the operation of the first storage device.
- the address information (for example, LBA) carried by the write data request also needs to be sent to the second storage device, and the LBA is used to instruct the second storage device to store the address of the data to be written. Since the second storage device has the same physical structure as the first storage device, the LBA applicable to the first storage device is also applicable to the second storage device.
- The copy task is for one data volume of the first storage device; when the first storage device includes multiple data volumes, each data volume corresponds to its own copy task.
- Step S104 Modify the current time slice number to identify the information carried by subsequent write data requests. When the copy task is triggered, the current time slice number manager needs to modify the current time slice number.
- The information carried by subsequent write data requests needs to have another number added to it, and that other number is assigned according to the modified current time slice number.
- In this way, the information carried by the write data requests that need to be sent to the second storage device can be distinguished, in the cache, from the information carried by the write data requests that the first storage device is currently receiving.
- There is no fixed order between step S103 and step S104.
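As a rough illustration, steps S101 through S104 can be sketched as follows. This is a minimal sketch, not part of the embodiment; the class and method names, and the use of a dictionary keyed by (number, volume ID, LBA), are assumptions made only for illustration.

```python
class WriteCache:
    """Illustrative model of the first storage device's write cache."""

    def __init__(self):
        self.ctpn = 1     # current time slice number (CTPN)
        self.entries = {} # (number, volume_id, lba) -> data to be written

    def on_write_request(self, volume_id, lba, data):
        # Step S102: add the first number (the CTPN) to the information
        # carried by the write data request and write it into the cache.
        self.entries[(self.ctpn, volume_id, lba)] = data

    def start_replication(self):
        # Steps S103/S104: remember the number being replicated (the first
        # number), then modify the CTPN so that information carried by
        # subsequent write data requests is tagged with a different number.
        first_number = self.ctpn
        self.ctpn += 1
        return first_number
```

With this sketch, requests received before the replication task is triggered are tagged 1, and requests received afterwards are tagged 2, which is exactly the separation step S104 is intended to achieve.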
- Step S105 Send the data to be written and the address information to the second storage device.
- Specifically, the first storage device sends the data to be written and the address information corresponding to the first number, read from the cache, to the second storage device.
- The first storage device may directly send all of the read data to be written and address information to the second storage device; or, after obtaining the ID of the data volume of the second storage device, it may generate a new write data request from the data to be written and the address information carried by each write data request, together with the ID of the data volume of the second storage device, and then send the new write data request to the second storage device.
- the information carried by the write data request includes data to be written and address information, and the first number is added to the data to be written and the address information.
- In summary: when a write data request is received, the first number (the current time slice number) is added to the data to be written and the address information, which are then written into the cache. When the replication task is triggered, the data to be written and the address information corresponding to the first number are read from the cache and sent to the second storage device. In addition, during the replication task the current time slice number is modified, so that when the first storage device receives further write data requests it adds the modified current time slice number to the information they carry. The information carried by the write data requests that need to be sent to the second storage device is thereby separated, in the cache, from the information carried by the write data requests that the first storage device is still receiving, and the former is sent directly from the cache to the second storage device. Because the information is sent directly from the cache, there is no need to read the data from the data volume; the time taken by data replication is therefore short, and the efficiency of data replication is improved. It can be understood that, in the foregoing embodiment, when the replication task is triggered, the first storage device sends the data to be written and the address information corresponding to the current time slice number to the second storage device, and simultaneously modifies the current time slice number so as to identify the information carried by subsequent write data requests.
- When the next replication task is triggered, the data to be written and the address information corresponding to the modified current time slice number are sent to the second storage device, and the current time slice number is modified again.
- In this way, the first storage device is guaranteed to send the information carried by the write data requests it receives to the second storage device in batches.
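The batch-sending behaviour of steps S103 and S105 can be sketched as below. This is an illustrative sketch only: the cache is assumed to be a dictionary keyed by (number, volume ID, LBA), and `send` stands in for whatever communication link carries data to the second storage device.

```python
def run_replication_task(cache, first_number, send):
    """Read every cache entry tagged with first_number and send it.

    cache: dict mapping (number, volume_id, lba) -> data to be written.
    send:  callable taking (volume_id, lba, data); models the link to
           the second storage device.
    """
    batch = []
    for (number, volume_id, lba), data in cache.items():
        if number == first_number:
            # Step S105: data and address information are sent directly
            # from the cache, with no read from the data volume.
            send(volume_id, lba, data)
            batch.append((volume_id, lba))
    return batch
```

Entries tagged with other numbers (requests still arriving under the modified current time slice number) are left untouched, which is what keeps the in-flight batch separate from new writes.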
- When there is more than one disaster recovery center, for example when the storage device of a second disaster recovery center is a third storage device, the first storage device also needs to send the information carried by the received write data requests to the third storage device.
- Whenever a replication task directed at either the second storage device or the third storage device is triggered, the current time slice number manager modifies the current time slice number. At that point, however, the information carried by the write data requests corresponding to the number in effect before the modification may not yet have been sent to the other storage device, for example the third storage device.
- Step S106 Record a second number, where the second number is a number corresponding to the most recently completed replication task before the current replication task.
- the first number is the same as the current time slice number, and may be used to identify the current copy task.
- The current replication task refers to the first storage device sending, to the second storage device, the information carried by the write data requests received for one data volume during the current time period, where the information carried by those write data requests has been given the same number as the current time slice number.
- the second number is the number corresponding to the most recently completed copy task before the current copy task.
- As described above, the current time slice number may be modified whenever a replication task is initiated towards the storage device of any other disaster recovery center. Therefore, the number corresponding to the most recently completed replication task needs to be recorded. If there is any other number between the second number and the first number, the information carried by the write data requests corresponding to that number has not yet been sent to the second storage device, and the following steps need to be performed.
- Step S107 Read, from the cache, the data to be written and the address information corresponding to the numbers after the second number and before the first number.
- The specific reading process is similar to step S103 and is not described here again.
- Step S107 may be performed in sequence with step S103, or the two steps may be performed simultaneously.
- Step S108 Send, to the second storage device, the data to be written and the address information corresponding to the numbers after the second number and before the first number.
- The specific sending process is similar to step S105 and is not described here again.
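Steps S106 through S108 amount to a catch-up rule: every number strictly between the second number (the most recently completed replication task) and the first number (the current one) still has unsent data. A minimal sketch of that rule, with an assumed convention that numbers increase by one per task:

```python
def numbers_to_send(second_number, first_number):
    # Steps S106-S108: besides the first number itself, every number after
    # the most recently completed replication task and before the current
    # one corresponds to data that has not yet reached the second storage
    # device and must also be read from the cache and sent.
    return list(range(second_number + 1, first_number))
```

For example, if the last completed task was number 2 and the current task is number 5, the data tagged 3 and 4 must be sent as well; if the numbers are consecutive, nothing extra is pending.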
- FIG. 2 is an embodiment of a data receiving method according to the present invention.
- the embodiment of the present invention is used for an application scenario in which a disaster recovery center receives information written by a write data request sent by a production center.
- the method can include:
- Step S201 The second storage device receives the address information sent by the first storage device.
- The second storage device may receive the data to be written and the address information sent by the first storage device, or may receive a write data request sent by the first storage device, where the write data request includes the data to be written and the address information. The address information may be a logical block address (LBA).
- The address information may further include an ID of the data volume of the second storage device. It can be understood that there may be more than one piece of address information here.
- After receiving the data to be written and the address information, the second storage device adds the same number as its current time slice number to them and writes them into the cache, so that the cache saves the number together with the data to be written and the address information.
- The second storage device also includes a current time slice number manager, which stores the current time slice number. The current time slice number may be represented by numerical values, for example 0, 1, 2, or by letters, such as a, b, c; this is not limited here.
- the current time slice number here may not be associated with the current time slice number in the first storage device.
- Step S202 When it is determined that the first storage device is faulty, the second storage device acquires, according to the address information, the data to be written corresponding to a first number, where the address information corresponding to the first number is the same as the received address information, and the first number is a number before the current time slice number.
- As described in step S201, the second storage device receives the information carried by the write data requests, adds the same number as the current time slice number to the information carried by each write data request, and stores it in the cache.
- The second storage device may have received only part of the data to be written corresponding to the current time slice number of the first storage device; in that case, the data held by the second storage device may not be up to date. If it directly takes over from the first storage device, data consistency cannot be guaranteed.
- This is because, when processing a read request from the host, the second storage device searches for the latest number corresponding to the address information and would send the data corresponding to the current time slice number to the host, even though that data is incomplete. Therefore, at this time the data in the cache of the second storage device needs to be restored to the data corresponding to a number before the current time slice number of the second storage device.
- The method for determining that the first storage device is faulty may be that the control center sends a signal to the second storage device, where the signal is used to indicate that the first storage device is faulty and that the second storage device needs to take over from the first storage device to handle the host services.
- Upon completion of each replication task, the control center may send an indication of successful replication to the first storage device and the second storage device, respectively. If the second storage device has not received the indication, the current replication task is not completed.
- The completion of the replication task means that the first storage device has sent the information carried by all the write data requests corresponding to the current time slice number to the second storage device, and the second storage device has received all of it.
- When the second storage device determines that the first storage device is faulty, if the current replication task has been completed, the second storage device can directly take over the operation of the first storage device, and data consistency is guaranteed. That situation is not within the scope of the embodiments of the present invention.
- If the current replication task is not completed, the data in the cache of the second storage device needs to be restored to the data corresponding to a number before the current time slice number.
- The specific recovery method may be: according to the received address information, searching the address information corresponding to the number immediately preceding the current time slice number for the same address information; if it is not found, continuing the search in the address information corresponding to the number before that, until the same address information is found, and then acquiring the data to be written corresponding to that number.
- Step S203 Add a second number to the data to be written and the address information corresponding to the first number, and write them into the cache.
- The second number is obtained by modifying the current time slice number, and in this embodiment it is the latest number saved in the cache.
- When the second storage device subsequently processes a read request, it learns that the latest number corresponding to the address information is the second number, and sends the data to be written corresponding to the second number to the host. This ensures data consistency.
- In this embodiment, the second storage device receives the address information sent by the first storage device; when the first storage device fails, it obtains, according to the address information, the data to be written corresponding to a number before the current time slice number, adds the second number to that data to be written and address information, and stores them in the cache.
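The recovery procedure of steps S202 and S203 can be sketched as follows. This is only an illustrative sketch under assumed conventions (the cache as a dictionary keyed by (number, volume ID, LBA), numbers decreasing by one per step when walking back); the real embodiment does not prescribe these details.

```python
def recover(cache, ctpn, addresses):
    """Restore consistent data after an incomplete replication task.

    cache:     dict mapping (number, volume_id, lba) -> data to be written.
    ctpn:      current time slice number of the second storage device.
    addresses: (volume_id, lba) pairs received during the incomplete task.
    Returns the second number under which the recovered data is re-tagged.
    """
    second_number = ctpn + 1  # Step S203: the modified, latest number
    for volume_id, lba in addresses:
        # Step S202: walk back from the number before the CTPN until a
        # cached entry with the same address information is found.
        number = ctpn - 1
        while number >= 0:
            if (number, volume_id, lba) in cache:
                # Re-tag the older, consistent data with the second number,
                # so reads see it as the latest version for this address.
                cache[(second_number, volume_id, lba)] = cache[(number, volume_id, lba)]
                break
            number -= 1
    return second_number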
- FIG. 3 is an embodiment of a data transmission method according to the present invention.
- the cache in the production array is referred to as the first cache
- the cache in the disaster recovery array is referred to as the second cache.
- the method includes:
- Step S301 The production array receives the write data request A sent by the production host.
- the write data request A includes a volume ID, a to-be-written address A, and a to-be-written data A
- The to-be-written address A refers to the logical address, such as an LBA, at which the to-be-written data A is to be written in the production array. Usually, when executing write data request A, the production array needs to convert the LBA into a PBA (Physical Block Address) and then write the to-be-written data A into the storage medium according to the PBA.
- the volume ID is the ID of the data volume corresponding to the write data request A.
- In this embodiment, the production array includes one data volume (hereinafter referred to as the primary volume) as an example, and the information carried by write data request A includes the primary volume ID, the to-be-written address A, and the to-be-written data A.
- Step S302 The production array modifies write data request A into write data request A', where write data request A' includes the information carried by write data request A and the first number.
- The controller of the production array may include a current time slice number (CTPN) manager, in which the current time slice number is recorded. The current time slice number is used to generate the first number; specifically, the first number is equal to the current time slice number.
- According to the first number, write data request A is modified into write data request A'.
- The modification may be performed by adding the first number to the information carried by write data request A.
- For example, the current time slice number may be 1, and the first number is then also 1.
- Alternatively, when write data request A is received, a time stamp may be recorded, and the time stamp may be matched against a pre-saved number sequence to determine the number corresponding to the time stamp. The number sequence may be a mapping table or take other forms, which is not limited here. The number sequence includes a plurality of numbers, each number corresponding to an interval of time stamps, as shown in Table 1:
- After the number corresponding to the time stamp is determined, write data request A can be modified into write data request A' according to that number.
- Step S303 The production array writes write data request A' to the first cache, so that the information carried by write data request A' is saved in the first cache.
- the information carried by the write data request A' includes a first number, a primary volume ID, an address to be written A, and a data to be written A.
- Before the current time slice number is modified, the information carried by all received write data requests will have the first number added to it.
- For example, write data request B may also be received and modified into write data request B', so that write data request B' further includes the first number; write data request C may be received and modified into write data request C', so that write data request C' further includes the first number.
- the saved information in the first cache can be as shown in Table 2:
- In this embodiment, the production array includes one data volume (for example, the primary volume), so the ID of the data volume carried by write data request A', write data request B', and write data request C' is the primary volume ID.
- In other embodiments, the production array may contain a plurality of data volumes, so the IDs of the data volumes carried by write data request A', write data request B', and write data request C' may be different.
- Table 2 is only an example of one form in which the information carried by the write data requests is saved in the first cache; the information may also be saved in the form of a tree, which is not limited here.
- The number, the volume ID, and the to-be-written address can be regarded as the index of Table 2; the corresponding to-be-written data can be found according to this index. When the index is the same, the corresponding to-be-written data should also be the same. Therefore, when a new write data request is written, it is necessary to determine whether the first cache already holds information with the same number, volume ID, and to-be-written address as the new write data request; if so, the information carried by the new write data request overwrites the original information.
- For example, write data request D is received, where write data request D includes the primary volume ID, to-be-written address B, and to-be-written data D. Write data request D is modified into write data request D', so that write data request D' also includes the first number. Then, when write data request D' is written into the first cache, it is determined whether the first cache stores information with the same number, volume ID, and to-be-written address as write data request D'; if so, the data carried by write data request D' overwrites the original information. Since the number, volume ID, and to-be-written address carried by write data request D' are the same as those included in write data request B', in the first cache the information of write data request D' overwrites the information of write data request B'.
- At this time, the information saved in the first cache may be as shown in Table 3: (number 1, primary volume ID, to-be-written address A, to-be-written data A); (number 1, primary volume ID, to-be-written address B, to-be-written data D); (number 1, primary volume ID, to-be-written address C, to-be-written data C).
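The overwrite rule just described falls out naturally if the first cache is modelled as a dictionary keyed by (number, volume ID, to-be-written address) — an illustrative assumption, not a prescribed implementation:

```python
# Model the first cache as a dict keyed by the index of Table 2:
# (number, volume ID, to-be-written address) -> to-be-written data.
cache = {}

# Write data request B': number 1, primary volume ID, address B, data B.
cache[(1, 'primary volume ID', 'address B')] = 'data B'

# Write data request D' carries the same number, volume ID and to-be-written
# address, so its data overwrites the information of write data request B'.
cache[(1, 'primary volume ID', 'address B')] = 'data D'
```

After the second assignment the cache holds a single entry for that index, containing to-be-written data D, matching Table 3.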
- Step S304 When the copy task is triggered, the production array modifies the current time slice number included in the CTPN manager; for example, the current time slice number can be changed from 1 to 2.
- In the following, to distinguish the current time slice number of the production array from that of the disaster recovery array, the current time slice number of the production array is referred to as the first current time slice number, and the current time slice number of the disaster recovery array is referred to as the second current time slice number.
- Subsequently, write data request E is received, where write data request E includes the primary volume ID, to-be-written address A, and to-be-written data E; write data request E is modified into write data request E', so that write data request E' also includes the number 2. Write data request F is then received, where write data request F includes the primary volume ID, to-be-written address F, and to-be-written data F; write data request F is modified into write data request F', so that write data request F' also includes the number 2.
- the information stored in the first cache can be as shown in Table 4:
- Step S305 The disaster recovery array modifies the second current time slice number included in the CTPN manager; for example, it can be modified from 11 to 12.
- the disaster recovery array may also include its own CTPN manager.
- When the replication task is triggered, the CTPN manager in the production array modifies the first current time slice number; the control center may also send a control signal to the disaster recovery array, so that the disaster recovery array modifies the second current time slice number contained in its own CTPN manager. Therefore, there is no fixed order between step S305 and step S304.
- Step S306A The production array reads the information carried by the write data request corresponding to the first number from the first cache.
- the information carried by the write data request corresponding to the first number is as shown in Table 3.
- Step S306B The production array obtains an ID of a data volume to be written into the disaster recovery array.
- Step S306C The production array generates a new write data request according to the ID of the data volume and the information carried by the write data request corresponding to the first number.
- For example, write data request A" may be generated according to the ID of the data volume, to-be-written address A, and to-be-written data A; write data request D" may be generated according to the ID of the data volume, to-be-written address B, and to-be-written data D; write data request C" may be generated according to the ID of the data volume, to-be-written address C, and to-be-written data C.
- In other embodiments, both the production array and the disaster recovery array may include a plurality of data volumes, in which case the IDs of the data volumes contained in write data request A", write data request D", and write data request C" may be different.
- the ID of each data volume in the disaster recovery array corresponds to the ID of each data volume in the production array.
- Step S307 The production array sends the generated new write data request to the disaster recovery array.
- For example, the production array sends write data request A", write data request D", and write data request C" to the disaster recovery array.
- Step S308 The disaster recovery array modifies the received write data request.
- Specifically, write data request A" may be modified according to the second current time slice number recorded in the CTPN manager; the modification may consist of adding the number 12 to the information carried by write data request A", modifying it into write data request A"'. Similarly, the number 12 is added to the information carried by write data request D", modifying it into write data request D"'; and the number 12 is added to the information carried by write data request C", modifying it into write data request C"'.
- Step S309 The disaster recovery array writes the modified write data request to the second cache.
- the information stored in the second cache can be as shown in Table 5:
- Step S310 The disaster recovery array writes the data to be written into the storage medium corresponding to the to-be-written address, according to the to-be-written address carried by each write data request.
- The data in the second cache needs to be written to the hard disk. Specifically, to-be-written data A is written into the storage medium corresponding to to-be-written address A, to-be-written data D is written into the storage medium corresponding to to-be-written address B, and to-be-written data C is written into the storage medium corresponding to to-be-written address C.
- Step S311 The production array writes the data to be written into the storage medium corresponding to the to-be-written address, according to the to-be-written address carried by each write data request.
- The production array needs to write the data in its cache to the hard disk when the space utilization of the cache reaches a certain threshold.
- At this time the first cache stores, among other information, entries numbered 2 for the primary volume, including to-be-written data E for to-be-written address A and to-be-written data F for to-be-written address F. Specifically, for write data requests with the same volume ID and the same to-be-written address but different numbers, the data carried by the write data request with the smaller number may be written first and the data carried by the write data request with the larger number written afterwards, for example writing to-be-written data A first and then to-be-written data E; alternatively, the data carried by the write data request with the larger number may be written directly, without writing the data carried by the write data request with the smaller number, for example directly writing to-be-written data E.
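The second destaging option of step S311 — skipping smaller-numbered data when a larger-numbered write to the same address exists — can be sketched as follows. The function name and the dictionary model of the cache are illustrative assumptions.

```python
def destage(entries):
    """Pick, for each (volume ID, address), only the data with the
    largest number; smaller-numbered data for the same address need
    never reach the storage medium.

    entries: dict mapping (number, volume_id, lba) -> data to be written.
    Returns a dict mapping (volume_id, lba) -> data actually destaged.
    """
    latest = {}
    for (number, volume_id, lba), data in entries.items():
        key = (volume_id, lba)
        if key not in latest or number > latest[key][0]:
            latest[key] = (number, data)
    return {key: data for key, (number, data) in latest.items()}
```

In the example above, to-be-written data A (number 1) and to-be-written data E (number 2) share to-be-written address A, so only data E is written to the storage medium.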
- There is no fixed order between step S310 and step S311.
- Step S312 When the copy task is triggered, the production array modifies the first current time slice number included in its CTPN manager; for example, the current time slice number can be changed from 2 to 3.
- Thereafter, the number 3 is added to the information carried by write data requests received by the production array.
- Step S313 The disaster recovery array modifies the second current time slice number included in the CTPN manager; for example, the second current time slice number can be modified from 12 to 13.
- Step S314 The production array reads the information carried by the write data request corresponding to the number 2, and generates a corresponding write data request to send to the disaster recovery array.
- the information carried by the write data request corresponding to the number 2 includes the information carried by the write data request E and the information carried by the write data request F.
- Specifically, the production array may generate write data request E" according to the ID of the data volume, to-be-written address A, and to-be-written data E, and may generate write data request F" according to the ID of the data volume, to-be-written address F, and to-be-written data F. Therefore, the write data requests that the production array sends to the disaster recovery array are write data request E" and write data request F".
- It should be noted that when the production array sends write data requests to the disaster recovery array, it does not necessarily send them in sequence; they may be sent in any order. Specifically, write data request E" may be sent first and then write data request F", or write data request F" may be sent first and then write data request E".
- At this time, the second current time slice number in the CTPN manager of the disaster recovery array is 13, so after receiving write data request E" the disaster recovery array needs to modify it by adding the number 13 to the information it carries.
- Step S315 The disaster recovery array receives the instruction to take over the production array to process the host service.
- the disaster recovery array needs to take over the production array to process the host service, so the disaster recovery array needs to meet the data consistency requirement.
- As described in step S314, the write data requests that the disaster recovery array needs to receive include write data request E" and write data request F".
- If write data request E" and write data request F" have both been modified and successfully written into the second cache before the disaster recovery array starts to take over the production array to process the host services, the current replication cycle has been completed and the data consistency requirement is met.
- If the disaster recovery array has modified write data request E" and successfully written it into the second cache, but the production array fails before write data request F" is successfully written into the second cache and the disaster recovery array starts to take over the production array to process the host services, then the current replication task is not completed and the data consistency requirement is not met. Likewise, if the disaster recovery array has modified write data request F" and successfully written it into the second cache, but the production array fails before write data request E" is successfully written into the second cache and the disaster recovery array starts to take over the production array to process the host services, then the current replication task is not completed and the data consistency requirement is not met.
- Step S316 The disaster recovery array acquires the to-be-written address carried by the write data request that has been successfully written into the second cache in the current replication period.
- For example, write data request E" has been successfully written into the second cache, and the to-be-written address it carries is to-be-written address A.
- Step S317 The disaster recovery array performs matching against the to-be-written addresses in the information carried by the write data requests corresponding to the previous number, to find a to-be-written address that is the same as to-be-written address A.
- If it is found, step S318 is performed; if not, the matching continues in the information carried by the write data requests corresponding to the number before that (for example, number 11), until a to-be-written address that is the same as the to-be-written address A carried by write data request E" is found.
- the information carried by the write data request corresponding to the number 12 is as shown in Table 5.
- It can be seen that the to-be-written address carried by write data request A"' is the same as the to-be-written address carried by write data request E"'. When each write data request includes the ID of a data volume, the condition that both the to-be-written address and the ID of the data volume are the same must be met.
- Step S318 Generate a new write data request according to the information found for the to-be-written address, and write it into the second cache, where the new write data request includes the modified number.
- For example, the information read from the second cache includes to-be-written address A and to-be-written data A (and may also contain the slave volume ID); based on the read information, plus the modified number (for example, the number modified from 13 to 14), a new write data request is generated.
- At this time, the correspondence saved in the second cache is as shown in Table 6, which includes, for example, the entry (number 12, slave volume ID, to-be-written address B, to-be-written data D).
- When the disaster recovery array subsequently receives a read data request from the host, it searches the second cache for the data to be written whose data volume ID is the slave volume ID, whose to-be-written address is to-be-written address A, and whose number is the latest, and sends that data to the host.
- the data to be written A corresponding to the number 14 is sent from the second cache to the host.
- In this embodiment, the production array can send the information carried by the received write data requests directly from the cache to the disaster recovery array, without having to read the related information from the data volume, thereby improving the efficiency of data replication, while the disaster recovery array also guarantees data consistency.
- In the prior art, data replication is implemented by using snapshot data. This requires that each time the production array executes a write data request, the data carried by the write data request is put into the cache, the old data saved at the to-be-written address is read according to the to-be-written address carried by the write data request and stored in a data volume, and only then is the data in the cache written to the to-be-written address. Only after these operations are completed can the response message of the write data request be returned. Because of the added snapshot processing steps, the delay of write data request processing is increased. In the embodiment of the present invention, there is no need to perform snapshot processing on the data; although the write data request is modified, this takes little time. Therefore, compared with the prior art, the embodiment of the present invention reduces the delay of write data request processing.
- FIG. 5 is a schematic structural diagram of a storage device 50 according to an embodiment of the present invention.
- the storage device 50 includes: a receiving module 501, a reading and writing module 502, and a current time slice number manager 503. And sending module 504.
- The receiving module 501 is configured to receive a first write data request sent by the host, where the first write data request carries the data to be written and the address information.
- The address information may include a logical block address (LBA).
- the address information may further include an ID of the data volume of the storage device 50.
- The reading and writing module 502 is configured to add the first number to the data to be written and the address information and write them into the cache, where the first number is the current time slice number; and to read from the cache the data to be written and the address information corresponding to the first number.
- A current time slice number manager 503 may be included in the storage device 50. The current time slice number manager 503 stores the current time slice number, which may be represented by numerical values, such as 0, 1, or 2, or by letters, such as a, b, c; this is not limited here.
- The information carried by the modified first write data request is written into the cache, so that the data to be written, the address information, and the first number carried by the first write data request are saved in the cache.
- write data requests can be received for a period of time. It is also necessary to add a first number to the information carried by it and write it to the cache. It should be noted that before the current time slice number is changed, the information carried in the write data request is increased by the first number.
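The tagging described above can be sketched in a few lines. The `ProductionCache` class and its field names below are illustrative assumptions for this sketch, not the patent's actual implementation:

```python
# Sketch: every incoming write is tagged with the current time slice
# number before being stored in the cache. Entries are keyed by
# (number, volume ID, LBA), so later examples can look them up by number.

class ProductionCache:
    def __init__(self):
        self.ctpn = 1          # current time slice number
        self.entries = {}      # (number, volume_id, lba) -> data

    def handle_write(self, volume_id, lba, data):
        # Add the first number (= current time slice number) to the
        # information carried by the write data request, then cache it.
        self.entries[(self.ctpn, volume_id, lba)] = data

cache = ProductionCache()
cache.handle_write("primary", 0x10, b"A")
cache.handle_write("primary", 0x20, b"B")
print(sorted(cache.entries))   # both writes carry number 1
```

Until the current time slice number changes, every write lands under the same number, which is what later lets a whole batch be read back at once.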
- When the replication task is triggered, the storage device 50 can read the data to be written and the address information corresponding to the first number from the cache. It can be understood that there may be more than one piece of data to be written and address information corresponding to the first number.
- The replication task means that the storage device 50 sends, to the storage device of the disaster recovery center, the information carried by the write data requests received for one data volume over a period of time; the information carried by each of these write data requests has been tagged with the same number, equal to the current time slice number.
- The replication task may be triggered by a timer or manually, which is not limited here.
- The purpose of replication is to send the data to be written received by the storage device 50 to the storage device of the disaster recovery center for storage, so that when the storage device 50 fails, the storage device of the disaster recovery center can take over for the storage device 50.
- The address information (for example, the LBA) also needs to be sent to the storage device of the disaster recovery center; the LBA indicates the address at which the storage device of the disaster recovery center stores the data to be written. Because the storage device of the disaster recovery center has the same physical structure as the storage device 50, an LBA that applies to the storage device 50 also applies to the storage device of the disaster recovery center.
- The replication task is defined for one data volume of the storage device 50; when the storage device 50 includes multiple data volumes, each data volume corresponds to one replication task.
- the current time slice number manager 503 is configured to modify the current time slice number to identify information carried by the subsequent write data request.
- When the replication task is triggered, the current time slice number manager 503 needs to modify the current time slice number. The information carried by subsequent write data requests is then tagged with another number, which is assigned from the modified current time slice number.
- the sending module 504 is configured to send the data to be written and the address information to the storage device of the disaster recovery center.
- The storage device 50 sends the data to be written and the address information corresponding to the first number, read from the cache, to the storage device of the disaster recovery center.
- The storage device 50 can send all the data to be written and the address information directly to the storage device of the disaster recovery center; alternatively, after obtaining the ID of the data volume of the storage device of the disaster recovery center, it can generate a new write data request from the data to be written and the address information carried by each write data request together with that data volume ID, and then send the new requests to the storage device of the disaster recovery center.
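The replication-task flow described above (read all cache entries tagged with the first number, then bump the current time slice number so new writes are tagged differently) can be sketched as follows; the `entries` layout and the `replicate` helper are hypothetical names for illustration:

```python
# Sketch of a replication-task trigger: select the batch for the first
# number, then advance the current time slice number so that subsequent
# writes are kept separate from the batch being sent.

entries = {                      # (number, volume_id, lba) -> data
    (1, "primary", 0x10): b"A",
    (1, "primary", 0x20): b"B",
}
ctpn = 1                         # current time slice number

def replicate(entries, number):
    # Everything tagged with `number` belongs to this replication task.
    return [(vol, lba, data)
            for (n, vol, lba), data in entries.items() if n == number]

batch = replicate(entries, ctpn)   # read first-number entries from cache
ctpn += 1                          # subsequent writes get the new number
print(len(batch), ctpn)
```

Because the batch is read straight out of the cache, nothing has to be fetched from the data volume before sending.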
- After the storage device 50 receives the write data request sent by the host, the information carried by the request, including the data to be written and the address information, is tagged with the first number and written to the cache, the first number being the current time slice number. The data to be written and the address information corresponding to the first number are then read from the cache and sent to the storage device of the disaster recovery center. At the same time, the current time slice number is modified, so that write data requests subsequently received by the storage device 50 are tagged with the modified number.
- FIG. 6 is a schematic structural diagram of a storage device 60 according to an embodiment of the present invention. As shown in FIG. 6, the storage device 60 includes: a receiving module 601, a searching module 602, and a writing module 604.
- the receiving module 601 is configured to receive address information sent by the storage device 50.
- The storage device 60 may receive the data to be written and the address information sent by the storage device 50, or it may receive a write data request sent by the storage device 50, where the write data request includes the data to be written and the address information.
- The address information may be a logical block address (LBA). When the storage device 60 includes multiple data volumes, the address information may also include the ID of a data volume of the storage device 60. It can be understood that there may be more than one piece of address information here.
- After receiving the data to be written and the address information, the storage device 60 adds to them the same number as its current time slice number and writes them to the cache, so that the number, the data to be written, and the address information are saved together in the cache.
- The storage device 60 may also include a current time slice number manager 603, where the current time slice number manager 603 stores the current time slice number. The current time slice number can be represented by a numerical value, for example 0, 1, or 2, or by a letter, for example a, b, or c; this is not limited here. The current time slice number here need not be related to the current time slice number in the storage device 50.
- The searching module 602 is configured to: when it is determined that the storage device 50 is faulty, acquire, according to the address information, the data to be written corresponding to the first number, where the address information corresponding to the first number is the same as the received address information, and the first number is a number preceding the current time slice number.
- Normally, if both devices run properly, the storage device 60 receives all the information carried by the write data requests sent by the storage device 50, adds to the information carried by each write data request the same number as its current time slice number, and stores it in the cache.
- However, if the storage device 50 fails, the storage device 60 may have received only part of the data to be written corresponding to the current time slice number of the storage device 50. In that case, the data held by the storage device 60 may be unreliable; if it directly took over for the storage device 50, data consistency could not be guaranteed. For example, if the host sent a read data request for the data stored at the address information, the storage device 60 would search for the latest number corresponding to the address information and send the host the data to be written corresponding to its current time slice number, but that data would be unreliable. Therefore, at this point the data in the cache of the storage device 60 must be restored to the data corresponding to the number preceding the current time slice number of the storage device 60.
- The failure of the storage device 50 may be determined by the control center sending a signal to the storage device 60, the signal indicating that the storage device 50 is faulty and that the storage device 60 needs to take over host services from the storage device 50.
- Normally, when a replication task completes, the control center sends an indication of successful replication to the storage device 50 and the storage device 60, respectively. If the storage device 60 has not received the indication, the current replication task has not completed.
- Completion of the replication task means that the storage device 50 has sent the information carried by all the write data requests corresponding to the current time slice number to the storage device 60, and the storage device 60 has finished receiving it.
- When the storage device 60 determines that the storage device 50 has failed, if the current replication task has completed, the storage device 60 can directly take over for the storage device 50, and data consistency is guaranteed. This situation is outside the scope of the embodiments of the present invention.
- However, if the current replication task has not completed, the data in the cache of the storage device 60 needs to be restored to the data corresponding to the number preceding its current time slice number.
- The specific recovery method may be: according to the received address information, search the address information corresponding to the number immediately preceding the current time slice number for address information that is the same as the received address information; if none is found, continue searching the address information corresponding to the next earlier number, and so on until the address information is found, and then obtain the data to be written corresponding to that number.
- the writing module 604 is configured to add a second number to the data to be written and the address information corresponding to the first number, and write the buffer.
- The second number is the number obtained after the current time slice number is modified; in this embodiment, it is also the latest number saved in the cache.
- When the host sends a read data request for the data stored at the address information, the storage device 60 finds that the latest number corresponding to the address information is the second number and sends the data to be written corresponding to the second number to the host. This ensures data consistency.
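The read path just described (serve the host the data under the latest number for the requested address) can be sketched as below, assuming a cache keyed by (number, address); all names and values are illustrative:

```python
# Sketch: after recovery, the consistent data has been re-tagged with the
# second number (here 14), which is now the latest number for the address.
# A host read therefore returns the consistent copy, not the half-
# replicated one.

cache = {                 # (number, lba) -> data in the DR device's cache
    (11, 0xA0): b"v11",
    (14, 0xA0): b"v11",   # re-tagged during recovery: the second number
    (13, 0xB0): b"v13",
}

def read(cache, lba):
    # The latest number that has an entry for this address wins.
    latest = max(n for (n, a) in cache if a == lba)
    return cache[(latest, lba)]

print(read(cache, 0xA0))
```

The lookup rule is the same one the text uses for normal reads; recovery simply makes sure the latest number points at consistent data.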
- The storage device 60 receives the address information sent by the storage device 50. When the storage device 50 fails, the data to be written corresponding to the number preceding the current time slice number is obtained according to the address information, and the data to be written and the address information corresponding to the number preceding the current time slice number are tagged with the second number and stored in the cache. This ensures data consistency.
- FIG. 7 is a schematic diagram of a storage device 700 according to an embodiment of the present invention.
- The storage device 700 may be any storage device known in the prior art; the specific embodiments of the present invention do not limit the specific implementation of the storage device 700.
- Storage device 700 includes:
- a processor 710, a communication interface 720, a memory 730, and a communication bus 740.
- the processor 710, the communication interface 720, and the memory 730 complete communication with each other via the communication bus 740.
- the communication interface 720 is configured to communicate with a network element, such as a host or a switch.
- the processor 710 is configured to execute the program 732.
- program 732 can include program code, the program code including computer operating instructions.
- The processor 710 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
- the memory 730 is configured to store the program 732.
- Memory 730 may include high speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
- the program 732 may specifically include:
- the receiving module 501 is configured to receive a first write data request sent by the host, where the first write data request carries data to be written and address information.
- The reading and writing module 502 is configured to add the first number to the data to be written and the address information and write them to the cache, where the first number is the current time slice number, and to read the data to be written and the address information corresponding to the first number from the cache.
- the current time slice number manager 503 is configured to modify the current time slice number to identify information carried by the subsequent write data request.
- the sending module 504 is configured to send the data to be written and the address information to the storage device of the disaster recovery center.
- FIG. 8 is a schematic diagram of a storage device 800 according to an embodiment of the present invention.
- The storage device 800 may be any storage device known in the prior art.
- The specific embodiment of the present invention does not limit the specific implementation of the storage device 800.
- Storage device 800 includes:
- a processor 810, a communication interface 820, a memory 830, and a communication bus 840.
- the processor 810, the communication interface 820, and the memory 830 complete communication with each other via the communication bus 840.
- the communication interface 820 is configured to communicate with a network element, such as a host or a switch.
- the processor 810 is configured to execute the program 832.
- program 832 can include program code, the program code including computer operating instructions.
- The processor 810 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
- the memory 830 is configured to store the program 832.
- Memory 830 may include high speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
- the program 832 may specifically include:
- the receiving module 601 is configured to receive address information sent by the storage device 50.
- The searching module 602 is configured to: when it is determined that the storage device 50 is faulty, acquire, according to the address information, the data to be written corresponding to the first number, where the address information corresponding to the first number is the same as the received address information, and the first number is a number preceding the current time slice number.
- the writing module 604 is configured to add a second number to the data to be written and the address information corresponding to the first number, and write the buffer.
- For each module in the program 832, refer to the corresponding module in the embodiment shown in FIG. 6; details are not repeated here.
- the disclosed apparatus and method may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of the modules is only a logical function division.
- There may be other division manners; for example, multiple modules or components may be combined or integrated into another device, or some features may be ignored or not executed.
- the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interface, device or module, and may be electrical, mechanical or otherwise.
- The modules described as separate components may or may not be physically separated.
- The components displayed as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network nodes. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module.
- A person skilled in the art may understand that all or part of the steps of implementing the above embodiments may be completed by hardware, or a program may instruct related hardware to complete them; the program may be stored in a computer-readable storage medium.
- the storage medium mentioned may be a read only memory, a magnetic disk or an optical disk or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Quality & Reliability (AREA)
- Human Computer Interaction (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Memory System Of A Hierarchy Structure (AREA)
- Computer Security & Cryptography (AREA)
- Retry When Errors Occur (AREA)
Abstract
Embodiments of the present invention provide a data sending method, a data receiving method, and storage devices, including: a first storage device receives a first write data request sent by a host, the first write data request carrying data to be written and address information; adds a first number to the data to be written and the address information and writes them to a cache, where the first number is the current time slice number; reads the data to be written and the address information corresponding to the first number from the cache; modifies the current time slice number to identify information carried by subsequent write data requests; and sends the data to be written and the address information to a second storage device. This can improve the efficiency of data replication.
Description
Data Sending Method, Data Receiving Method, and Storage Device

Technical Field

The present invention relates to storage technology, and in particular to a data sending method, a data receiving method, and storage devices.

Background

Data disaster recovery, also known as remote data replication technology, refers to establishing an off-site data system that is a usable copy of the local data. When a disaster strikes the local data and the entire application system, at least one usable copy of the critical business data is preserved at the remote site.
A typical data disaster recovery system includes a production center and a disaster recovery center. The production center deploys hosts and storage arrays for normal business operation; the disaster recovery center deploys hosts and storage arrays that take over the business after a disaster strikes the production center. The storage arrays of the production center and the disaster recovery center each contain multiple data volumes, a data volume being a segment of logical storage space mapped from physical storage space. After data generated by the production center's business is written to the production array, it can be replicated over a disaster recovery link to the disaster recovery center and written to the disaster recovery array. To ensure that the data at the disaster recovery center can support business takeover after a disaster, the data replicated to the disaster recovery array must be consistent. Data consistency essentially means that dependencies between write data requests must be preserved. Applications, operating systems, and databases all inherently rely on the logic of such write-request dependencies to run their business. For example, write data request 1 completes first, then write data request 2, in a fixed order; that is, the system ensures that write data request 2 is issued only after write data request 1 has fully returned success. Only then, when a failure interrupts execution, can the inherent mechanisms be relied upon to restore the business. Otherwise a situation may arise in which, when reading data, the data stored by write data request 2 can be read but the data stored by write data request 1 cannot, making the business unrecoverable.

In the prior art, snapshot technology can be used to solve this problem. A snapshot is an image of data at a point in time (the point at which the copy starts). The purpose of a snapshot is to create a state view of a data volume at a specific point in time; through this view only the data of the volume at the creation moment can be seen, and modifications to the volume after that point in time (new data being written) are not reflected in the snapshot view. Using this snapshot view, the data can be replicated. For the production center, because snapshot data is "static", it can take snapshots of the data at successive points in time and then replicate the snapshot data to the disaster recovery center; this accomplishes remote data replication without affecting the continued execution of write data requests at the production center. For the disaster recovery center, the data consistency requirement can also be met. For example, if the data of write data request 2 is replicated to the disaster recovery center successfully while the data of write data request 1 is not, the snapshot data taken before write data request 2 can be used to restore the disaster recovery center's data to its previous state.

Because the production center performs snapshot processing while executing write data requests and saves the generated snapshot data in a data volume dedicated to storing snapshot data, the production center must, when replicating snapshot data to the disaster recovery center, first read the snapshot data from the data volume into the cache and only then send it to the disaster recovery center. However, the data used to generate the snapshot may still be in the cache, yet this data cannot be exploited: every replication must first read the snapshot data from the data volume, so data replication takes a long time and is inefficient.

Summary
Embodiments of the present invention provide a data sending method that can send the information carried by write data requests to a second storage device directly from the cache of a first storage device, which improves the efficiency of data replication.

A first aspect of the embodiments of the present invention provides a data sending method, including:

receiving, by a first storage device, a first write data request sent by a host, the first write data request carrying data to be written and address information;

adding a first number to the data to be written and the address information and writing them to a cache, where the first number is the current time slice number;

reading the data to be written and the address information corresponding to the first number from the cache; modifying the current time slice number to identify information carried by subsequent write data requests; and sending the data to be written and the address information to a second storage device.
In a first possible implementation of the first aspect of the embodiments of the present invention, the first number identifies the current replication task, and the method further includes:

recording a second number, the second number being the number corresponding to the most recently completed replication task before the current replication task.

With reference to the first implementation of the first aspect, a second possible implementation of the first aspect further includes:

reading from the cache the data to be written and the address information corresponding to the numbers after the second number and before the first number; and

sending the data to be written and the address information corresponding to the numbers after the second number and before the first number to the second storage device.

A third possible implementation of the first aspect of the embodiments of the present invention further includes: recording the current time slice number, the current time slice number being used to generate the first number.
A second aspect of the embodiments of the present invention provides a data receiving method, including:

receiving, by a second storage device, address information sent by a first storage device;

when it is determined that the first storage device is faulty, obtaining, by the second storage device according to the address information, data to be written corresponding to a first number, where the address information corresponding to the first number is the same as the received address information, and the first number is a number preceding the current time slice number; and adding a second number to the data to be written and the address information corresponding to the first number and writing them to a cache.

In a first possible implementation of the second aspect of the embodiments of the present invention, the method further includes: recording the current time slice number, the current time slice number being used to generate the second number.

In a second possible implementation of the second aspect of the embodiments of the present invention, the method further includes:

receiving a read data request sent by the host, the read data request containing the received address information; determining that the latest number corresponding to the received address information is the second number; and sending the data to be written corresponding to the second number to the host.
A third aspect of the embodiments of the present invention provides a storage device, including:

a receiving module, configured to receive a first write data request sent by a host, the first write data request carrying data to be written and address information;

a read/write module, configured to add a first number to the data to be written and the address information and write them to a cache, where the first number is the current time slice number, and to read the data to be written and the address information corresponding to the first number from the cache;

a current time slice number manager, configured to modify the current time slice number to identify information carried by subsequent write data requests; and

a sending module, configured to send the data to be written and the address information to a second storage device. In a first possible implementation of the third aspect of the embodiments of the present invention, the first number identifies the current replication task;

the current time slice number manager is further configured to record a second number, the second number being the number corresponding to the most recently completed replication task before the current replication task.

With reference to the first implementation of the third aspect, in a second possible implementation of the third aspect: the read/write module is further configured to read from the cache the data to be written and the address information corresponding to the numbers after the second number and before the first number; and

the sending module is further configured to send the data to be written and the address information corresponding to the numbers after the second number and before the first number to the second storage device.

In a third possible implementation of the third aspect of the embodiments of the present invention, the current time slice number manager is further configured to record the current time slice number, the current time slice number being used to generate the first number.
A fourth aspect of the embodiments of the present invention provides a storage device, including:

a receiving module, configured to receive address information sent by a first storage device;

a searching module, configured to: when it is determined that the first storage device is faulty, obtain, according to the address information, data to be written corresponding to a first number, where the address information corresponding to the first number is the same as the received address information, and the first number is a number preceding the current time slice number; and

a writing module, configured to add a second number to the data to be written and the address information corresponding to the first number and write them to a cache.

In a first possible implementation of the fourth aspect of the embodiments of the present invention, the storage device further includes: a current time slice number manager, configured to record the current time slice number, the current time slice number being used to generate the second number.

In a second possible implementation of the fourth aspect of the embodiments of the present invention, the receiving module is further configured to receive a read data request sent by the host, the read data request containing the received address information;

the searching module is further configured to determine that the latest number corresponding to the received address information is the second number; and

the storage device further includes a sending module, configured to send the data to be written corresponding to the second number to the host.
A fifth aspect of the embodiments of the present invention provides a storage device, including a processor, a memory, and a communication bus,

where the processor and the memory communicate via the communication bus;

the memory is configured to store a program; and

the processor is configured to execute the program, so as to:

receive a first write data request sent by a host, the first write data request carrying data to be written and address information; add a first number to the data to be written and the address information and write them to a cache, where the first number is the current time slice number; read the data to be written and the address information corresponding to the first number from the cache; modify the current time slice number to identify information carried by subsequent write data requests; and send the data to be written and the address information to a second storage device.

In a first possible implementation of the fifth aspect of the embodiments of the present invention, the first number identifies the current replication task, and the processor is further configured to:

record a second number, the second number being the number corresponding to the most recently completed replication task before the current replication task.

With reference to the first implementation of the fifth aspect, in a second possible implementation of the fifth aspect, the processor is further configured to: read from the cache the data to be written and the address information corresponding to the numbers after the second number and before the first number; and send the data to be written and the address information corresponding to the numbers after the second number and before the first number to the second storage device.

In a third possible implementation of the fifth aspect of the embodiments of the present invention, the processor is further configured to record the current time slice number, the current time slice number being used to generate the first number.
A sixth aspect of the embodiments of the present invention provides a storage device, including a processor, a memory, and a communication bus,

where the processor and the memory communicate via the communication bus;

the memory is configured to store a program; and

the processor is configured to execute the program, so as to:

receive address information sent by a first storage device;

when it is determined that the first storage device is faulty, obtain, according to the address information, data to be written corresponding to a first number, where the address information corresponding to the first number is the same as the received address information, and the first number is a number preceding the current time slice number; and add a second number to the data to be written and the address information corresponding to the first number and write them to a cache.

In a first possible implementation of the sixth aspect of the embodiments of the present invention, the processor is further configured to record the current time slice number, the current time slice number being used to generate the second number.

In a second possible implementation of the sixth aspect of the embodiments of the present invention, the processor is further configured to receive a read data request sent by the host, the read data request containing the received address information; determine that the latest number corresponding to the received address information is the second number; and send the data to be written corresponding to the second number to the host.
In the embodiments of the present invention, after the first storage device receives a write data request sent by the host, the information carried by the request includes the data to be written and the address information; the first number is added to the data to be written and the address information, and they are written to the cache, the first number being the current time slice number. When the replication task is triggered, the data to be written and the address information corresponding to the first number are read from the cache and sent to the second storage device. Also when the replication task is triggered, the current time slice number is modified, so that when the first storage device subsequently receives write data requests, it adds to the information they carry a number equal to the modified current time slice number. The cache thus distinguishes the information carried by the write data requests that must be sent to the second storage device from the information carried by the write data requests the first storage device is currently receiving, so the information carried by write data requests can be sent to the second storage device directly from the cache. Because the information is sent directly from the cache, no data needs to be read from the data volume; data replication therefore takes less time, and the efficiency of data replication is improved.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are introduced briefly below. Apparently, the accompanying drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from them without creative effort.

FIG. 1 is a schematic diagram of an application network architecture of a data sending method according to an embodiment of the present invention; FIG. 2 is a flowchart of a data sending method according to an embodiment of the present invention;

FIG. 3 is a flowchart of a data receiving method according to an embodiment of the present invention;

FIG. 4 is a signaling diagram of a data sending method according to an embodiment of the present invention;

FIG. 5 is a schematic structural diagram of a storage device according to an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of another storage device according to an embodiment of the present invention; FIG. 7 is a schematic structural diagram of still another storage device according to an embodiment of the present invention;

FIG. 8 is a schematic structural diagram of yet another storage device according to an embodiment of the present invention.

Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

The data sending method provided by the embodiments of the present invention can be implemented in a storage device. FIG. 1 is a schematic diagram of the system architecture of the data sending method provided by an embodiment of the present invention. As shown in FIG. 1, the production center includes a production host, connecting devices, and a production array (corresponding to the first storage device in the embodiments below); the system architecture of the disaster recovery center is similar to that of the production center, including a disaster recovery host, connecting devices, and a disaster recovery array (corresponding to the second storage device in the embodiments below). In the embodiments of the present invention, there may be more than one disaster recovery center. The production center and the disaster recovery center can transmit data over IP (Internet Protocol) or FC (Fibre Channel). There may be a control center between the production center and the disaster recovery center; the control center may be deployed on the production center side, on the disaster recovery center side, or in a third-party device between the production center and the disaster recovery center. The control center is configured to signal the disaster recovery array to take over host services from the production array when the production array fails.
Both the production host and the disaster recovery host may be any computing device known in the art, such as a server or a desktop computer. An operating system and other application programs are installed inside the host.

The connecting device may include any interface between a storage device and a host known in the art, such as a Fibre Channel switch or another existing switch.

Both the production array and the disaster recovery array may be storage devices known in the art, such as Redundant Arrays of Inexpensive Disks (RAID), Just a Bunch Of Disks (JBOD), or one or more interconnected disk drives of a Direct Access Storage Device (DASD), such as a tape library or a tape storage device of one or more storage units.

The storage space of the production array may include multiple data volumes. A data volume is a segment of logical storage space mapped from physical storage space; for example, a data volume may be a logical unit number (LUN) or a file system. In the embodiments of the present invention, the structure of the disaster recovery array is similar to that of the production array.
Referring to FIG. 1, an embodiment of a data sending method of the present invention is applied to a first storage device, where the first storage device includes a controller, a cache, and a storage medium. The controller is the processor of the first storage device, configured to execute I/O commands and other data services. The cache is memory that sits between the controller and the hard disks; its capacity is smaller than that of the hard disks, but its speed is much higher. The storage medium is the main storage of the first storage device, usually meaning a non-volatile storage medium such as a magnetic disk; in the embodiments of the present invention, all physical storage space contained in the first storage device is called the storage medium. The following steps may specifically be executed by the controller in the first storage device.
Step S101: The first storage device receives a first write data request sent by the host, the first write data request carrying data to be written and address information.

The address information may include a logical block address (LBA). When the first storage device contains multiple data volumes, the address information may also include the ID of a data volume of the first storage device.

Step S102: Add a first number to the data to be written and the address information and write them to the cache, where the first number is the current time slice number.

The first storage device may contain a current time slice number manager, in which the current time slice number is saved. The current time slice number may be represented by a numerical value, such as 0, 1, or 2, or by a letter, such as a, b, or c; this is not limited here.

When the first write data request is received, the first number is added to the data to be written and the address information carried by the first write data request; the first number is assigned from the current time slice number.

After the first number has been added to the information carried by the first write data request, the information carried by the modified first write data request is written to the cache, so that the data to be written, the address information, and the first number carried by the first write data request are all saved in the cache.

In addition, other write data requests may also be received within this period; the first number likewise needs to be added to the information they carry before it is written to the cache. It should be noted that before the current time slice number changes, it is always the first number that is added to the information carried by the write data requests.
Step S103: Read the data to be written and the address information corresponding to the first number from the cache.

When the replication task is triggered, the first storage device can read the data to be written and the address information corresponding to the first number from the cache. It can be understood that there may be more than one piece of data to be written and address information corresponding to the first number.

A replication task means that the first storage device sends, to the second storage device, the information carried by the write data requests received for one data volume within a period of time; the information carried by each of these write data requests has been tagged with a number equal to the current time slice number. The replication task may be triggered by a timer or manually, which is not limited here. The purpose of replication is to send the data to be written carried by the write data requests received by the first storage device to the second storage device for storage, so that when the first storage device fails, the second storage device can take over its work. It can be understood that the address information (for example, the LBA) carried by a write data request also needs to be sent to the second storage device; the LBA indicates the address at which the second storage device stores the data to be written. Because the second storage device has the same physical structure as the first storage device, an LBA that applies to the first storage device also applies to the second storage device.

In the embodiments of the present invention, a replication task is defined for one data volume of the first storage device; when the first storage device contains multiple data volumes, each data volume corresponds to one replication task.

Step S104: Modify the current time slice number to identify information carried by subsequent write data requests. When the replication task is triggered, the current time slice number manager must modify the current time slice number; when subsequent write data requests are received, the information they carry is tagged with another number, which is assigned from the modified current time slice number. The cache can thereby distinguish the information carried by write data requests that must be sent to the second storage device from the information carried by write data requests the first storage device is currently receiving.

It should be noted that there is no fixed order between step S103 and step S104.
Step S105: Send the data to be written and the address information to the second storage device.

The first storage device sends the data to be written and the address information corresponding to the first number, read from the cache, to the second storage device.

Specifically, the first storage device may send all of the read data to be written and address information directly to the second storage device; alternatively, after obtaining the ID of the data volume of the second storage device, it may generate a new write data request from the data to be written and the address information carried by each write data request together with the ID of the data volume of the second storage device, and then send the new requests to the second storage device.

In the embodiments of the present invention, after the first storage device receives a write data request sent by the host, the information carried by the request includes the data to be written and the address information; the first number is added to the data to be written and the address information, and they are written to the cache, the first number being the current time slice number. When the replication task is triggered, the data to be written and the address information corresponding to the first number are read from the cache and sent to the second storage device. Also when the replication task is triggered, the current time slice number is modified, so that when the first storage device subsequently receives write data requests, it adds to the information they carry a number equal to the modified current time slice number. The cache thus distinguishes the information carried by the write data requests that must be sent to the second storage device from the information carried by the write data requests the first storage device is currently receiving, so the information carried by write data requests can be sent to the second storage device directly from the cache. Because the information is sent directly from the cache, no data needs to be read from the data volume; data replication therefore takes less time, and the efficiency of data replication is improved.
It can be understood that, in the above embodiment, when a replication task is triggered, the first storage device sends the data to be written and the address information corresponding to the current time slice number to the second storage device while modifying the current time slice number to identify information carried by subsequent write data requests. When the next replication task is triggered, it sends the data to be written and the address information corresponding to the modified current time slice number to the second storage device and modifies the current time slice number again. This guarantees that the first storage device sends the information carried by the write data requests it receives to the second storage device completely, in batches.

However, when there are multiple disaster recovery centers, suppose the storage device corresponding to the second disaster recovery center is a third storage device; the first storage device must also send the information carried by the write data requests it receives to the third storage device. For the second storage device, when a replication task is triggered, the current time slice number manager modifies the current time slice number; at that moment, the numbers assigned from the current time slice number for both the second storage device and the third storage device are the modified number. However, the information carried by the write data requests corresponding to the number before the modification has not yet been sent to the third storage device.
Therefore, for a scenario with multiple disaster recovery centers, the above embodiment may further include the following steps. Step S106: Record a second number, the second number being the number corresponding to the most recently completed replication task before the current replication task.

In the above embodiment, the first number equals the current time slice number and can identify the current replication task. The current replication task means that the first storage device sends, to the second storage device, the information carried by the write data requests received for one data volume within the current time period; the information carried by each of these write data requests has been tagged with a number equal to the current time slice number.

The second number is the number corresponding to the most recently completed replication task before the current one.

When there are multiple disaster recovery centers, the current time slice number may have been modified when a replication task was initiated toward the storage device of another disaster recovery center; the number corresponding to the last completed replication task therefore needs to be recorded.

If other numbers exist between the second number and the first number, the information carried by the write data requests corresponding to those numbers has not been sent to the second storage device, and step S107 needs to be executed.
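The selection of these gap numbers, as step S107 will describe, can be sketched as follows; the dictionary layout is an assumption made for illustration:

```python
# Sketch: find cache entries whose number lies strictly between the
# second number (last completed replication task) and the first number
# (current task). Those entries still owe replication to this center.

entries = {                 # (number, lba) -> data
    (3, 0x10): b"a",        # second number: already replicated earlier
    (4, 0x10): b"b",        # gap number: never sent to this center
    (5, 0x20): b"c",        # first number: current replication task
}

def pending(entries, second, first):
    # numbers strictly after `second` and strictly before `first`
    return {k: v for k, v in entries.items() if second < k[0] < first}

print(pending(entries, second=3, first=5))
```

Only the entry tagged 4 is returned; entries under the first number itself are handled by the normal replication path.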
Step S107: Read from the cache the data to be written and the address information corresponding to the numbers after the second number and before the first number.

The specific reading process is similar to step S103 and is not repeated here.

It should be noted that step S107 and step S103 need not occur in any fixed order and may even be executed simultaneously.

Step S108: Send the data to be written and the address information corresponding to the numbers after the second number and before the first number to the second storage device.

The specific sending process is similar to step S105 and is not repeated here.

In this embodiment of the present invention, in addition to sending the information carried by the write data requests corresponding to the current time slice number to the second storage device, the information carried by the write data requests corresponding to the numbers between the number of the last completed replication task and the current time slice number can also be sent to the second storage device. This suits scenarios with multiple disaster recovery centers and guarantees the completeness of data replication. Referring to FIG. 2, FIG. 2 shows an embodiment of a data receiving method of the present invention, applied to the scenario in which the disaster recovery center receives information carried by write data requests sent by the production center. The method may include:
Step S201: The second storage device receives address information sent by the first storage device.

Specifically, the second storage device may receive the data to be written and the address information sent by the first storage device, or it may receive a write data request sent by the first storage device, where the write data request includes the data to be written and the address information, and the address information may be a logical block address (LBA). When the second storage device includes multiple data volumes, the address information may also include the ID of a data volume of the second storage device. It can be understood that there may be more than one piece of address information here.

After receiving the data to be written and the address information, the second storage device adds to them a number equal to its current time slice number and writes them to the cache, so that the number, the data to be written, and the address information are saved together in the cache.

It should be noted that the second storage device also contains a current time slice number manager, in which the current time slice number is saved. The current time slice number may be represented by a numerical value, such as 0, 1, or 2, or by a letter, such as a, b, or c; this is not limited here. The current time slice number here need not be related to the current time slice number in the first storage device.
Step S202: When it is determined that the first storage device is faulty, the second storage device obtains, according to the address information, the data to be written corresponding to a first number, where the address information corresponding to the first number is the same as the received address information, and the first number is a number preceding the current time slice number.

Normally, if both the first storage device and the second storage device run properly, the second storage device receives exactly as much write-request information as the first storage device sends, tags the information carried by each write data request with a number equal to the current time slice number, and saves it in the cache. However, if the first storage device fails, the second storage device may have received only part of the data to be written corresponding to the first storage device's current time slice number; in that case, the data held by the second storage device may be unreliable, and if it directly took over for the first storage device, data consistency could not be guaranteed. For example, if at that moment the host sent the second storage device a read data request for the data stored at the address information, the second storage device would look up the latest number corresponding to the address information and send the host the data to be written corresponding to the current time slice number, yet that data would be unreliable. Therefore, the data in the cache of the second storage device must be restored to the data corresponding to the number preceding the second storage device's current time slice number.
Specifically, the failure of the first storage device may be determined by the control center sending the second storage device a signal indicating that the first storage device is faulty and that the second storage device must take over host services from the first storage device.

Normally, when a replication task completes, the control center sends an indication of successful replication to the first storage device and the second storage device, respectively. If the second storage device has not received this indication, the current replication task has not completed. Completion of a replication task means that the first storage device has sent the information carried by all write data requests corresponding to the current time slice number to the second storage device, and the second storage device has finished receiving it.

When the second storage device determines that the first storage device has failed, if the current replication task has completed, the second storage device can directly take over for the first storage device, and data consistency is guaranteed. That case is outside the scope of this embodiment of the present invention.

However, if the current replication task has not completed, the data in the cache of the second storage device needs to be restored to the data corresponding to the number preceding its current time slice number.

A specific recovery method may be: according to the received address information, search the address information corresponding to the number immediately preceding the current time slice number for address information identical to the received address information; if none is found, continue searching the address information corresponding to the next earlier number, until the address information is found, and then obtain the data to be written corresponding to that number.
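The recovery lookup just described can be sketched as a backward scan over numbers; the cache layout and values below are illustrative assumptions:

```python
# Sketch: starting from the number just before the current time slice
# number, walk backwards until an entry with the same address
# information is found, then return that number and its data.

cache = {               # (number, lba) -> data in the DR device's cache
    (11, 0xA0): b"v11",
    (12, 0xB0): b"v12",
}

def recover(cache, lba, current_number):
    for n in range(current_number - 1, 0, -1):   # previous numbers only
        if (n, lba) in cache:
            return n, cache[(n, lba)]
    return None   # address never written before the current number

print(recover(cache, 0xA0, current_number=13))
```

For address 0xA0 the scan skips number 12 (no entry for that address) and stops at number 11, which is the consistent copy to restore.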
Step S203: Add a second number to the data to be written and the address information corresponding to the first number, and write them to the cache.

The second number is the number obtained by modifying the current time slice number; in this embodiment, it is also the latest number saved in the cache. When the host sends the second storage device a read data request for the data stored at the address information, the second storage device finds, by searching, that the latest number corresponding to the address information is the second number, and sends the data to be written corresponding to the second number to the host. Data consistency is thereby guaranteed.

In this embodiment of the present invention, the second storage device receives the address information sent by the first storage device; when the first storage device fails, it obtains, according to that address information, the data to be written corresponding to the number preceding the current time slice number, tags that data to be written and address information with a second number, and saves them in the cache. Data consistency is thereby guaranteed.
Referring to FIG. 3, FIG. 3 shows an embodiment of a data sending method of the present invention. In this embodiment, to distinguish the cache in the production array from the cache in the disaster recovery array, the cache in the production array is called the first cache and the cache in the disaster recovery array is called the second cache.

As shown in FIG. 3, the method includes:

Step S301: The production array receives a write data request A sent by the production host.

The write data request A includes a volume ID, an address-to-write A, and data-to-write A. The address-to-write A is the logical address of the production array to which data-to-write A is to be written, for example an LBA. Normally, when executing the write data request A, the production array needs to translate the LBA into a PBA (Physical Block Address) and then write data-to-write A into the storage medium according to the PBA. The volume ID is the ID of the data volume corresponding to write data request A. This embodiment takes a production array containing one volume (hereinafter called the primary volume) as an example, so the information carried by write data request A includes the primary volume ID, address-to-write A, and data-to-write A.
Step S302: The production array modifies write data request A into write data request A', where write data request A' contains the information carried by write data request A plus the first number.

In this embodiment of the present invention, the controller of the production array may contain a Current Time Period Number (CTPN) manager, in which the current time slice number is recorded. The current time slice number is used to generate the first number; specifically, the first number equals the current time slice number.

After the production array receives write data request A, it modifies write data request A into write data request A'. Specifically, the modification may consist of adding the first number to the information carried by write data request A; for example, the current time slice number may be 1, in which case the first number is also 1.

Optionally, a timestamp may be recorded when write data request A is received and matched against a pre-saved number sequence, thereby determining the number corresponding to the timestamp. Specifically, the number sequence may be a mapping table or take another form, which is not limited here. The number sequence includes multiple numbers, each corresponding to an interval of timestamps, as shown in Table 1:

Taking the case where the timestamp of receiving write data request A is 9:30 as an example, the corresponding number is 1, and write data request A can be modified into write data request A' according to that number.
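The optional timestamp-to-number matching can be sketched as below. Table 1 is not reproduced in this text, so the intervals in the sketch are invented for illustration; only the 9:30 example comes from the description:

```python
# Sketch: a number sequence mapping timestamp intervals to numbers.
# Each entry is (start, end, number); a timestamp inside the interval
# is assigned that interval's number.

number_sequence = [
    ("9:00", "9:59", 1),     # invented interval containing 9:30
    ("10:00", "10:59", 2),   # invented second interval
]

def number_for(ts):
    for start, end, number in number_sequence:
        if start <= ts <= end:
            return number
    raise ValueError("timestamp outside known intervals")

print(number_for("9:30"))
```

With these intervals, the 9:30 timestamp from the text maps to number 1, matching the example in the description.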
Step S303: The production array writes write data request A' into the first cache, so that the information carried by write data request A' is saved in the first cache. The information carried by write data request A' includes the first number, the primary volume ID, address-to-write A, and data-to-write A.

In this embodiment of the present invention, there may be multiple write data requests corresponding to the first number. Before the current time slice number recorded in the CTPN manager is modified, the first number is added to the information carried by all received write data requests.

It can be understood that after write data request A is received, a write data request B may also be received and modified into write data request B', so that write data request B' also contains the first number; a write data request C may likewise be received and modified into write data request C', so that write data request C' also contains the first number.

For example, after write data request A', write data request B', and write data request C' are written into the first cache, the information saved in the first cache may be as shown in Table 2:

It should be noted that this embodiment of the present invention takes a production array containing one data volume (which may be called the primary volume) as an example; the data-volume IDs carried by write data request A', write data request B', and write data request C' are all the primary volume ID. In another embodiment of the present invention, the production array may contain multiple data volumes, so the data-volume IDs carried by write data request A, write data request B, and write data request C may differ. In addition, Table 2 is only one example of the form in which the information carried by write data requests is saved in the first cache; it may also be saved in the form of a tree, which is not limited here.

Taking Table 2 as an example, the number, volume ID, and address-to-write can be regarded as the index of Table 2. The corresponding data to be written can be looked up through this index, and when the indexes are the same, the corresponding data to be written should also be the same. Therefore, when a new write data request is written, it must be judged whether the first cache already stores information whose number, volume ID, and address-to-write are all the same as those of the new write data request; if so, the original information is overwritten with the information carried by the new write data request. It can be understood that when write data request A', write data request B', and write data request C' are written into the first cache, it is likewise necessary to judge whether their number, volume ID, and address-to-write are the same as information already saved in the first cache; because they are not, write data request A', write data request B', and write data request C' can all be written into the first cache.

For example, if a write data request D is received at this point, the write data request D containing the primary volume ID, address-to-write B, and data-to-write D, it is modified into write data request D', so that write data request D' also contains the first number. Then, when write data request D' is written into the first cache, it must be judged whether the first cache stores information whose number, volume ID, and address-to-write are all the same as those of write data request D'; if so, the original information is overwritten with the data carried by write data request D'. Because the number, volume ID, and address-to-write carried by write data request D' are the same as the number, volume ID, and address-to-write contained in write data request B', the information of write data request D' overwrites the information of write data request B' in the first cache.
1 | primary volume ID | address-to-write C | data-to-write C
Step S304: When the replication task is triggered, the production array modifies the current time slice number contained in the CTPN manager; for example, the current time slice number may be changed from 1 to 2.

To distinguish the current time slice number of the production array from the current time slice number of the disaster recovery array, in this embodiment of the present invention the current time slice number of the production array is called the first current time slice number and the current time slice number of the disaster recovery array is called the second current time slice number.

It can be understood that after the first current time slice number is changed from 1 to 2, the information carried by write data requests received thereafter is correspondingly tagged with the number 2. For example, a write data request E is received, the write data request E containing the primary volume ID, address-to-write A, and data-to-write E, and is modified into write data request E' so that write data request E' also contains the number 2; a write data request F is received, the write data request F containing the primary volume ID, address-to-write F, and data-to-write F, and is modified into write data request F' so that write data request F' also contains the number 2. After write data request E' and write data request F' are written into the first cache, the information saved in the first cache may be as shown in Table 4:

Step S305: The disaster recovery array modifies the second current time slice number contained in its CTPN manager; for example, it may be changed from 11 to 12.

In this embodiment of the present invention, the disaster recovery array may also contain its own CTPN manager. When the production array's replication task is triggered, the CTPN manager in the production array modifies the first current time slice number; at this time, the control center may also send a control signal to the disaster recovery array so that the disaster recovery array likewise modifies the second current time slice number contained in its own CTPN manager. Therefore, there is no fixed order between step S305 and step S304.
Step S306A: The production array reads the information carried by the write data requests corresponding to the first number from the first cache.

Specifically, as described above, the information carried by the write data requests corresponding to the first number is as shown in Table 3.

Step S306B: The production array obtains the ID of the data volume of the disaster recovery array to be written.

Step S306C: The production array generates new write data requests according to the ID of the data volume and the information carried by the write data requests corresponding to the first number.

Specifically, a write data request A'' can be generated from the data volume ID, address-to-write A, and data-to-write A; a write data request D'' can be generated from the data volume ID, address-to-write B, and data-to-write D; and a write data request C'' can be generated from the data volume ID, address-to-write C, and data-to-write C.

In another embodiment of the present invention, both the production array and the disaster recovery array may contain multiple data volumes, in which case the data-volume IDs contained in write data request A'', write data request D'', and write data request C'' may differ. However, each data-volume ID in the disaster recovery array corresponds one-to-one with a data-volume ID in the production array.

Step S307: The production array sends the generated new write data requests to the disaster recovery array.

Specifically, the production array sends write data request A'', write data request D'', and write data request C'' to the disaster recovery array.
Step S308: The disaster recovery array modifies the received write data requests.

For example, according to the second current time slice number recorded in the CTPN manager, write data request A'' may be modified into write data request A'''. Specifically, the modification may consist of adding the number 12 to the information carried by write data request A''.

Similarly, the number 12 can be added to the information carried by write data request D'' to modify it into write data request D''', and the number 12 can be added to the information carried by write data request C'' to modify it into write data request C'''.

Step S309: The disaster recovery array writes the modified write data requests into the second cache.

Specifically, the information saved in the second cache may be as shown in Table 5:

Table 5
Step S310: The disaster recovery array writes the data to be written into the storage medium corresponding to the address-to-write, according to the address-to-write of each write data request.

Normally, because cache space is limited, when its space utilization reaches a certain threshold, the data in the cache must be written to disk. Specifically, data-to-write A is written into the storage medium corresponding to address-to-write A, data-to-write D is written into the storage medium corresponding to address-to-write B, and data-to-write C is written into the storage medium corresponding to address-to-write C.

Step S311: The production array writes the data to be written into the storage medium corresponding to the address-to-write, according to the address-to-write of each write data request.

Similarly, when the space utilization of the production array's cache reaches a certain threshold, the data in the cache must also be written to disk. As described above, the first cache now holds the following information:

2 | primary volume ID | address-to-write F | data-to-write F

Specifically, for write data requests with the same volume ID and the same address-to-write but different numbers, the data to be written carried by the write data request with the smaller number may be written first and the data to be written carried by the write data request with the larger number written afterwards, for example writing data-to-write D first and then data-to-write E; alternatively, the data to be written carried by the write data request with the larger number may be written directly, without writing the data to be written carried by the write data request with the smaller number, for example writing data-to-write E directly.
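Either flush option described above can be sketched as follows; the cache layout is illustrative, and the sketch implements the ascending-order variant, in which writing smaller numbers first means the highest-numbered data ends up on disk:

```python
# Sketch: flushing the first cache to disk. When the same volume and
# address appear under different numbers, writing in ascending number
# order leaves the newest data on disk, which is equivalent in effect
# to writing only the highest-numbered data.

first_cache = {
    (1, "primary", "A"): b"data-A",
    (2, "primary", "A"): b"data-E",   # newer write to the same address
    (2, "primary", "F"): b"data-F",
}

def flush(cache):
    disk = {}   # (volume, address) -> data actually on disk
    for (number, vol, addr), data in sorted(cache.items()):
        disk[(vol, addr)] = data      # ascending numbers: newest wins
    return disk

print(flush(first_cache)[("primary", "A")])
```

Address A ends up holding data-to-write E, exactly as the text requires for both flush options.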
There is no fixed order between step S310 and step S311.
步骤 S312: 当复制任务触发时,生产阵列修改其 CTPN管理器包含的 第一当前时间片编号; 举例来说, 可以将当前时间片编号由 2修改为 3。
可以理解的是, 当生产阵列的 CTPN管理器中的第一当前时间片编号 由 2修改为 3之后, 相应地, 此后生产阵列接收到的写数据请求携带的信 息中都会加上编号 3。
Step S313: the disaster recovery array modifies the second current time slice number held in its CTPN manager; for example, the second current time slice number may be changed from 12 to 13.
Understandably, after the second current time slice number in the disaster recovery array's CTPN manager is changed from 12 to 13, number 13 is added to the information carried in the write data requests the disaster recovery array receives thereafter.
Step S314: the production array reads the information carried in the write data requests corresponding to number 2, generates the corresponding write data requests, and sends them to the disaster recovery array.
Specifically, as described above, the information carried in the write data requests corresponding to number 2 comprises the information carried in write data request E and the information carried in write data request F. As before, after obtaining the ID of the data volume of the disaster recovery array, the production array may generate write data request E'' from the data volume ID, to-be-written address A, and to-be-written data E, and write data request F'' from the data volume ID, to-be-written address F, and to-be-written data F. The write data requests the production array sends to the disaster recovery array are therefore write data request E'' and write data request F''.
It should be noted that, in this embodiment of the present invention, the production array does not send write data requests to the disaster recovery array in any particular order; they may be sent randomly. Specifically, write data request E'' may be sent first and then write data request F'', or write data request F'' first and then write data request E''.
As described above, the second current time slice number in the disaster recovery array's CTPN manager is now 13. After receiving write data request E'', the disaster recovery array therefore needs to modify it into write data request E''', which contains number 13; likewise, after receiving write data request F'', it needs to modify it into write data request F''', which contains number 13.
Step S315: the disaster recovery array receives an instruction to take over host services from the production array.
In this embodiment of the present invention, if the production array fails, the disaster recovery array must take over host services from it, and must therefore satisfy the data consistency requirement.
As shown by step S314, in the current replication task the write data requests the disaster recovery array needs to receive comprise write data request E'' and write data request F''. If both E'' and F'' have been modified and successfully written into the second cache before the disaster recovery array begins to take over host services from the production array, the current replication cycle has completed and the data consistency requirement is satisfied.
If, however, the production array fails after the disaster recovery array has modified write data request E'' into E''' and successfully written it into the second cache, but before F''' has been successfully written into the second cache, and the disaster recovery array begins to take over host services, then the current replication task has not completed and the data consistency requirement is not satisfied. Likewise, if the production array fails after the disaster recovery array has modified write data request F'' into F''' and successfully written it into the second cache, but before E''' has been successfully written, and the disaster recovery array begins to take over host services, the current replication task has likewise not completed and the data consistency requirement is again not satisfied.
In that case, the data in the disaster recovery array's cache needs to be restored to its state at the completion of the replication task corresponding to number 12. The following takes as an example the case where write data request E'' has been modified into E''' and successfully written into the second cache, while F''' has not been successfully written into the second cache.
Step S316: the disaster recovery array obtains the to-be-written addresses carried in the write data requests that were successfully written into the second cache during the current replication cycle.
As described above, in the replication task corresponding to number 13, write data request E''' was successfully written into the second cache; the to-be-written address it carries is to-be-written address A.
Step S317: according to the to-be-written address, the disaster recovery array searches the information carried in the write data requests corresponding to the previous number for a match, and finds a to-be-written address identical to to-be-written address A.
If a to-be-written address identical to the given to-be-written address is found, step S318 is performed; if not, the search continues in the information carried in the write data requests corresponding to the number before that (for example, number 11), until a to-be-written address identical to the to-be-written address A carried by write data request E''' is found.
As described above, the information carried in the write data requests corresponding to number 12 is shown in Table 5, in which the to-be-written address carried by write data request A''' is identical to the to-be-written address carried by write data request E'''.
Understandably, when the disaster recovery array contains multiple data volumes and the information carried in each write data request includes a data volume ID, the condition is satisfied only when both the to-be-written address and the data volume ID match.
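The backward search of steps S316–S317 can be sketched as follows. A minimal illustrative sketch; the function signature and the `(number, volume_id, address, data)` row layout are assumptions, not the patent's implementation.

```python
def find_rollback_data(entries, current_number, address, volume_id=None):
    """For an address written during the unfinished replication cycle,
    walk backwards through earlier time slice numbers until a cached
    write to the same address (and, if given, the same volume) is found.
    Returns that older data, or None if no earlier write exists."""
    for number in range(current_number - 1, 0, -1):
        for n, vol, addr, data in entries:
            if (n == number and addr == address
                    and (volume_id is None or vol == volume_id)):
                return data
    return None


entries = [
    (12, "secondary", "addr_A", "data_A"),
    (12, "secondary", "addr_B", "data_D"),
    (13, "secondary", "addr_A", "data_E"),   # written in the incomplete cycle
]
# Number 13's cycle is incomplete; roll address A back to number 12's data.
assert find_rollback_data(entries, 13, "addr_A") == "data_A"
```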
Step S318: according to the information containing the matched to-be-written address, a new write data request is generated and written into the second cache; the new write data request contains the modified number.
For example, the information read from the second cache includes to-be-written address A and to-be-written data A (and may also include the secondary volume ID). From the information read, plus the modified number (for example, the number modified from 13 to 14), a new write data request can be generated. After the new write data request is written into the second cache, the correspondence held in the cache is as shown in Table 6:
12 | secondary volume ID | to-be-written address B | to-be-written data D
12 | secondary volume ID | to-be-written address C | to-be-written data C
13 | secondary volume ID | to-be-written address A | to-be-written data E
14 | secondary volume ID | to-be-written address A | to-be-written data A

Table 6
When the host sends the disaster recovery array a read data request asking to read the data volume whose ID is the secondary volume ID at to-be-written address A, the disaster recovery array searches the second cache for the to-be-written data whose data volume ID is the secondary volume ID, whose to-be-written address is to-be-written address A, and whose number is the newest, and sends it to the host. In this embodiment of the present invention, that means sending the host to-be-written data A, corresponding to number 14, from the second cache.
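The read path just described — serve the data with the newest number for the requested volume and address — can be sketched as below. Illustrative only; the function name and the `(number, volume_id, address, data)` row layout are assumptions.

```python
def read_latest(entries, volume_id, address):
    """Serve a host read from the cache: among all cached writes to the
    same volume and address, return the data tagged with the largest
    (newest) time slice number, or None if there is no match."""
    matches = [(n, data) for n, vol, addr, data in entries
               if vol == volume_id and addr == address]
    # max on (number, data) tuples picks the entry with the largest number.
    return max(matches)[1] if matches else None


entries = [
    (13, "secondary", "addr_A", "data_E"),
    (14, "secondary", "addr_A", "data_A"),   # roll-back entry, newest number
]
assert read_latest(entries, "secondary", "addr_A") == "data_A"
```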
In this embodiment of the present invention, the production array can send the information carried in received write data requests to the disaster recovery array directly from the cache, without having to read the relevant information from a data volume. This improves the efficiency of data replication while also guaranteeing data consistency for the disaster recovery array.
In the prior art, data replication is implemented through snapshot data. This requires that, every time the production array executes a write data request, the data carried in the request is first placed in the cache; the old data stored at the to-be-written address carried in the request is then read out and stored in a data volume; and the cached data is finally written to the to-be-written address. Only after these operations complete can a response message for the write data request be returned. Because the snapshot processing step is added, the latency of processing write data requests is increased. In the embodiments of the present invention, no snapshot processing of the data is needed; although write data requests are modified, this takes little time. Compared with the prior art, the embodiments of the present invention therefore reduce the latency of processing write data requests.

Referring to FIG. 5, FIG. 5 is a schematic structural diagram of a storage device 50 according to an embodiment of the present invention. As shown in FIG. 5, the storage device 50 includes: a receiving module 501, a read/write module 502, a current time slice number manager 503, and a sending module 504.
The receiving module 501 is configured to receive a first write data request sent by a host, the first write data request carrying to-be-written data and address information.
The address information may include a logical block address (LBA); when the storage device 50 contains multiple data volumes, the address information may also include the ID of a data volume of the storage device 50.
The read/write module 502 is configured to add a first number to the to-be-written data and address information and write them into a cache, where the first number is the current time slice number; and to read the to-be-written data and address information corresponding to the first number from the cache.
The storage device 50 may contain a current time slice number manager 503, which stores the current time slice number. The current time slice number may be represented numerically, for example 0, 1, 2, or alphabetically, for example a, b, c; this is not limited here.
When the first write data request is received, the first number is added to the to-be-written data and address information carried in the first write data request; the first number takes its value from the current time slice number.
After the first number is added to the information carried in the first write data request, the information carried in the modified first write data request is written into the cache, so that the to-be-written data, the address information, and the first number carried in the first write data request are all stored in the cache.
In addition, other write data requests may be received during the same period; the first number likewise needs to be added to the information they carry before it is written into the cache. It should be noted that, until the current time slice number changes, the number added to the information carried in write data requests is always the first number.
When a replication task is triggered, the storage device 50 can read the to-be-written data and address information corresponding to the first number from the cache; understandably, there may be more than one entry of to-be-written data and address information corresponding to the first number.
A replication task means that the storage device 50 sends, to the storage device of the disaster recovery center, the information carried in the write data requests received by one data volume over a period of time; the information carried by all of these write data requests has been tagged with the same number as the current time slice number. A replication task may be triggered by a timer or triggered manually; this is not limited here. The purpose of replication is to send the to-be-written data carried in the write data requests received by the storage device 50 to the disaster recovery center's storage device for storage, so that when the storage device 50 fails, the disaster recovery center's storage device can take over the work of the storage device 50. Understandably, the address information carried in the write data requests (for example, the LBA) also needs to be sent to the disaster recovery center's storage device; the LBA indicates the address at which the disaster recovery center's storage device stores the to-be-written data. The disaster recovery center's storage device has the same physical structure as the storage device 50, so an LBA valid for the storage device 50 is also valid for the disaster recovery center's storage device.
In this embodiment of the present invention, a replication task pertains to one data volume of the storage device 50; when the storage device 50 contains multiple data volumes, each data volume corresponds to its own replication task.
The current time slice number manager 503 is configured to modify the current time slice number so as to identify the information carried in subsequent write data requests.
When a replication task is triggered, the current time slice number manager 503 needs to modify the current time slice number; the information carried in subsequently received write data requests is then tagged with another number, which takes its value from the modified current time slice number. In this way, the information carried in the write data requests that need to be sent to the disaster recovery center's storage device can be distinguished, in the cache, from the information carried in the write data requests the storage device 50 is currently receiving.
The sending module 504 is configured to send the to-be-written data and address information to the storage device of the disaster recovery center.
The storage device 50 sends the to-be-written data and address information corresponding to the first number, read from the cache, to the disaster recovery center's storage device.
Specifically, the storage device 50 may send all of the to-be-written data and address information it reads directly to the disaster recovery center's storage device. Alternatively, after obtaining the ID of a data volume of the disaster recovery center's storage device, it may generate a new write data request from the to-be-written data and address information carried in each write data request together with that data volume ID, and then send the new write data requests to the disaster recovery center's storage device.
In this embodiment of the present invention, after the storage device 50 receives a write data request sent by the host, where the information carried in the write data request includes to-be-written data and address information, it adds a first number to the to-be-written data and address information and writes them into the cache, the first number being the current time slice number. When a replication task is triggered, it reads the to-be-written data and address information corresponding to the first number from the cache and sends them to the disaster recovery center's storage device. Also when the replication task is triggered, the current time slice number is modified, so that when the storage device 50 subsequently receives write data requests it adds, to the information they carry, a number equal to the modified current time slice number. The information carried in the write data requests that need to be sent to the disaster recovery center's storage device is thereby distinguished, in the cache, from the information carried in the write data requests the storage device 50 is currently receiving, so the information carried in write data requests can be sent to the disaster recovery center's storage device directly from the cache. Because the information is sent directly from the cache, no data needs to be read from a data volume; data replication therefore takes less time and its efficiency is improved.

Referring to FIG. 6, FIG. 6 is a schematic structural diagram of a storage device 60 according to an embodiment of the present invention. As shown in FIG. 6, the storage device 60 includes: a receiving module 601, a search module 602, and a writing module 604.
The receiving module 601 is configured to receive address information sent by the storage device 50.
Specifically, the storage device 60 may receive to-be-written data and address information sent by the storage device 50, or may receive a write data request sent by the storage device 50, where the write data request includes to-be-written data and address information and the address information may be a logical block address (LBA). When the storage device 60 includes multiple data volumes, the address information may also include the ID of a data volume of the storage device 60. Understandably, there may be more than one piece of address information.
After receiving the to-be-written data and address information, the storage device 60 adds to them a number equal to the current time slice number and writes them into the cache, so that the cache holds that number together with the to-be-written data and address information.
It should be noted that the storage device 60 may also contain a current time slice number manager 603, which stores the current time slice number. The current time slice number may be represented numerically, for example 0, 1, 2, or alphabetically, for example a, b, c; this is not limited here. The current time slice number here may have no relationship to the current time slice number in the storage device 50.
The search module 602 is configured to: when it is determined that the storage device 50 has failed, obtain, according to the address information, the to-be-written data corresponding to a first number, where the address information corresponding to the first number is identical to the received address information, and the first number precedes the current time slice number.
Normally, if both the storage device 50 and the storage device 60 are operating correctly, the storage device 60 can receive the information carried in every write data request the storage device 50 sends, add to each a number equal to the current time slice number, and store it in the cache. If, however, the storage device 50 fails, the storage device 60 may have received only part of the to-be-written data corresponding to the storage device 50's current time slice number. In that case, the data held by the storage device 60 may not be genuine, and if it took over the storage device 50's work directly, data consistency could not be guaranteed. For example, if the host then sent the storage device 60 a read data request asking for the data stored at that address information (for example, an LBA), the storage device 60 would look up the newest number corresponding to that address information and send the host the to-be-written data corresponding to the current time slice number — but that data is not genuine. The data in the storage device 60's cache therefore needs to be restored to the data corresponding to a number preceding the storage device 60's current time slice number.
Specifically, the failure of the storage device 50 may be determined by the control center sending the storage device 60 a signal indicating that the storage device 50 has failed and that the storage device 60 must take over host services from the storage device 50.
Normally, when a replication task completes, the control center may send both the storage device 50 and the storage device 60 an indication that replication succeeded. If the storage device 60 has not received this indication, the current replication task has not completed. A replication task is complete when the storage device 50 has sent the storage device 60 the information carried in all of the write data requests corresponding to the current time slice number, and the storage device 60 has finished receiving it.
When the storage device 60 determines that the storage device 50 has failed, if the current replication task has completed, the storage device 60 can take over the storage device 50's work directly and data consistency is guaranteed. That case is outside the scope of this embodiment of the present invention.
If, however, the current replication task has not completed, the data in the storage device 60's cache needs to be restored to the data corresponding to a number preceding its current time slice number.
A specific way to restore it is: according to the received address information, search the address information corresponding to the number immediately preceding the current time slice number for identical address information; if none is found, continue searching the address information corresponding to the number before that, until the address information is found; then obtain the to-be-written data corresponding to that number.
The writing module 604 is configured to add a second number to the to-be-written data and address information corresponding to the first number and write them into the cache.
Here, the second number is the number obtained by modifying the current time slice number; in this embodiment it is the newest number held in the cache. When the host sends the storage device 60 a read data request asking for the data stored at that address information (for example, an LBA), the storage device 60 finds by searching that the newest number corresponding to the address information is the second number, and sends the host the to-be-written data corresponding to the second number. Data consistency is thereby guaranteed.
In this embodiment of the present invention, the storage device 60 receives address information sent by the storage device 50; when the storage device 50 fails, it obtains, according to that address information, the to-be-written data corresponding to a number preceding the current time slice number, adds a second number to that to-be-written data and address information, and stores them in the cache. Data consistency is thereby guaranteed.

Referring to FIG. 7, an embodiment of the present invention provides a schematic diagram of a storage device 700. The storage device 700 may include storage devices known in the current art; the specific embodiments of the present invention do not limit the specific implementation of the storage device 700. The storage device 700 includes:
a processor 710, a communications interface 720, a memory 730, and a communication bus 740.
The processor 710, the communications interface 720, and the memory 730 communicate with one another through the communication bus 740.
The communications interface 720 is configured to communicate with network elements, for example with a host or a switch.
The processor 710 is configured to execute a program 732.
Specifically, the program 732 may include program code, and the program code includes computer operating instructions.
The processor 710 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
The memory 730 is configured to store the program 732. The memory 730 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory.
The program 732 may specifically include:
the receiving module 501, configured to receive a first write data request sent by a host, the first write data request carrying to-be-written data and address information;
the read/write module 502, configured to add a first number to the to-be-written data and address information and write them into a cache, where the first number is the current time slice number, and to read the to-be-written data and address information corresponding to the first number from the cache;
the current time slice number manager 503, configured to modify the current time slice number so as to identify the information carried in subsequent write data requests; and
the sending module 504, configured to send the to-be-written data and address information to the storage device of the disaster recovery center.
For the specific implementation of each module in the program 732, refer to the corresponding module in the embodiment shown in FIG. 5; details are not repeated here.
Referring to FIG. 8, an embodiment of the present invention provides a schematic diagram of a storage device 800. The storage device 800 may include storage devices known in the current art; the specific embodiments of the present invention do not limit the specific implementation of the storage device 800. The storage device 800 includes:
a processor 810, a communications interface 820, a memory 830, and a communication bus 840.
The processor 810, the communications interface 820, and the memory 830 communicate with one another through the communication bus 840.
The communications interface 820 is configured to communicate with network elements, for example with a host or a switch.
The processor 810 is configured to execute a program 832.
Specifically, the program 832 may include program code, and the program code includes computer operating instructions.
The processor 810 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
The memory 830 is configured to store the program 832. The memory 830 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory.
The program 832 may specifically include:
the receiving module 601, configured to receive address information sent by the storage device 50;
the search module 602, configured to: when it is determined that the storage device 50 has failed, obtain, according to the address information, the to-be-written data corresponding to a first number, where the address information corresponding to the first number is identical to the received address information, and the first number precedes the current time slice number; and
the writing module 604, configured to add a second number to the to-be-written data and address information corresponding to the first number and write them into the cache.
For the specific implementation of each module in the program 832, refer to the corresponding module in the embodiment shown in FIG. 6; details are not repeated here.
Persons skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the devices and modules described above, reference may be made to the corresponding process descriptions in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the described apparatus embodiments are merely illustrative; the division into modules is merely a division by logical function, and there may be other divisions in actual implementation. For example, multiple modules or components may be combined or integrated into another device, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some communications interfaces; the indirect couplings or communication connections between apparatuses or modules may be electrical, mechanical, or in other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
Persons of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium; the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention, not to limit them. Although the present invention is described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements to some or all of the technical features thereof; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims
1. A data sending method, comprising:
receiving, by a first storage device, a first write data request sent by a host, the first write data request carrying to-be-written data and address information;
adding a first number to the to-be-written data and address information and writing them into a cache, wherein the first number is a current time slice number;
reading the to-be-written data and address information corresponding to the first number from the cache; modifying the current time slice number so as to identify information carried in subsequent write data requests; and sending the to-be-written data and address information to a second storage device.
2. The method according to claim 1, wherein the first number is used to identify a current replication task; and the method further comprises:
recording a second number, the second number being the number corresponding to the most recently completed replication task before the current replication task.
3. The method according to claim 2, further comprising:
reading, from the cache, the to-be-written data and address information corresponding to the numbers after the second number and before the first number; and
sending the to-be-written data and address information corresponding to the numbers after the second number and before the first number to the second storage device.
4. The method according to claim 1, further comprising:
recording the current time slice number, the current time slice number being used to generate the first number.
5. The method according to claim 1, wherein sending the to-be-written data and address information to the second storage device comprises:
generating a second write data request according to the to-be-written data and address information; and
sending the second write data request to the second storage device.
6. The method according to claim 5, wherein the address information comprises a logical block address (LBA); and
generating the second write data request according to the to-be-written data and address information comprises: obtaining an ID of a data volume of the second storage device; and
generating the second write data request, the second write data request comprising the to-be-written data, the LBA, and the ID of the data volume.
7. A data receiving method, comprising:
receiving, by a second storage device, address information sent by a first storage device;
when it is determined that the first storage device has failed, obtaining, by the second storage device according to the address information, to-be-written data corresponding to a first number, wherein the address information corresponding to the first number is identical to the received address information, and the first number precedes a current time slice number; and adding a second number to the to-be-written data and address information corresponding to the first number and writing them into a cache.
8. The method according to claim 7, further comprising:
recording the current time slice number, the current time slice number being used to generate the second number.
9. The method according to claim 7, further comprising:
receiving a read data request sent by a host, the read data request containing the received address information;
determining that the newest number corresponding to the received address information is the second number; and sending the to-be-written data corresponding to the second number to the host.
10. A storage device, comprising:
a receiving module, configured to receive a first write data request sent by a host, the first write data request carrying to-be-written data and address information;
a read/write module, configured to add a first number to the to-be-written data and address information and write them into a cache, wherein the first number is a current time slice number, and to read the to-be-written data and address information corresponding to the first number from the cache;
a current time slice number manager, configured to modify the current time slice number so as to identify information carried in subsequent write data requests; and
a sending module, configured to send the to-be-written data and address information to a second storage device.
11. The storage device according to claim 10, wherein the first number is used to identify a current replication task; and
the current time slice number manager is further configured to record a second number, the second number being the number corresponding to the most recently completed replication task before the current replication task.
12. The storage device according to claim 11, wherein:
the read/write module is further configured to read, from the cache, the to-be-written data and address information corresponding to the numbers after the second number and before the first number; and
the sending module is further configured to send the to-be-written data and address information corresponding to the numbers after the second number and before the first number to the second storage device.
13. The storage device according to claim 10, wherein:
the current time slice number manager is further configured to record the current time slice number, the current time slice number being used to generate the first number.
14. The storage device according to claim 10, wherein:
the sending module is specifically configured to generate a second write data request according to the to-be-written data and address information, and to send the second write data request to the second storage device.
15. A storage device, comprising:
a receiving module, configured to receive address information sent by a first storage device;
a search module, configured to: when it is determined that the first storage device has failed, obtain, according to the address information, to-be-written data corresponding to a first number, wherein the address information corresponding to the first number is identical to the received address information, and the first number precedes a current time slice number; and
a writing module, configured to add a second number to the to-be-written data and address information corresponding to the first number and write them into a cache.
16. The storage device according to claim 15, further comprising: a current time slice number manager, configured to record the current time slice number, the current time slice number being used to generate the second number.
17. The storage device according to claim 15, wherein: the receiving module is further configured to receive a read data request sent by a host, the read data request containing the received address information;
the search module is further configured to determine that the newest number corresponding to the received address information is the second number; and
the storage device further comprises a sending module, the sending module being configured to send the to-be-written data corresponding to the second number to the host.
18. A storage device, comprising: a processor, a memory, and a communication bus, wherein the processor and the memory communicate through the communication bus;
the memory is configured to store a program; and
the processor is configured to execute the program so as to:
receive a first write data request sent by a host, the first write data request carrying to-be-written data and address information; add a first number to the to-be-written data and address information and write them into a cache, wherein the first number is a current time slice number; read the to-be-written data and address information corresponding to the first number from the cache; modify the current time slice number so as to identify information carried in subsequent write data requests; and send the to-be-written data and address information to a second storage device.
19. The storage device according to claim 18, wherein the first number is used to identify a current replication task; and the processor is further configured to:
record a second number, the second number being the number corresponding to the most recently completed replication task before the current replication task.
20. The storage device according to claim 19, wherein the processor is further configured to: read, from the cache, the to-be-written data and address information corresponding to the numbers after the second number and before the first number; and send the to-be-written data and address information corresponding to the numbers after the second number and before the first number to the second storage device.
21. The storage device according to claim 18, wherein the processor is further configured to: record the current time slice number, the current time slice number being used to generate the first number.
22. The storage device according to claim 18, wherein the processor is specifically configured to: generate a second write data request according to the to-be-written data and address information; and send the second write data request to the second storage device.
23. The storage device according to claim 22, wherein the address information comprises a logical block address (LBA); and the processor is specifically configured to: obtain an ID of a data volume of the second storage device; and generate the second write data request, the second write data request comprising the to-be-written data, the LBA, and the ID of the data volume.
24. A storage device, comprising: a processor, a memory, and a communication bus, wherein the processor and the memory communicate through the communication bus;
the memory is configured to store a program; and
the processor is configured to execute the program so as to:
receive address information sent by a first storage device;
when it is determined that the first storage device has failed, obtain, according to the address information, to-be-written data corresponding to a first number, wherein the address information corresponding to the first number is identical to the received address information, and the first number precedes a current time slice number; and add a second number to the to-be-written data and address information corresponding to the first number and write them into a cache.
25. The storage device according to claim 24, wherein the processor is further configured to record the current time slice number, the current time slice number being used to generate the second number.
26. The storage device according to claim 24, wherein the processor is further configured to: receive a read data request sent by a host, the read data request containing the received address information; determine that the newest number corresponding to the received address information is the second number; and send the to-be-written data corresponding to the second number to the host.
Priority Applications (21)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA2868247A CA2868247C (en) | 2013-07-26 | 2013-07-26 | Data sending method, data receiving method, and storage device |
PCT/CN2013/080203 WO2015010327A1 (zh) | 2013-07-26 | 2013-07-26 | 数据发送方法、数据接收方法和存储设备 |
CN201380001270.8A CN103649901A (zh) | 2013-07-26 | 2013-07-26 | 数据发送方法、数据接收方法和存储设备 |
JP2015527787A JP6344798B2 (ja) | 2013-07-26 | 2013-11-15 | データ送信方法、データ受信方法、及びストレージデバイス |
DK16177686.9T DK3179359T3 (en) | 2013-07-26 | 2013-11-15 | PROCEDURE FOR SENDING DATA, PROCEDURE FOR RECEIVING DATA AND STORAGE UNIT |
KR1020147029051A KR101602312B1 (ko) | 2013-07-26 | 2013-11-15 | 데이터 송신 방법, 데이터 수신 방법, 및 저장 장치 |
EP16177686.9A EP3179359B1 (en) | 2013-07-26 | 2013-11-15 | Data sending method, data receiving method, and storage device |
HUE16177686A HUE037094T2 (hu) | 2013-07-26 | 2013-11-15 | Adatküldési eljárás, adatvételi eljárás és tárolóeszköz |
AU2013385792A AU2013385792B2 (en) | 2013-07-26 | 2013-11-15 | Data sending method, data receiving method, and storage device |
ES16177686.9T ES2666580T3 (es) | 2013-07-26 | 2013-11-15 | Método de envío de datos, método de recepción de datos y dispositivo de almacenamiento |
CN201710215599.4A CN107133132B (zh) | 2013-07-26 | 2013-11-15 | 数据发送方法、数据接收方法和存储设备 |
NO16177686A NO3179359T3 (zh) | 2013-07-26 | 2013-11-15 | |
ES13878530.8T ES2610784T3 (es) | 2013-07-26 | 2013-11-15 | Método de envío de datos, método de recepción de datos y dispositivo de almacenamiento |
EP13878530.8A EP2849048B1 (en) | 2013-07-26 | 2013-11-15 | Data sending method, data receiving method and storage device |
RU2014145359/08A RU2596585C2 (ru) | 2013-07-26 | 2013-11-15 | Способ отправки данных, способ приема данных и устройство хранения данных |
PCT/CN2013/087229 WO2015010394A1 (zh) | 2013-07-26 | 2013-11-15 | 数据发送方法、数据接收方法和存储设备 |
CN201380042349.5A CN104520802B (zh) | 2013-07-26 | 2013-11-15 | 数据发送方法、数据接收方法和存储设备 |
US14/582,556 US9311191B2 (en) | 2013-07-26 | 2014-12-24 | Method for a source storage device sending data to a backup storage device for storage, and storage device |
US15/064,890 US10108367B2 (en) | 2013-07-26 | 2016-03-09 | Method for a source storage device sending data to a backup storage device for storage, and storage device |
AU2016203273A AU2016203273A1 (en) | 2013-07-26 | 2016-05-19 | A method for a source storage device sending data to a backup storage device for storage, and storage device |
JP2017233306A JP2018041506A (ja) | 2013-07-26 | 2017-12-05 | データ送信方法、データ受信方法、及びストレージデバイス |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2013/080203 WO2015010327A1 (zh) | 2013-07-26 | 2013-07-26 | 数据发送方法、数据接收方法和存储设备 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015010327A1 true WO2015010327A1 (zh) | 2015-01-29 |
Family
ID=50253404
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2013/080203 WO2015010327A1 (zh) | 2013-07-26 | 2013-07-26 | 数据发送方法、数据接收方法和存储设备 |
PCT/CN2013/087229 WO2015010394A1 (zh) | 2013-07-26 | 2013-11-15 | 数据发送方法、数据接收方法和存储设备 |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2013/087229 WO2015010394A1 (zh) | 2013-07-26 | 2013-11-15 | 数据发送方法、数据接收方法和存储设备 |
Country Status (13)
Country | Link |
---|---|
US (2) | US9311191B2 (zh) |
EP (2) | EP2849048B1 (zh) |
JP (2) | JP6344798B2 (zh) |
KR (1) | KR101602312B1 (zh) |
CN (1) | CN103649901A (zh) |
AU (2) | AU2013385792B2 (zh) |
CA (1) | CA2868247C (zh) |
DK (1) | DK3179359T3 (zh) |
ES (2) | ES2666580T3 (zh) |
HU (1) | HUE037094T2 (zh) |
NO (1) | NO3179359T3 (zh) |
RU (1) | RU2596585C2 (zh) |
WO (2) | WO2015010327A1 (zh) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015010327A1 (zh) * | 2013-07-26 | 2015-01-29 | 华为技术有限公司 | 数据发送方法、数据接收方法和存储设备 |
CN103488431A (zh) * | 2013-09-10 | 2014-01-01 | 华为技术有限公司 | 一种写数据方法及存储设备 |
US9552248B2 (en) * | 2014-12-11 | 2017-01-24 | Pure Storage, Inc. | Cloud alert to replica |
US10545987B2 (en) * | 2014-12-19 | 2020-01-28 | Pure Storage, Inc. | Replication to the cloud |
CN106407040B (zh) | 2016-09-05 | 2019-05-24 | 华为技术有限公司 | 一种远程数据复制方法及系统 |
CN107844259B (zh) * | 2016-09-18 | 2020-06-16 | 华为技术有限公司 | 数据访问方法、路由装置和存储系统 |
CN106528338B (zh) * | 2016-10-28 | 2020-08-14 | 华为技术有限公司 | 一种远程数据复制方法、存储设备及存储系统 |
CN108076090B (zh) * | 2016-11-11 | 2021-05-18 | 华为技术有限公司 | 数据处理方法和存储管理系统 |
CN106598768B (zh) * | 2016-11-28 | 2020-02-14 | 华为技术有限公司 | 一种处理写请求的方法、装置和数据中心 |
CN106776369B (zh) * | 2016-12-12 | 2020-07-24 | 苏州浪潮智能科技有限公司 | 一种缓存镜像的方法及装置 |
CN108449277B (zh) * | 2016-12-12 | 2020-07-24 | 华为技术有限公司 | 一种报文发送方法及装置 |
WO2018107460A1 (zh) * | 2016-12-16 | 2018-06-21 | 华为技术有限公司 | 对象复制方法、装置及对象存储设备 |
CN106776147B (zh) * | 2016-12-29 | 2020-10-09 | 华为技术有限公司 | 一种差异数据备份方法和差异数据备份装置 |
CN107122261B (zh) * | 2017-04-18 | 2020-04-07 | 杭州宏杉科技股份有限公司 | 一种存储设备的数据读写方法及装置 |
CN107577421A (zh) * | 2017-07-31 | 2018-01-12 | 深圳市牛鼎丰科技有限公司 | 智能设备扩容方法、装置、存储介质和计算机设备 |
AU2018359378B2 (en) * | 2017-10-31 | 2021-09-09 | Ab Initio Technology Llc | Managing a computing cluster using durability level indicators |
CN108052294B (zh) * | 2017-12-26 | 2021-05-28 | 郑州云海信息技术有限公司 | 一种分布式存储系统的修改写方法和修改写系统 |
US11216370B2 (en) * | 2018-02-20 | 2022-01-04 | Medtronic, Inc. | Methods and devices that utilize hardware to move blocks of operating parameter data from memory to a register set |
US10642521B2 (en) * | 2018-05-11 | 2020-05-05 | International Business Machines Corporation | Scaling distributed queues in a distributed storage network |
CN109032527B (zh) * | 2018-07-27 | 2021-07-27 | 深圳华大北斗科技有限公司 | 数据处理方法、存储介质及计算机设备 |
US10942725B2 (en) * | 2018-07-30 | 2021-03-09 | Ford Global Technologies, Llc | Over the air Ecu update |
US11038961B2 (en) * | 2018-10-26 | 2021-06-15 | Western Digital Technologies, Inc. | Ethernet in data storage device |
CN109697035B (zh) * | 2018-12-24 | 2022-03-29 | 深圳市明微电子股份有限公司 | 级联设备的地址数据的写入方法、写入设备及存储介质 |
US11636040B2 (en) * | 2019-05-24 | 2023-04-25 | Texas Instruments Incorporated | Methods and apparatus for inflight data forwarding and invalidation of pending writes in store queue |
US11119862B2 (en) * | 2019-10-11 | 2021-09-14 | Seagate Technology Llc | Delta information volumes to enable chained replication of data by uploading snapshots of data to cloud |
EP4054140A4 (en) | 2019-11-22 | 2022-11-16 | Huawei Technologies Co., Ltd. | METHOD OF PROCESSING A NON-BUFFER DATA WRITE REQUEST, BUFFER AND NODE |
US11755230B2 (en) * | 2021-04-22 | 2023-09-12 | EMC IP Holding Company LLC | Asynchronous remote replication of snapshots |
US12008018B2 (en) * | 2021-04-22 | 2024-06-11 | EMC IP Holding Company LLC | Synchronous remote replication of snapshots |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6675177B1 (en) * | 2000-06-21 | 2004-01-06 | Teradactyl, Llc | Method and system for backing up digital data |
CN101126998A (zh) * | 2006-08-15 | 2008-02-20 | 英业达股份有限公司 | 群聚式计算机系统高速缓存数据备份处理方法及系统 |
CN101634968A (zh) * | 2008-01-17 | 2010-01-27 | 四川大学 | 一种用于备份系统的海量数据高速缓存器的构造方法 |
CN101901173A (zh) * | 2010-07-22 | 2010-12-01 | 上海骊畅信息科技有限公司 | 一种灾备系统及灾备方法 |
Family Cites Families (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR0128271B1 (ko) | 1994-02-22 | 1998-04-15 | 윌리암 티. 엘리스 | 재해회복을 위한 일관성 그룹 형성방법 및 레코드갱싱의 섀도잉 방법, 주시스템, 원격데이타 섀도잉 시스템과 비동기 원격데이타 복제 시스템 |
US5758359A (en) * | 1996-10-24 | 1998-05-26 | Digital Equipment Corporation | Method and apparatus for performing retroactive backups in a computer system |
US6081875A (en) | 1997-05-19 | 2000-06-27 | Emc Corporation | Apparatus and method for backup of a disk storage system |
JP2000137638A (ja) | 1998-10-29 | 2000-05-16 | Hitachi Ltd | 情報記憶システム |
US6526418B1 (en) * | 1999-12-16 | 2003-02-25 | Livevault Corporation | Systems and methods for backing up data files |
US6988165B2 (en) * | 2002-05-20 | 2006-01-17 | Pervasive Software, Inc. | System and method for intelligent write management of disk pages in cache checkpoint operations |
JP2004013367A (ja) * | 2002-06-05 | 2004-01-15 | Hitachi Ltd | データ記憶サブシステム |
US7761421B2 (en) | 2003-05-16 | 2010-07-20 | Hewlett-Packard Development Company, L.P. | Read, write, and recovery operations for replicated data |
JP2005309550A (ja) * | 2004-04-19 | 2005-11-04 | Hitachi Ltd | リモートコピー方法及びリモートコピーシステム |
JP4267421B2 (ja) * | 2003-10-24 | 2009-05-27 | 株式会社日立製作所 | リモートサイト及び/又はローカルサイトのストレージシステム及びリモートサイトストレージシステムのファイル参照方法 |
US7054883B2 (en) * | 2003-12-01 | 2006-05-30 | Emc Corporation | Virtual ordered writes for multiple storage devices |
DE602004026823D1 (de) * | 2004-02-12 | 2010-06-10 | Irdeto Access Bv | Verfahren und System zur externen Speicherung von Daten |
JP4455927B2 (ja) * | 2004-04-22 | 2010-04-21 | 株式会社日立製作所 | バックアップ処理方法及び実施装置並びに処理プログラム |
CN100359476C (zh) * | 2004-06-03 | 2008-01-02 | 华为技术有限公司 | 一种快照备份的方法 |
JP4519563B2 (ja) * | 2004-08-04 | 2010-08-04 | 株式会社日立製作所 | 記憶システム及びデータ処理システム |
JP4377790B2 (ja) * | 2004-09-30 | 2009-12-02 | 株式会社日立製作所 | リモートコピーシステムおよびリモートコピー方法 |
US7519851B2 (en) * | 2005-02-08 | 2009-04-14 | Hitachi, Ltd. | Apparatus for replicating volumes between heterogenous storage systems |
US8127174B1 (en) * | 2005-02-28 | 2012-02-28 | Symantec Operating Corporation | Method and apparatus for performing transparent in-memory checkpointing |
US8005795B2 (en) * | 2005-03-04 | 2011-08-23 | Emc Corporation | Techniques for recording file operations and consistency points for producing a consistent copy |
US7310716B2 (en) * | 2005-03-04 | 2007-12-18 | Emc Corporation | Techniques for producing a consistent copy of source data at a target location |
JP2007066154A (ja) * | 2005-09-01 | 2007-03-15 | Hitachi Ltd | データをコピーして複数の記憶装置に格納するストレージシステム |
CA2632935C (en) * | 2005-12-19 | 2014-02-04 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US7761663B2 (en) * | 2006-02-16 | 2010-07-20 | Hewlett-Packard Development Company, L.P. | Operating a replicated cache that includes receiving confirmation that a flush operation was initiated |
JP2007323507A (ja) * | 2006-06-02 | 2007-12-13 | Hitachi Ltd | 記憶システム並びにこれを用いたデータの処理方法 |
US8150805B1 (en) * | 2006-06-30 | 2012-04-03 | Symantec Operating Corporation | Consistency interval marker assisted in-band commands in distributed systems |
US7885923B1 (en) * | 2006-06-30 | 2011-02-08 | Symantec Operating Corporation | On demand consistency checkpoints for temporal volumes within consistency interval marker based replication |
US8726242B2 (en) * | 2006-07-27 | 2014-05-13 | Commvault Systems, Inc. | Systems and methods for continuous data replication |
GB0616257D0 (en) * | 2006-08-16 | 2006-09-27 | Ibm | Storage management system for preserving consistency of remote copy data |
US8145865B1 (en) * | 2006-09-29 | 2012-03-27 | Emc Corporation | Virtual ordered writes spillover mechanism |
KR20080033763A (ko) | 2006-10-13 | 2008-04-17 | 삼성전자주식회사 | 와이브로 네트워크에서의 상호인증을 통한 핸드오버 방법및 그 시스템 |
US8768890B2 (en) * | 2007-03-14 | 2014-07-01 | Microsoft Corporation | Delaying database writes for database consistency |
JP4964714B2 (ja) * | 2007-09-05 | 2012-07-04 | 株式会社日立製作所 | ストレージ装置及びデータの管理方法 |
US8073922B2 (en) * | 2007-07-27 | 2011-12-06 | Twinstrata, Inc | System and method for remote asynchronous data replication |
US8140772B1 (en) * | 2007-11-06 | 2012-03-20 | Board Of Governors For Higher Education, State Of Rhode Island And Providence Plantations | System and method for maintaining redundant storages coherent using sliding windows of eager execution transactions |
CN103645953B (zh) | 2008-08-08 | 2017-01-18 | 亚马逊技术有限公司 | 向执行中的程序提供对非本地块数据存储装置的可靠访问 |
US8250031B2 (en) | 2008-08-26 | 2012-08-21 | Hitachi, Ltd. | Low traffic failback remote copy |
US8767934B2 (en) | 2008-09-03 | 2014-07-01 | Avaya Inc. | Associating a topic with a telecommunications address |
US8762642B2 (en) * | 2009-01-30 | 2014-06-24 | Twinstrata Inc | System and method for secure and reliable multi-cloud data replication |
US8793288B2 (en) * | 2009-12-16 | 2014-07-29 | Sap Ag | Online access to database snapshots |
CN101751230B (zh) * | 2009-12-29 | 2011-11-09 | 成都市华为赛门铁克科技有限公司 | 标定i/o数据的时间戳的设备及方法 |
US9389892B2 (en) * | 2010-03-17 | 2016-07-12 | Zerto Ltd. | Multiple points in time disk images for disaster recovery |
JP5170169B2 (ja) * | 2010-06-18 | 2013-03-27 | Necシステムテクノロジー株式会社 | ディスクアレイ装置間のリモートコピー処理システム、処理方法、及び処理用プログラム |
US8443149B2 (en) * | 2010-09-01 | 2013-05-14 | International Business Machines Corporation | Evicting data from a cache via a batch file |
US8255637B2 (en) * | 2010-09-27 | 2012-08-28 | Infinidat Ltd. | Mass storage system and method of operating using consistency checkpoints and destaging |
US8667236B2 (en) * | 2010-09-29 | 2014-03-04 | Hewlett-Packard Development Company, L.P. | Host based write ordering for asynchronous replication |
US9792941B2 (en) * | 2011-03-23 | 2017-10-17 | Stormagic Limited | Method and system for data replication |
CN102306115B (zh) * | 2011-05-20 | 2014-01-08 | 华为数字技术(成都)有限公司 | 异步远程复制方法、系统及设备 |
CN103092526B (zh) | 2011-10-31 | 2016-03-30 | 国际商业机器公司 | 在存储设备间进行数据迁移的方法和装置 |
US8806281B1 (en) * | 2012-01-23 | 2014-08-12 | Symantec Corporation | Systems and methods for displaying backup-status information for computing resources |
JP6183876B2 (ja) * | 2012-03-30 | 2017-08-23 | 日本電気株式会社 | レプリケーション装置、レプリケーション方法及びプログラム |
US20130339569A1 (en) * | 2012-06-14 | 2013-12-19 | Infinidat Ltd. | Storage System and Method for Operating Thereof |
US10318495B2 (en) * | 2012-09-24 | 2019-06-11 | Sandisk Technologies Llc | Snapshots for a non-volatile device |
US9311014B2 (en) * | 2012-11-29 | 2016-04-12 | Infinidat Ltd. | Storage system and methods of mapping addresses of snapshot families |
WO2015010327A1 (zh) * | 2013-07-26 | 2015-01-29 | 华为技术有限公司 | 数据发送方法、数据接收方法和存储设备 |
-
2013
- 2013-07-26 WO PCT/CN2013/080203 patent/WO2015010327A1/zh active Application Filing
- 2013-07-26 CN CN201380001270.8A patent/CN103649901A/zh active Pending
- 2013-07-26 CA CA2868247A patent/CA2868247C/en active Active
- 2013-11-15 NO NO16177686A patent/NO3179359T3/no unknown
- 2013-11-15 AU AU2013385792A patent/AU2013385792B2/en active Active
- 2013-11-15 ES ES16177686.9T patent/ES2666580T3/es active Active
- 2013-11-15 RU RU2014145359/08A patent/RU2596585C2/ru active
- 2013-11-15 EP EP13878530.8A patent/EP2849048B1/en active Active
- 2013-11-15 ES ES13878530.8T patent/ES2610784T3/es active Active
- 2013-11-15 KR KR1020147029051A patent/KR101602312B1/ko active IP Right Grant
- 2013-11-15 DK DK16177686.9T patent/DK3179359T3/en active
- 2013-11-15 HU HUE16177686A patent/HUE037094T2/hu unknown
- 2013-11-15 WO PCT/CN2013/087229 patent/WO2015010394A1/zh active Application Filing
- 2013-11-15 EP EP16177686.9A patent/EP3179359B1/en active Active
- 2013-11-15 JP JP2015527787A patent/JP6344798B2/ja active Active
-
2014
- 2014-12-24 US US14/582,556 patent/US9311191B2/en active Active
-
2016
- 2016-03-09 US US15/064,890 patent/US10108367B2/en active Active
- 2016-05-19 AU AU2016203273A patent/AU2016203273A1/en not_active Abandoned
-
2017
- 2017-12-05 JP JP2017233306A patent/JP2018041506A/ja active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6675177B1 (en) * | 2000-06-21 | 2004-01-06 | Teradactyl, Llc | Method and system for backing up digital data |
CN101126998A (zh) * | 2006-08-15 | 2008-02-20 | 英业达股份有限公司 | 群聚式计算机系统高速缓存数据备份处理方法及系统 |
CN101634968A (zh) * | 2008-01-17 | 2010-01-27 | 四川大学 | 一种用于备份系统的海量数据高速缓存器的构造方法 |
CN101901173A (zh) * | 2010-07-22 | 2010-12-01 | 上海骊畅信息科技有限公司 | 一种灾备系统及灾备方法 |
Also Published As
Publication number | Publication date |
---|---|
JP2018041506A (ja) | 2018-03-15 |
DK3179359T3 (en) | 2018-06-14 |
RU2014145359A (ru) | 2016-05-27 |
JP6344798B2 (ja) | 2018-06-20 |
HUE037094T2 (hu) | 2018-08-28 |
US9311191B2 (en) | 2016-04-12 |
US10108367B2 (en) | 2018-10-23 |
AU2016203273A1 (en) | 2016-06-09 |
EP3179359A1 (en) | 2017-06-14 |
AU2013385792A1 (en) | 2015-02-12 |
EP2849048A1 (en) | 2015-03-18 |
CN103649901A (zh) | 2014-03-19 |
JP2015527670A (ja) | 2015-09-17 |
NO3179359T3 (zh) | 2018-08-04 |
WO2015010394A1 (zh) | 2015-01-29 |
EP2849048A4 (en) | 2015-05-27 |
EP2849048B1 (en) | 2016-10-19 |
US20150113317A1 (en) | 2015-04-23 |
ES2666580T3 (es) | 2018-05-07 |
ES2610784T3 (es) | 2017-05-03 |
CA2868247C (en) | 2017-04-04 |
KR101602312B1 (ko) | 2016-03-21 |
EP3179359B1 (en) | 2018-03-07 |
US20160188240A1 (en) | 2016-06-30 |
RU2596585C2 (ru) | 2016-09-10 |
AU2013385792B2 (en) | 2016-04-14 |
CA2868247A1 (en) | 2015-01-26 |
KR20150035507A (ko) | 2015-04-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015010327A1 (zh) | 数据发送方法、数据接收方法和存储设备 | |
US11734306B2 (en) | Data replication method and storage system | |
US10467246B2 (en) | Content-based replication of data in scale out system | |
WO2018040591A1 (zh) | 一种远程数据复制方法及系统 | |
US8473462B1 (en) | Change tracking for shared disks | |
WO2015054897A1 (zh) | 数据存储方法、数据存储装置和存储设备 | |
WO2015085529A1 (zh) | 数据复制方法、数据复制装置和存储设备 | |
CN107133132B (zh) | 数据发送方法、数据接收方法和存储设备 | |
WO2014079028A1 (zh) | 数据处理方法和存储设备 | |
WO2014190501A1 (zh) | 数据恢复方法、存储设备和存储系统 | |
WO2018076633A1 (zh) | 一种远程数据复制方法、存储设备及存储系统 | |
WO2019080370A1 (zh) | 一种数据读写方法、装置和存储服务器 | |
JP2007149085A (ja) | 接続された装置を構成するための初期設定コードの実行 | |
US10740189B2 (en) | Distributed storage system | |
WO2022033269A1 (zh) | 数据处理的方法、设备及系统 | |
WO2012081058A1 (en) | Storage subsystem and its logical unit processing method | |
JP2017208113A (ja) | データ格納方法、データストレージ装置、及びストレージデバイス | |
US10656867B2 (en) | Computer system, data management method, and data management program | |
US20210026780A1 (en) | Methods for using extended physical region page lists to improve performance for solid-state drives and devices thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 2868247 Country of ref document: CA |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13890146 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13890146 Country of ref document: EP Kind code of ref document: A1 |