EP1760590B1 - Storage controller system having a splitting command for paired volumes and method therefor - Google Patents


Info

Publication number
EP1760590B1
EP1760590B1 (application EP06024150A)
Authority
EP
European Patent Office
Prior art keywords
volumes
storage device
storage
device controller
pairs
Prior art date
Legal status
Expired - Lifetime
Application number
EP06024150A
Other languages
German (de)
French (fr)
Other versions
EP1760590A1 (en)
Inventor
Susumu Suzuki, c/o Hitachi Ltd. Intel. Prop. Grp.
Masanori Nagaya, c/o Hitachi Ltd. Intel. Prop. Grp.
Takao Sato, c/o Hitachi Ltd. Intel. Prop. Grp.
Current Assignee
Hitachi Ltd
Original Assignee
Hitachi Ltd
Application filed by Hitachi Ltd
Publication of EP1760590A1
Application granted
Publication of EP1760590B1

Classifications

    • G06F11/2064 — redundant persistent mass storage, mirroring while ensuring consistency
    • G06F11/2069 — management of state, configuration or failover
    • G06F11/2071 — mirroring using a plurality of controllers
    • G06F11/2082 — data synchronisation
    • G06F11/2087 — mirroring with a common controller
    • G06F3/0619 — improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F3/065 — replication mechanisms
    • G06F3/0671 — in-line storage system
    • G06F3/0683 — plurality of storage devices
    • Y10S707/99953 — recoverability
    • Y10S707/99955 — archiving or backup

Definitions

  • The present invention relates to a method for controlling a storage device controller, a storage device controller, and a storage system.
  • A copy function manages primary-volume data in duplicate by copying data from a primary volume to a secondary volume in real time.
  • The primary (master) volume, which is the copy source, and the secondary (sub) volume, which is the copy destination, are paired.
  • US 5,692,155 discloses a data storage system which atomically suspends multiple duplex pairs across either a single storage subsystem or multiple storage subsystems.
  • the duplex pairs are suspended such that the data on the secondary DASDs of the duplex pairs is maintained in a sequence consistent order.
  • a host processor in the data storage system running an application generates records and record updates to be written to the primary DASDs of the duplex pairs.
  • the storage controller directs copies of the records and record updates to the secondary DASDs of the duplex pairs. Sequence consistency is maintained on the secondary DASDs by quiescing the duplex pairs and then suspending the duplex pairs with change recording. Quiescing the duplex pairs allows any current write I/O in progress to complete to the primary DASD.
  • the storage controller then locks out any subsequent write I/O from the host processor by raising a long busy signal to such subsequent write requests.
  • US 6,301,643 discloses a system for maintaining consistency of data across storage devices.
  • a cut-off time value is provided to the system.
  • the system then obtains information on data writes to a first storage device, including information on time stamp values associated with the data writes indicating an order of the data writes to the first storage device.
  • At least one group of data writes having time stamp values earlier in time than the cut-off time value is then formed.
  • the system then transfers the data writes in the groups to a second storage device for storage therein.
  • the storage device and the storage device controller may be included in a disk array unit.
  • the information processing apparatus and the disk array unit are included in the storage system.
  • Storage volumes are storage resources provided in the disk array unit or storage device and they are divided into physical volumes and logical volumes.
  • a physical volume is a physical storage area provided in a disk drive of the disk array unit or storage device and a logical volume is a storage area allocated logically in a physical volume.
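The distinction above between a physical storage area and a logically allocated area within it can be modeled minimally; the class and method names below are illustrative, not taken from the patent.

```python
# Minimal illustration of the volume hierarchy: a physical volume is a
# physical storage area in a disk drive, and a logical volume is a storage
# area allocated logically within it. Names are illustrative only.

class PhysicalVolume:
    def __init__(self, blocks):
        self.blocks = [None] * blocks      # raw storage area in a disk drive

class LogicalVolume:
    def __init__(self, physical, start, length):
        self.physical = physical           # backing physical volume
        self.start, self.length = start, length

    def write(self, offset, value):
        assert offset < self.length        # stay within the logical area
        self.physical.blocks[self.start + offset] = value

    def read(self, offset):
        return self.physical.blocks[self.start + offset]

pv = PhysicalVolume(blocks=8)
lv = LogicalVolume(pv, start=4, length=4)  # logical area inside the physical one
lv.write(0, "data")
assert lv.read(0) == "data"
assert pv.blocks[4] == "data"              # mapped into the physical area
```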
  • The term "paired" denotes a state in which two storage volumes are brought into correspondence with each other as described above.
  • An information processing apparatus 100 is a computer provided with a CPU (Central Processing Unit), a memory, etc.
  • the CPU of the information processing apparatus 100 executes various types of programs to realize various functions of the apparatus 100.
  • the information processing apparatus 100 is used, for example, as a core computer in an automatic teller machine in a bank, a flight ticket reservation system, or the like.
  • the information processing apparatus 100 is connected to a storage device controller 200 to communicate with the controller 200.
  • the information processing apparatus 100 issues data input/output commands (requests) to the storage device controller 200 to read/write data from/to the storage devices 300.
  • the information processing apparatus 100 also sends/receives various commands to/from the storage device controller 200 to manage the storage devices 300. For example, the commands are used for managing copies of data stored in the storage volumes provided in the storage devices 300.
  • Fig.2 shows a block diagram of the information processing apparatus 100.
  • the information processing apparatus 100 is configured by a CPU 110, a memory 120, a port 130, a media reader 140, an input device 150, and an output device 160.
  • the CPU 110 controls the whole information processing apparatus 100 and executes the programs stored in the memory 120 to realize various functions of the apparatus 100.
  • the media reader 140 reads programs and data recorded on the recording medium 170.
  • The memory 120 stores the programs and data read by the reader 140. Consequently, the media reader 140 can be used to read a storage device management program 121 and an application program 122 recorded on the recording medium 170 and store them in the memory 120.
  • the recording medium 170 may be any of flexible disks, CD-ROM disks, semiconductor memories, etc.
  • the media reader 140 may also be built in the information processing apparatus 100 or provided as an external device.
  • the input device 150 is used by the operator to input data addressed to the information processing apparatus 100.
  • the input device 150 may be any of keyboards, mice, etc.
  • The output device 160 outputs information to the outside.
  • the output device 160 may be any of displays, printers, etc.
  • the port 130 is used to communicate with the storage device controller 200. In that connection, the storage device management program 121 and the application program 122 may be received from another information processing apparatus 100 through the port 130 and stored in the memory 120.
  • the storage device management program 121 manages copies of data stored in the storage volumes provided in the storage devices 300.
  • the storage device controller 200 manages copies of data with use of various copy management commands received from the information processing apparatus 100.
  • the application program 122 realizes the functions of the information processing apparatus 100.
  • the program 122 realizes functions of an automatic teller machine of a bank and functions of a flight ticket reservation system as described above.
  • the storage device controller 200 controls the storage devices 300 according to the commands received from the information processing apparatus 100. For example, when receiving a data input/output request from the information processing apparatus 100, the storage device controller 200 inputs/outputs data to/from a storage volume provided in a storage device 300.
  • the storage device controller 200 is configured by a channel adapter 210, a cache memory 220, a shared storage 230, a disk adapter 240, a management terminal (SVP: Service Processor) 260, and a connection unit 250.
  • the channel adapter 210 provided with a communication interface with the information processing apparatus 100 exchanges data input/output commands, etc. with the information processing apparatus 100.
  • Fig.3 shows a block diagram of the channel adapter 210.
  • the channel adapter 210 is configured by a CPU 211, a cache memory 212, a control memory 213, a port 215, and a bus 216.
  • the CPU 211 controls the whole channel adapter 210 by executing a control program 214 stored in the control memory 213.
  • the control program 214 stored in the control memory 213 thus enables data copies to be managed in this embodiment.
  • the cache memory 212 stores data, commands, etc. to be exchanged with the information processing apparatus 100 temporarily.
  • the port 215 is a communication interface used for the communication with the information processing apparatus 100 and other devices provided in the storage device controller 200.
  • the bus 216 enables the mutual connection among those devices.
  • the cache memory 220 stores data to be exchanged between the channel adapter 210 and the disk adapter 240 temporarily.
  • When the channel adapter 210 receives a write command as a data input/output command from the information processing apparatus 100, it writes the command in the shared storage 230 and writes the target data received from the information processing apparatus 100 in the cache memory 220.
  • the disk adapter 240 then reads the target data from the cache memory 220 according to the write command written in the shared storage and writes the read data in a storage device 300.
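The write path just described can be sketched as follows: the channel adapter queues the command in shared storage and stages the data in cache, and the disk adapter later destages it to the storage device. The class and method names are illustrative, not from the patent.

```python
# Hypothetical sketch of the write path through the controller. The channel
# adapter records the command in shared storage (230) and the data in cache
# (220); the disk adapter (240) drains the queue and destages to the
# storage devices (300). Names are illustrative only.

class StorageDeviceController:
    def __init__(self):
        self.shared_storage = []   # pending commands (shared storage 230)
        self.cache = {}            # volume -> data awaiting destage (cache 220)
        self.volumes = {}          # persistent contents of storage devices 300

    # Channel adapter 210: accept a write command from the host.
    def channel_write(self, volume, data):
        self.shared_storage.append(("write", volume))
        self.cache[volume] = data

    # Disk adapter 240: process queued commands, destaging cached data.
    def disk_adapter_run(self):
        while self.shared_storage:
            op, volume = self.shared_storage.pop(0)
            if op == "write":
                self.volumes[volume] = self.cache.pop(volume)

ctrl = StorageDeviceController()
ctrl.channel_write("vol1", b"payload")
assert "vol1" not in ctrl.volumes      # not yet destaged by the disk adapter
ctrl.disk_adapter_run()
assert ctrl.volumes["vol1"] == b"payload"
```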
  • the management terminal 260 is a kind of information processing apparatus used for the maintenance/management of the storage device controller 200 and the storage devices 300.
  • the management terminal 260 changes the control program 214 executed in the channel adapter 210 to another.
  • the management terminal 260 may be built in the storage device controller 200 or may be separated.
  • the management terminal 260 may also be dedicated to the maintenance/management of the storage device controller 200 and the storage devices 300 or may be configured as a general information processing apparatus for maintenance/management.
  • the configuration of the management terminal 260 is the same as that of the information processing apparatus 100 shown in Fig.2.
  • the management terminal 260 is configured by a CPU 110, a memory 120, a port 130, a recording media reader 140, an input device 150, and an output device 160. Consequently, the control program to be executed in the channel adapter 210 may be read from the recording medium 170 through the media reader 140 of the management terminal 260 or received from the information processing apparatus 100 connected thereto through the port 130 of the management terminal 260.
  • the disk adapter 240 controls the storage devices 300 according to the commands received from the channel adapter 210.
  • Each of the storage devices 300 is provided with a storage volume to be used by the information processing apparatus 100.
  • Storage volumes are storage resources provided in the storage devices 300 and divided into physical volumes that are physical storage areas provided in disk drives of the storage devices 300 and logical volumes that are storage areas allocated logically in those physical volumes.
  • the disk drives may be any of, for example, hard disk drives, flexible disk drives, semiconductor storage devices, etc.
  • the disk adapter 240 and each of the storage devices 300 may be connected to each other directly as shown in Fig.1 or through a network.
  • the storage devices 300 may also be united with the storage device controller 200 into one.
  • the shared storage 230 can be accessed from both of the channel adapter 210 and the disk adapter 240.
  • the shared storage is used to receive/send data input/output requests/commands and store management information, etc. of the storage device controller 200 and the storage devices 300.
  • the shared storage 230 stores a consistency group management table 231 and a pair management table 232 as shown in Fig.4.
  • the pair management table 232 is used to manage copies of data stored in the storage devices 300.
  • The table 232 has columns "pair", "primary volume", "sub volume", "pair state", and "consistency group".
  • the "pair" column holds pair names.
  • a pair means a combination of two storage volumes.
  • Fig.5 shows an example of paired storage volumes. In Fig.5, two pairs, that is, pairs A and B are denoted. One of paired volumes and the other of the paired volumes are managed as a primary volume and a secondary volume. In Fig.5, a primary volume is described as a master volume and a secondary volume is described as a sub volume. A plurality of secondary volumes can be combined with one primary volume.
  • the "primary” column describes primary volumes paired with secondary volumes while the "secondary” column describes secondary volumes paired with primary volumes.
  • the "pair state” column describes the state of each pair of volumes.
  • the "pair state” is classified into “paired”, “split”, and "re-sync”.
  • the "paired" denotes that data in a secondary volume is updated with the data in its corresponding primary volume written by the information processing apparatus 100.
  • the consistency of the data stored in a pair of primary and secondary volumes is assured with such correspondence set between those primary and secondary volumes.
  • The "split" state denotes that data in a secondary volume is no longer updated with the data written to its corresponding primary volume by the information processing apparatus 100. Concretely, while primary and secondary volumes are in the "split" state, the correspondence between those volumes is reset. Consequently, data consistency is not assured between those primary and secondary volumes. However, because data in a secondary volume in the "split" state is not updated, the secondary volumes can be backed up in the meantime; for example, data stored in secondary volumes can be saved to a magnetic tape or the like. This makes it possible to back up data while a job executed by the information processing apparatus 100 continues to use the data in the primary volumes.
  • the "re-sync” denotes a transition state of a pair of volumes, for example, from “split” to "paired". More concretely, the "re-sync” means a state in which data in a secondary volume is being updated with the data written in its corresponding primary volume while the pair is in the "split” state. When the data in the secondary volume is updated, the state of the pair is changed to "paired".
  • the operator instructs the information processing apparatus 100 in which the storage device management program 121 is executed through the input device 150.
  • a command from the operator is then sent to the channel adapter 210 of the storage device controller 200.
  • the channel adapter 210 executes the control program 214 to form a pair of storage volumes or change the state of the pair according to the command.
  • the channel adapter 210 controls the object storage volumes, for example, updating a secondary volume with a copy of data updated in its corresponding primary volume when those volumes are "paired".
  • The channel adapter 210 changes the states of pairs one by one, sequentially. This is because one primary volume can be paired with a plurality of secondary volumes as described above, and if the states of a plurality of pairs were changed simultaneously, the management of primary volumes would become complicated.
  • Forming a pair of volumes and changing the state of each pair of volumes can also be made automatically at a predetermined time or according to a command received from another information processing apparatus 100 connected through the port 130 independently of instructions from the operator.
  • the "consistency group” column describes the number of each consistency group (pair group) consisting of pairs of volumes.
  • A consistency group means a group of a plurality of storage volume pairs controlled so that the states of those pairs are changed to "split" together. Concretely, while the states of paired volumes are normally changed one by one sequentially as described above, the pairs in a consistency group are controlled so that their states are changed to "split" simultaneously (hereinafter, this processing is referred to as synchronizing the state changes to "split").
  • Suppose the information processing apparatus 100 writes data to a storage volume while the pair states of the paired volumes in a consistency group are being changed sequentially from "paired" to "split". If no consistency group were formed, data written to a primary volume after its pair state had been changed to "split" would not be written to the corresponding secondary volume, while data written to a primary volume whose state had not yet been changed to "split" would also be written to the secondary volume. If the primary volume belongs to a consistency group, however, the data is not written to the corresponding secondary volume regardless of the pair state of the primary volume (whether "split" or not), because the data is written to the primary volume after the pair splitting (resetting of the correspondence between primary and secondary volumes) has been started in the consistency group.
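The group-wide write rule above can be sketched in a few lines; the data structures are illustrative stand-ins for tables 231 and 232, not the patent's actual layout.

```python
# Sketch of the rule described above: once splitting of a consistency group
# has started, a write to any primary volume in that group is no longer
# mirrored to its secondary, even if that particular pair is still "paired".
# Field names are illustrative only.

def write_during_split(pair, group, volumes, data):
    volumes[pair["primary"]] = data
    # Mirror only if the pair is "paired" AND group splitting has not begun.
    if pair["state"] == "paired" and not group["split_started"]:
        volumes[pair["secondary"]] = data    # normal real-time copy

volumes = {"P1": None, "S1": None}
pair = {"primary": "P1", "secondary": "S1", "state": "paired"}
group = {"split_started": True}              # pair splitting already begun
write_during_split(pair, group, volumes, "new")
assert volumes["P1"] == "new"
assert volumes["S1"] is None                 # not mirrored despite "paired"
```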
  • Forming a consistency group from a plurality of pairs in this way is effective when data is to be stored across a plurality of storage volumes, for example, when write data is too large to be stored in one storage volume or when one file's data is controlled so as to be stored across a plurality of storage volumes.
  • The assured synchronism of the pair state changes to "split" in a consistency group is also effective for the writing/reading of data to/from secondary volumes requested by the information processing apparatus 100.
  • Data can be written to/read from any paired secondary volume after its pair state is changed to "split", while writing/reading data to/from any secondary volume whose pair state has not yet been changed to "split" is inhibited.
  • A batch split receiving flag (ID information) in the consistency group management table 231 is used to assure this synchronism of the pair state changes to "split" in a consistency group.
  • The control program 214 consists of code for realizing the various operations described below.
  • the channel adapter 210 receives a pair splitting request (split command) addressed to a consistency group from the information processing apparatus 100(S1000).
  • the channel adapter 210 then turns on the batch split receiving flag in the consistency group management table 231 stored in the shared storage 230 (S1001).
  • the channel adapter 210 begins to change the pair state of a not-split pair of volumes in the consistency group to the "split" (S1003).
  • the channel adapter 210 resets the correspondence between the primary volume and the secondary volume in the pair and stops updating of the data in the secondary volume with the data written in the primary volume.
  • The channel adapter 210 then changes the description for the pair in the "pair state" column in the pair management table 232 to "split" (S1004). These processes are repeated for each pair in the consistency group. When the states of all the pairs in the consistency group have been changed to "split" (S1005), the channel adapter 210 turns off the batch split receiving flag and exits the processing.
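The batch-split sequence of steps S1000 to S1005 can be sketched as a single loop; the dictionaries below are illustrative stand-ins for the consistency group management table 231 and the pair management table 232.

```python
# Sketch of the batch-split sequence: on receiving a split request for a
# consistency group, turn the batch split receiving flag ON (S1001), split
# each still-paired member one by one (S1003/S1004), then turn the flag
# OFF when all pairs are split (S1005). Structures are illustrative.

def batch_split(group_id, group_table, pair_table):
    group_table[group_id]["batch_split_flag"] = True       # S1001
    for pair in pair_table.values():                       # S1003, one by one
        if pair["group"] == group_id and pair["state"] == "paired":
            pair["state"] = "split"                        # S1004
    group_table[group_id]["batch_split_flag"] = False      # S1005

group_table = {0: {"batch_split_flag": False}}
pair_table = {
    "A": {"group": 0, "state": "paired"},
    "B": {"group": 0, "state": "paired"},
    "C": {"group": 1, "state": "paired"},   # other group: left untouched
}
batch_split(0, group_table, pair_table)
assert pair_table["A"]["state"] == "split"
assert pair_table["C"]["state"] == "paired"
assert group_table[0]["batch_split_flag"] is False
```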
  • The channel adapter 210 checks whether or not the request is addressed to a not-yet-split storage volume, that is, a "paired" storage volume for which the correspondence to its secondary volume has not been reset (S1006). If so, the adapter 210 changes the pair state of the volume to "split" (S1007), changes the description of the pair in the pair state column in the pair management table 232 to "split" (S1008), and executes the data read/write processing (input/output processing) (S1009).
  • the adapter 210 checks whether or not the request is addressed to a not-split pair of volumes (S1006) to execute the read/write processing (S1009). However, according to the claimed invention the adapter 210 suppresses the execution of the read/write processing requested from the information processing apparatus 100 while the adapter 210 splits paired volumes in a consistency group sequentially. In that connection, the adapter 210 can execute the read/write processing after the adapter 210 completes splitting of all the paired volumes in the consistency group and turns off the batch split flag.
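The two ways of serving host I/O that arrives during a batch split can be contrasted in code: split-on-access (steps S1006 to S1009) versus deferring the I/O until the batch split receiving flag is turned OFF, as in the claimed invention. Function names and structures are illustrative.

```python
# (a) Split-on-access: if the target pair is still "paired", split it
#     first, then serve the request (S1006-S1009).
# (b) Deferred I/O: while the batch split receiving flag is ON, suppress
#     the request; it is served after the flag is turned OFF.
# Names are illustrative stand-ins, not from the patent.

def handle_io_split_on_access(pair, request):
    if pair["state"] == "paired":      # S1006: addressed to a not-split pair?
        pair["state"] = "split"        # S1007/S1008: split it first
    return f"served {request}"         # S1009: execute the read/write

def handle_io_deferred(group, request, deferred):
    if group["batch_split_flag"]:      # group splitting still in progress
        deferred.append(request)       # suppress until the flag is OFF
        return None
    return f"served {request}"

pair = {"state": "paired"}
assert handle_io_split_on_access(pair, "R/W1") == "served R/W1"
assert pair["state"] == "split"        # the access forced the split

deferred = []
group = {"batch_split_flag": True}
assert handle_io_deferred(group, "R/W2", deferred) is None
assert deferred == ["R/W2"]            # held back until splitting completes
```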
  • Fig.7 shows a flowchart for those processings by the channel adapter 210 in detail.
  • The channel adapter 210 forms a consistency group for both pairs A and B according to a command received from the information processing apparatus 100 (S2000 to S2002).
  • The command is inputted, for example, by the operator through the input device 150 of the information processing apparatus 100.
  • The command inputted to the information processing apparatus 100 is sent to the channel adapter 210 by the storage device management program 121.
  • The "paircreate -g GRP0" shown in Fig.7 is such a command.
  • The channel adapter 210 forms a consistency group, then records predetermined data in the pair management table 232 and the consistency group management table 231 stored in the shared storage 230, respectively.
  • Fig.4 shows how the predetermined data is recorded in those tables 231 and 232.
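For illustration only, the records created in the two tables by "paircreate -g GRP0" might look like the following snapshot; the column names follow the tables of Fig.4 as described above, while the dictionary layout and the volume numbering are assumptions:

```python
# Assumed snapshot of the shared-storage tables right after "paircreate -g GRP0":
# both pairs belong to consistency group 0, both are "paired", and the batch
# split receiving flag for the group is OFF.
consistency_group_table = {
    "0": {"batch_split_receiving_flag": "OFF"},
}
pair_management_table = {
    "A": {"primary": "volume 1", "secondary": "volume 2",
          "pair_state": "paired", "consistency_group": "0"},
    "B": {"primary": "volume 3", "secondary": "volume 4",
          "pair_state": "paired", "consistency_group": "0"},
}
```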
  • The channel adapter 210, when receiving a read/write request (R/W1) for the storage volume 1 in the pair A from the information processing apparatus 100 (S2008), executes the read/write processing as usual (S2009). This is because "OFF" is described in the batch split receiving flag column for the consistency group 0 in the consistency group management table 231.
  • The information processing apparatus 100 instructs the channel adapter 210 to split the pairs in the consistency group 0 with a command (S2003).
  • The "pairsplit -g GRP0" shown in Fig.7 is an example of the command issued at that time. This command may also be inputted by the operator through the input device 150 of the information processing apparatus 100.
  • The channel adapter 210 then turns on the batch split receiving flag for the consistency group 0 in the consistency group management table 231 stored in the shared storage 230 (S2004) to start splitting of each pair sequentially (S2005, S2006).
  • Fig.4 shows the pair management table 232 in which the pair A is split. After completing splitting of all the target pairs, the channel adapter 210 turns OFF the batch split receiving flag and exits the processing (S2007).
  • If the channel adapter 210 receives a read/write request (R/W2) addressed to the storage volume 3 of the pair B from the information processing apparatus 100 (S2010) after receiving a split command addressed to the consistency group 0 but before turning ON the batch split receiving flag (S2004), the channel adapter 210 executes the read/write processing as usual (S2011). This is because "OFF" is still set in the batch split receiving flag column for the consistency group 0 in the consistency group management table 231.
  • If the channel adapter 210 receives a read/write request (R/W3) addressed to the storage volume 3 of the pair B from the information processing apparatus 100 (S2012) after turning ON the batch split receiving flag (S2004), the channel adapter 210 splits the pair B (S2013), then executes the read/write processing (S2014).
  • In other words, the channel adapter 210, when receiving a read/write request from the information processing apparatus 100, refers to the batch split receiving flag to check whether or not the read/write command was issued after resetting of the pair state of each pair in the consistency group had started.
  • If the channel adapter 210 receives the read/write request (R/W4) after completing splitting of the pair A in (S2005), the channel adapter 210 executes the read/write processing (S2016). This is because "split" is set for the pair A in the pair state column in the pair management table 232, so the channel adapter 210 knows that the pair A is already split.
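The Fig.7 walkthrough above can be replayed as a short sketch of the general (non-claimed) behavior; the table layout and the function name are assumptions, while the outcomes of the R/W requests follow the figure:

```python
# Replay of the Fig.7 scenario: a request to a still-"paired" pair after the
# batch split receiving flag is turned ON splits that pair on demand before
# the I/O runs. All names and the dictionary layout are assumptions.
group_flag = {"0": "OFF"}            # batch split receiving flag per group
pairs = {"A": "paired", "B": "paired"}

def read_write(pair):
    if group_flag["0"] == "ON" and pairs[pair] == "paired":
        pairs[pair] = "split"        # split the addressed pair first (S2013)
    return "executed"                # then run the I/O (S2009/S2014/S2016)

read_write("A")                      # R/W1 (S2008-S2009): flag OFF, I/O as usual
group_flag["0"] = "ON"               # split command for group 0 (S2003-S2004)
read_write("B")                      # R/W3 (S2012-S2014): pair B split on demand
pairs["A"] = "split"                 # sequential splitting reaches pair A (S2005)
read_write("A")                      # R/W4 (S2016): already split, I/O only
```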
  • Because the batch split receiving flag is provided as described above, the synchronism among the pair state changes of all the pairs in a consistency group to the "split" is assured.
  • Each split starting time is recorded in the consistency group management table 231 as shown in Fig.8.
  • In Fig.8, splitting of the pairs in the consistency group 0 is started at 12:00.
  • When the splitting is completed, the description in the split starting time column is changed to "-".
  • A split starting time is specified with a command received from the information processing apparatus 100. Splitting may also be specified with a command so as to be started immediately; no concrete time is specified in such an occasion. In that connection, the current time is recorded in the split starting time column.
  • The channel adapter 210, when receiving a read/write command from the information processing apparatus 100, compares the read/write command issued time recorded in the read/write command (request) with the time described in the split starting time column of the consistency group management table 231. If the command issued time is later, the channel adapter 210 executes the read/write processing after the end of the splitting.
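The comparison described above can be sketched as a small predicate; the function name and the use of datetime values are assumptions:

```python
from datetime import datetime

def must_wait(command_issued_at: datetime, split_starting_time) -> bool:
    # Sketch of the rule above: a read/write command issued after the time in
    # the split starting time column waits until the splitting has finished.
    # "-" in that column means no splitting is scheduled or in progress.
    if split_starting_time == "-":
        return False
    return command_issued_at > split_starting_time
```

With the 12:00 example of Fig.8, a command issued at 12:05 would wait, while one issued at 11:55 would be executed immediately.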
  • The following processings are executed by the CPU 211 of the channel adapter 210 with use of the control program 214 consisting of codes for realizing various operations.
  • At first, the channel adapter 210 receives a pair splitting request (split command) addressed to a consistency group from the information processing apparatus 100 (S3000). The channel adapter 210 then records the split starting time recorded in the split command in the split starting time column of the consistency group management table 231 stored in the shared storage 230 (S3001). After that, the channel adapter 210 compares the split starting time with the current time to check whether or not the split starting time has passed (S3003). If the check result is YES (passed), the channel adapter 210 begins the state change of a not-split pair in the consistency group to the "split" (S3004).
  • Concretely, the channel adapter 210 resets the correspondence between the primary and secondary volumes of the pair and suppresses updating of the data in the secondary volume with the data written in the primary volume.
  • The channel adapter 210 then changes the description for the pair in the pair state column in the pair management table 232 to "split" (S3005).
  • The above processings are repeated for all of the pairs in the consistency group.
  • When the states of all the pairs in the consistency group are changed to the "split", the channel adapter 210 changes the description in the split starting time column to "-" and exits the processing (S3007).
  • If the channel adapter 210 receives a read/write request from the information processing apparatus 100 during the above processing, the channel adapter 210 checks whether or not the request is addressed to a not-split pair, that is, a "paired" storage volume (for which the correspondence is not reset) (S3008). If the check result is YES (addressed), the channel adapter 210 compares the command issued time recorded in the command with the split starting time (S3010). If the command issued time is later, the channel adapter 210 changes the pair state to the "split" (S3011), then changes the description for the pair in the pair state column in the pair management table 232 to "split" (S3012). After that, the channel adapter 210 executes the read/write processing (input/output processing) (S3013).
  • On the other hand, if the check result in (S3008) is NO (not addressed), the channel adapter 210 reads/writes data from/in the storage volume (S3009).
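The general branch just described (S3008 to S3013) can be sketched as a single handler; the names are assumptions, and only the control flow follows the flowchart of Fig.9:

```python
def handle_read_write(pair, pair_state, cmd_issued_at, split_starting_time):
    # General (non-claimed) behavior: a request to a still-"paired" volume
    # whose command issued time is later than the split starting time splits
    # that pair first (S3011-S3012), then runs the I/O (S3013); any other
    # request runs the I/O directly (S3009).
    steps = []
    if pair_state[pair] == "paired" and cmd_issued_at > split_starting_time:
        pair_state[pair] = "split"
        steps.append("split")
    steps.append("read/write")
    return steps
```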
  • Generally, if the channel adapter 210 receives a read/write request from the information processing apparatus 100 while splitting pairs in a consistency group sequentially, the channel adapter 210 checks whether or not the request is addressed to a not-split storage volume (S3008) and executes the read/write processing (S3009, S3013). However, according to the invention, the channel adapter 210 suppresses execution of the read/write processing even when receiving a read/write request from the information processing apparatus 100 while splitting pairs in a consistency group sequentially as described above. The channel adapter 210 executes the read/write processing after completing splitting of all the pairs in the consistency group and changing the description in the split starting time column to "-".
  • Consistency groups are formed by storage devices 300 connected to the same storage device controller in this embodiment.
  • Consistency groups may also be formed by storage devices 300 connected to a plurality of storage device controllers respectively.
  • In that case, a consistency group may be formed over a plurality of storage device controllers 200 that come to communicate with one another to create the consistency group management table 231 and the pair management table 232.
  • The consistency group management table 231 and the pair management table 232 may be managed by one of the storage device controllers 200 and shared with the other storage device controllers 200, or each of those storage device controllers 200 may manage its own copy of the same tables.
  • Volumes controlled by a plurality of storage device controllers 200 may also be paired in this embodiment.
  • In that case, a pair might be formed over a plurality of storage device controllers 200, and those storage device controllers 200 come to communicate with one another to create the consistency group management table 231 and the pair management table 232.
  • The consistency group management table 231 and the pair management table 232 may be managed by one of the storage device controllers 200 and shared with the other storage device controllers 200, or each of those storage device controllers 200 may manage its own copy of the same tables.


Abstract

The invention provides a storage device controller (200) adapted to be connected to a plurality of storage devices (300) provided with a plurality of storage volumes for storing data, said storage device controller being connectable to an information processing apparatus (100) for sending write requests to the storage volumes, wherein the storage device controller (200) comprises means for forming pairs of storage volumes, each pair bringing one of the plurality of storage volumes into correspondence with another storage volume. The storage device controller (200) is adapted to start to split pairs of a set of paired volumes upon receiving a split command for the set of paired volumes, wherein the storage device controller (200) is adapted to execute a write request being received during splitting the pairs of the set of paired volumes and being addressed to a volume in the set of paired volumes, after a completion of splitting a pair related to the volume in the set of paired volumes.

Description

  • The present invention relates to a method for controlling a storage device controller, a storage device controller, and a storage system.
  • 2. Description of the Related Art
  • There is a well-known copy management function used in a storage system that includes an information processing apparatus and a disk array unit connected to each other for communications. The function manages primary volume data in duplicate by copying data from a primary volume to a secondary volume in real time. The primary (master) volume that is a source of copy and the secondary (sub) volume that is a destination of copy are paired.
  • In such a storage system, however, data often overflows from one primary volume into other primary volumes during communications between the information processing apparatus and the disk array unit. To back up the data in such a case, a plurality of pairs (of primary and secondary volumes) must be reset from the paired state. If data in a primary volume whose pair is already reset is updated during such sequential resetting, the data is not updated in its corresponding secondary volume, while data written in a primary volume whose pair state is not yet reset is still updated in its corresponding secondary volume. The consistency among the data in the secondary volumes is thus sometimes lost.
  • US 5,692,155 discloses a data storage system which atomically suspends multiple duplex pairs across either a single storage subsystem or multiple storage subsystems. The duplex pairs are suspended such that the data on the secondary DASDs of the duplex pairs is maintained in a sequence consistent order. A host processor in the data storage system running an application generates records and record updates to be written to the primary DASDs of the duplex pairs. The storage controller directs copies of the records and record updates to the secondary DASDs of the duplex pairs. Sequence consistency is maintained on the secondary DASDs by quiescing the duplex pairs and then suspending the duplex pairs with change recording. Quiescing the duplex pairs allows any current write I/O in progress to complete to the primary DASD. The storage controller then locks out any subsequent write I/O from the host processor by raising a long busy signal to such subsequent write requests.
  • US 6,301,643 discloses a system for maintaining consistency of data across storage devices. A cut-off time value is provided to the system. The system then obtains information on data writes to a first storage device, including information on time stamp values associated with the data writes indicating an order of the data writes to the first storage device. At least one group of data writes having time stamp values earlier in time than the cut-off time value is then formed. The system then transfers the data writes in the groups to a second storage device for storage therein.
  • SUMMARY OF THE INVENTION
  • It is an object of the present invention to provide a method for controlling a storage device controller, a storage device controller, and a storage system capable of managing copies of data while keeping the consistency among the data stored in a plurality of storage volumes.
  • These objects are accomplished by a storage system according to claim 1 or 6, a storage device controller according to claim 11 and a method for controlling a storage device controller according to claim 12.
  • The storage device and the storage device controller may be included in a disk array unit. The information processing apparatus and the disk array unit are included in the storage system.
  • Storage volumes are storage resources provided in the disk array unit or storage device and they are divided into physical volumes and logical volumes. A physical volume is a physical storage area provided in a disk drive of the disk array unit or storage device and a logical volume is a storage area allocated logically in a physical volume.
  • The "paired" means a state in which two storage volumes are brought into correspondence with each other as described above.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Preferred embodiments of the present invention will now be described in conjunction with the accompanying drawings, in which:
    • Fig.1 is an overall block diagram of a storage system in an embodiment of the present invention;
    • Fig.2 is a block diagram of an information processing apparatus in the embodiment of the present invention;
    • Fig.3 is a block diagram of a channel adapter provided in a storage device controller in the embodiment of the present invention;
    • Fig.4 is a table stored in a shared storage provided in the storage device controller in the embodiment of the present invention;
    • Fig.5 is a diagram of pairs of storage volumes in the embodiment of the present invention;
    • Fig.6 is a flowchart of the processings of the storage device controller for splitting a pair for explaining the present invention;
    • Fig.7 is a flowchart of the processings of the storage device controller for splitting a pair and inputting/outputting the split pair data items for explaining the present invention;
    • Fig.8 is a table stored in the shared storage provided in the storage device controller in the embodiment of the present invention; and
    • Fig.9 is a flowchart of the processings of the storage device controller for splitting a pair.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Hereunder, the preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
  • === Overall Configuration ===
  • At first, the storage system in an embodiment of the present invention will be described with reference to the block diagram shown in Fig.1.
  • An information processing apparatus 100 is a computer provided with a CPU (Central Processing Unit), a memory, etc. The CPU of the information processing apparatus 100 executes various types of programs to realize various functions of the apparatus 100. The information processing apparatus 100 is used, for example, as a core computer in an automatic teller machine in a bank, a flight ticket reservation system, or the like.
  • The information processing apparatus 100 is connected to a storage device controller 200 to communicate with the controller 200. The information processing apparatus 100 issues data input/output commands (requests) to the storage device controller 200 to read/write data from/to the storage devices 300. The information processing apparatus 100 also sends/receives various commands to/from the storage device controller 200 to manage the storage devices 300. For example, the commands are used for managing copies of data stored in the storage volumes provided in the storage devices 300.
  • Fig.2 shows a block diagram of the information processing apparatus 100.
  • The information processing apparatus 100 is configured by a CPU 110, a memory 120, a port 130, a media reader 140, an input device 150, and an output device 160.
  • The CPU 110 controls the whole information processing apparatus 100 and executes the programs stored in the memory 120 to realize various functions of the apparatus 100. The media reader 140 reads programs and data recorded on the recording medium 170. The memory 120 stores the programs and data read by the reader 140. Consequently, the media reader 140 can be used to read a storage device management program 121 and an application program 122 recorded on the medium 170 and store them in the memory 120. The recording medium 170 may be any of flexible disks, CD-ROM disks, semiconductor memories, etc. The media reader 140 may also be built in the information processing apparatus 100 or provided as an external device. The input device 150 is used by the operator to input data addressed to the information processing apparatus 100. The input device 150 may be any of keyboards, mice, etc. The output device 160 outputs information to the outside. The output device 160 may be any of displays, printers, etc. The port 130 is used to communicate with the storage device controller 200. In that connection, the storage device management program 121 and the application program 122 may be received from another information processing apparatus 100 through the port 130 and stored in the memory 120.
  • The storage device management program 121 manages copies of data stored in the storage volumes provided in the storage devices 300. The storage device controller 200 manages copies of data with use of various copy management commands received from the information processing apparatus 100.
  • The application program 122 realizes the functions of the information processing apparatus 100. For example, the program 122 realizes functions of an automatic teller machine of a bank and functions of a flight ticket reservation system as described above.
  • Next, the storage device controller 200 will be described with reference to Fig.1 again. The storage device controller 200 controls the storage devices 300 according to the commands received from the information processing apparatus 100. For example, when receiving a data input/output request from the information processing apparatus 100, the storage device controller 200 inputs/outputs data to/from a storage volume provided in a storage device 300.
  • The storage device controller 200 is configured by a channel adapter 210, a cache memory 220, a shared storage 230, a disk adapter 240, a management terminal (SVP: Service Processor) 260, and a connection unit 250.
  • The channel adapter 210 provided with a communication interface with the information processing apparatus 100 exchanges data input/output commands, etc. with the information processing apparatus 100.
  • Fig.3 shows a block diagram of the channel adapter 210.
  • The channel adapter 210 is configured by a CPU 211, a cache memory 212, a control memory 213, a port 215, and a bus 216.
  • The CPU 211 controls the whole channel adapter 210 by executing a control program 214 stored in the control memory 213. The control program 214 stored in the control memory 213 thus enables data copies to be managed in this embodiment. The cache memory 212 stores data, commands, etc. to be exchanged with the information processing apparatus 100 temporarily. The port 215 is a communication interface used for the communication with the information processing apparatus 100 and other devices provided in the storage device controller 200. The bus 216 enables the mutual connection among those devices.
  • Return to Fig.1 again. The cache memory 220 stores data to be exchanged between the channel adapter 210 and the disk adapter 240 temporarily. In other words, if the channel adapter 210 receives a write command as a data input/output command from the information processing apparatus 100, the channel adapter 210 writes the command in the shared storage 230 and the target data received from the information processing apparatus 100 in the cache memory 220 respectively. The disk adapter 240 then reads the target data from the cache memory 220 according to the write command written in the shared storage and writes the read data in a storage device 300.
  • The management terminal 260 is a kind of information processing apparatus used for the maintenance/management of the storage device controller 200 and the storage devices 300. For example, the management terminal 260 changes the control program 214 executed in the channel adapter 210 to another. The management terminal 260 may be built in the storage device controller 200 or may be separated. The management terminal 260 may also be dedicated to the maintenance/management of the storage device controller 200 and the storage devices 300 or may be configured as a general information processing apparatus for maintenance/management. The configuration of the management terminal 260 is the same as that of the information processing apparatus 100 shown in Fig.2. Concretely, the management terminal 260 is configured by a CPU 110, a memory 120, a port 130, a recording media reader 140, an input device 150, and an output device 160. Consequently, the control program to be executed in the channel adapter 210 may be read from the recording medium 170 through the media reader 140 of the management terminal 260 or received from the information processing apparatus 100 connected thereto through the port 130 of the management terminal 260.
  • The disk adapter 240 controls the storage devices 300 according to the commands received from the channel adapter 210.
  • Each of the storage devices 300 is provided with a storage volume to be used by the information processing apparatus 100. Storage volumes are storage resources provided in the storage devices 300 and divided into physical volumes that are physical storage areas provided in disk drives of the storage devices 300 and logical volumes that are storage areas allocated logically in those physical volumes. The disk drives may be any of, for example, hard disk drives, flexible disk drives, semiconductor storage devices, etc. The disk adapter 240 and each of the storage devices 300 may be connected to each other directly as shown in Fig.1 or through a network. The storage devices 300 may also be united with the storage device controller 200 into one.
  • The shared storage 230 can be accessed from both of the channel adapter 210 and the disk adapter 240. The shared storage 230 is used to receive/send data input/output requests/commands and to store management information, etc. of the storage device controller 200 and the storage devices 300. In this embodiment, the shared storage 230 stores a consistency group management table 231 and a pair management table 232 as shown in Fig.4.
  • === Pair Management Table ===
  • The pair management table 232 is used to manage copies of data stored in the storage devices 300. The table 232 has columns of "pair", "primary volume", "sub volume", "pair state", and "consistency group".
  • The "pair" column holds pair names. A pair means a combination of two storage volumes. Fig.5 shows an example of paired storage volumes. In Fig.5, two pairs, that is, pairs A and B are denoted. One of paired volumes and the other of the paired volumes are managed as a primary volume and a secondary volume. In Fig.5, a primary volume is described as a master volume and a secondary volume is described as a sub volume. A plurality of secondary volumes can be combined with one primary volume.
  • Return to the pair management table 232 shown in Fig.4. The "primary" column describes primary volumes paired with secondary volumes while the "secondary" column describes secondary volumes paired with primary volumes.
  • The "pair state" column describes the state of each pair of volumes. The "pair state" is classified into "paired", "split", and "re-sync".
  • The "paired" denotes that data in a secondary volume is updated with the data in its corresponding primary volume written by the information processing apparatus 100. The consistency of the data stored in a pair of primary and secondary volumes is assured with such correspondence set between those primary and secondary volumes.
  • The "split" denotes that data in a secondary volume is not updated with the data in its corresponding primary volume written by the information processing apparatus 100. Concretely, while primary and secondary volumes are in such a "split" state, the correspondence between those volumes is reset. Consequently, the data consistency is not assured between those primary and secondary volumes. However, because data in any secondary volume that is in the "split" state is not updated, the data in secondary volumes can be backed up during the while; for example, data stored in secondary volumes can be saved in a magnetic tape or the like. This makes it possible to back up data while the data in primary volumes is used continuously during the backup operation for a job that has been executed by the information processing apparatus 100.
  • The "re-sync" denotes a transition state of a pair of volumes, for example, from "split" to "paired". More concretely, the "re-sync" means a state in which data in a secondary volume is being updated with the data written in its corresponding primary volume while the pair is in the "split" state. When the data in the secondary volume is updated, the state of the pair is changed to "paired".
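The three pair states described above can be summarized as a small transition table; the event names here are illustrative assumptions standing in for the operator commands:

```python
# Pair state machine sketched from the descriptions above: "paired" copies
# writes to the secondary, "split" stops copying, and "re-sync" is the
# transition back from "split" to "paired".
TRANSITIONS = {
    ("paired", "pairsplit"): "split",       # correspondence reset, copying stops
    ("split", "pairresync"): "re-sync",     # secondary being updated again
    ("re-sync", "synchronized"): "paired",  # secondary caught up with primary
}

def next_state(state, event):
    # Unknown state/event combinations leave the pair state unchanged.
    return TRANSITIONS.get((state, event), state)
```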
  • To form a pair of storage volumes or to change the state of the pair from "paired"/"split" to "split"/"paired", the operator instructs the information processing apparatus 100 in which the storage device management program 121 is executed through the input device 150. A command from the operator is then sent to the channel adapter 210 of the storage device controller 200. After that, the channel adapter 210 executes the control program 214 to form a pair of storage volumes or change the state of the pair according to the command. According to the state of the formed pair of storage volumes, the channel adapter 210 controls the object storage volumes, for example, updating a secondary volume with a copy of data updated in its corresponding primary volume when those volumes are "paired".
  • As described above, the channel adapter 210 changes the states of pairs one by one sequentially. This is because one primary volume can be paired with a plurality of secondary volumes as described above, and if the states of a plurality of pairs are changed simultaneously, the management of primary volumes becomes complicated.
  • Forming a pair of volumes and changing the state of each pair of volumes can also be made automatically at a predetermined time or according to a command received from another information processing apparatus 100 connected through the port 130 independently of instructions from the operator.
  • === Consistency Group ===
  • The "consistency group" column describes the number of each consistency group (pair group) consisting of pairs of volumes. A consistency group means a group of a plurality of storage volume pairs to be controlled so that the states of those pairs are changed to the "split" together. Concretely, a plurality of pairs in a consistency group are controlled so that their states are changed to the "split" simultaneously (hereinafter, this processing will be referred to as the synchronism among the state changes to the "split") while the states of a plurality of paired volumes are changed one by one sequentially as described above.
  • For example, assume now that the information processing apparatus 100 writes data in a storage volume while the pair states of a plurality of paired volumes in a consistency group are changed sequentially from "paired" to "split". If no consistency group is formed and the data is written in a paired primary volume after the pair state is changed to the "split", the data is not written in its corresponding secondary volume. If the data is written in a paired primary volume of which state is not changed to the "split" yet, the data is also written in the secondary volume. If the paired primary volume belongs to a consistency group at that time, however, the data is not written in its corresponding secondary volume regardless of the pair state of the primary volume (whether it is in the "split" or not). This is because the data is written in the primary volume after pair splitting (resetting of the correspondence between primary and secondary volumes) is started in the consistency group.
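The copy rule explained above can be condensed into one predicate; the parameter names are assumptions:

```python
def copy_write_to_secondary(pair_state, in_group, group_split_started):
    # Whether a write to a primary volume is propagated to its secondary.
    # Once splitting of a consistency group has started, writes to any
    # primary volume in the group are no longer copied, regardless of that
    # individual pair's own state.
    if in_group and group_split_started:
        return False
    return pair_state == "paired"  # otherwise only "paired" pairs copy writes
```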
  • Forming a consistency group with a plurality of pairs in such a way is effective for a case in which data is to be stored in a plurality of storage volumes, for example, when write data is too large to be stored in one storage volume and when it is controlled so that one file data is stored in a plurality of storage volumes.
  • Such assured synchronism of the pair state changes of volumes to the "split" in a consistency group is also effective for writing/reading of data in/from secondary volumes requested from the information processing apparatus 100.
  • Concretely, if no consistency group is formed, data can be written/read in/from any paired secondary volume after the pair state is changed to the "split", while writing/reading data in/from any secondary volume of which pair state is not yet changed to the "split" is inhibited.
  • A batch split receiving flag (ID information) of the consistency group management table 231 is used to assure the synchronism of such pair state changes of volumes to the "split" in the above consistency group. Next, the processings that are generally possible for assuring such synchronism will be described with reference to the flowchart shown in Fig.6 for explaining the invention.
  • === Processing Flow ===
  • The following processings are executed by the CPU 211 provided in the channel adapter 210 with use of the control program 214 (program) consisting of codes for realizing various operations.
  • At first, the channel adapter 210 receives a pair splitting request (split command) addressed to a consistency group from the information processing apparatus 100 (S1000). The channel adapter 210 then turns on the batch split receiving flag in the consistency group management table 231 stored in the shared storage 230 (S1001). After that, the channel adapter 210 begins changing the pair state of a not-yet-split pair of volumes in the consistency group to "split" (S1003). Concretely, the channel adapter 210 resets the correspondence between the primary volume and the secondary volume of the pair and stops updating the data in the secondary volume with the data written in the primary volume. The channel adapter 210 then changes the description for the pair in the pair state column of the pair management table 232 to "split" (S1004). These processings are repeated for each pair in the consistency group. When the states of all the pairs in the consistency group have been changed to "split" (S1005), the channel adapter 210 turns off the batch split receiving flag and exits the processing.
  • If the channel adapter 210 receives a read/write request from the information processing apparatus 100 during the above processing, the adapter 210 checks whether or not the request is addressed to a not-yet-split storage volume, that is, a "paired" storage volume for which the correspondence to its secondary volume has not been reset (S1006). If the check result is YES (so addressed), the adapter 210 changes the pair state of the volume to "split" (S1007). The adapter 210 then changes the description for the pair in the pair state column of the pair management table 232 to "split" (S1008) and executes the data read/write processing (input/output processing) (S1009).
  • On the other hand, if the check result in (S1006) is NO (not so addressed), the command is addressed to a "split" volume, so the adapter 210 executes the read/write processing for the storage volume immediately (S1009).
  • Consequently, the synchronism of the pair state changes of "paired" volumes to the "split" in a consistency group is assured.
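The general Fig.6 flow can be sketched as follows. This is an illustrative model only: the class and method names (ConsistencyGroup, split_next_pair, handle_io) are assumptions rather than anything from the patent, and the explicit split loop stands in for processing that, in the actual channel adapter 210, runs concurrently with incoming I/O.

```python
from enum import Enum

class PairState(Enum):
    PAIRED = "paired"
    SPLIT = "split"

class ConsistencyGroup:
    """Models one group's entries of the pair management table 232 plus
    the batch split receiving flag of table 231 (illustrative only)."""

    def __init__(self, pair_names):
        self.pairs = {name: PairState.PAIRED for name in pair_names}
        self.batch_split_flag = False  # batch split receiving flag

    def begin_batch_split(self):
        # S1000-S1001: a split command for the group turns the flag on.
        self.batch_split_flag = True

    def split_next_pair(self):
        # S1003-S1005: split one still-paired pair; when none remain,
        # turn the flag off and report that the group split is complete.
        for name, state in self.pairs.items():
            if state is PairState.PAIRED:
                self.pairs[name] = PairState.SPLIT  # reset correspondence
                return True
        self.batch_split_flag = False
        return False

    def handle_io(self, pair_name):
        # S1006-S1009: while the flag is on, a request addressed to a
        # still-"paired" volume forces that pair to split first, so the
        # secondary volume is never updated after the group split began.
        if self.batch_split_flag and self.pairs[pair_name] is PairState.PAIRED:
            self.pairs[pair_name] = PairState.SPLIT
        return f"read/write executed on pair {pair_name}"
```

For example, with pairs A and B, a request addressed to B that arrives while the flag is on splits B at once, and the group split loop then finds nothing left to do for B.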
  • In the flowchart shown in Fig.6, if the channel adapter 210 receives a read/write request from the information processing apparatus 100 while splitting the paired volumes in a consistency group sequentially, the adapter 210 checks whether or not the request is addressed to a not-yet-split pair of volumes (S1006) before executing the read/write processing (S1009). According to the claimed invention, however, the adapter 210 suppresses execution of any read/write processing requested by the information processing apparatus 100 while it splits the paired volumes in a consistency group sequentially. In that case, the adapter 210 executes the read/write processing after it has completed the splitting of all the paired volumes in the consistency group and has turned off the batch split receiving flag.
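The suppression required by the claimed invention can be modelled with a simple request queue that is drained only once every pair in the group has been split; the names used here (SuppressingGroup, pending, split_next_pair) are hypothetical.

```python
from collections import deque

class SuppressingGroup:
    """Illustrative model of the claimed behaviour: while the batch
    split receiving flag is on, read/write requests to the group are
    suppressed rather than executed."""

    def __init__(self, pair_names):
        self.pairs = {name: "paired" for name in pair_names}
        self.batch_split_flag = False
        self.pending = deque()  # suppressed read/write requests
        self.executed = []      # requests actually carried out

    def handle_io(self, request):
        if self.batch_split_flag:
            self.pending.append(request)  # suppress until splitting completes
        else:
            self.executed.append(request)

    def begin_group_split(self):
        # Receipt of the group split command turns the flag on.
        self.batch_split_flag = True

    def split_next_pair(self):
        # Split one still-paired pair; once none remain, turn the flag
        # off and execute everything that was suppressed in the meantime.
        for name, state in self.pairs.items():
            if state == "paired":
                self.pairs[name] = "split"
                return
        self.batch_split_flag = False
        while self.pending:
            self.executed.append(self.pending.popleft())
```

A request arriving before the group split command is executed at once; a request arriving while the flag is on waits in `pending` and runs only after the final `split_next_pair` call completes the group split.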
  • Fig.7 shows a flowchart for those processings by the channel adapter 210 in detail.
  • At first, the channel adapter 210 forms a consistency group for both pairs A and B according to a command received from the information processing apparatus 100 (S2000 to S2002). The command is inputted, for example, by the operator through the input device 150 of the information processing apparatus 100. The command inputted to the information processing apparatus 100 is sent to the channel adapter 210 by the storage device management program 121. The "paircreate -g GRP0" shown in Fig.7 is such a command. Receiving the command, the channel adapter 210 forms a consistency group, then records predetermined data in the pair management table 232 and the consistency group management table 231 stored in the shared storage 230. Fig.4 shows how the predetermined data is recorded in those tables 231 and 232. Note, however, that although the state of the pair A is described as "split" in the pair state column of the pair management table 232 shown in Fig.4, the actual state of the pair A at that time is "paired". Similarly, although "ON" is described in the batch split receiving flag column for the consistency group 0 in the consistency group management table 231, the actual state at that time is "OFF".
  • The channel adapter 210, when receiving a read/write request (R/W1) for the storage volume 1 in the pair A from the information processing apparatus 100 (S2008), executes the read/write processing as usual (S2009). This is because "OFF" is described in the batch split receiving flag column for the consistency group 0 in the consistency group management table 231.
  • After that, the information processing apparatus 100 instructs the channel adapter 210, with a command, to split the pairs in the consistency group 0 (S2003). The "pairsplit -g GRP0" shown in Fig.7 is an example of the command issued at that time. This command may also be inputted by the operator through the input device 150 of the information processing apparatus 100.
  • The channel adapter 210 then turns ON the batch split receiving flag for the consistency group 0 in the consistency group management table 231 stored in the shared storage 230 (S2004) and starts splitting each pair sequentially (S2005, S2006). Fig.4 shows the pair management table 232 after the pair A has been split. Upon completing the splitting of all the target pairs, the channel adapter 210 turns OFF the batch split receiving flag and exits the processing (S2007).
  • If the channel adapter 210 receives a read/write request (R/W2) addressed to the storage volume 3 of the pair B from the information processing apparatus 100 (S2010) after receiving the split command addressed to the consistency group 0 but before turning ON the batch split receiving flag (S2004), the channel adapter 210 executes the read/write processing as usual (S2011). This is because "OFF" is still set in the batch split receiving flag column for the consistency group 0 in the consistency group management table 231.
  • However, if the channel adapter 210 receives a read/write request (R/W3) addressed to the storage volume 3 of the pair B from the information processing apparatus 100 (S2012) after turning ON the batch split receiving flag (S2004), the channel adapter 210 first splits the pair B (S2013), then executes the read/write processing (S2014).
  • As described above, when receiving a read/write request from the information processing apparatus 100, the channel adapter 210 refers to the batch split receiving flag to check whether the read/write command was issued after the resetting of the pair states in the consistency group had started.
  • If the channel adapter 210 receives the read/write request (R/W4) after completing the splitting of the pair A in (S2005), the channel adapter 210 executes the read/write processing (S2016). This is because "split" is set for the pair A in the pair state column of the pair management table 232, which tells the channel adapter 210 that the pair A is already split.
  • In that connection, no splitting processing is done for the pair B in (S2005), since the pair B was already split during the read/write processing in (S2013).
  • Because the batch split receiving flag is provided as described above, the synchronism of the pair state changes of all the pairs in a consistency group to "split" is assured.
  • === Consistency Group Management Table ===
  • Next, a description will be made for another embodiment of the present invention with respect to the management information in the consistency group management table 231.
  • In this embodiment, each split starting time is recorded in the consistency group management table 231 as shown in Fig.8. In the example shown in Fig.8, splitting of pairs in the consistency group 0 is started at 12:00. When splitting of all the pairs in the consistency group 0 is completed, the description in the split starting time column is changed to "-".
  • A split starting time is specified with a command received from the information processing apparatus 100. Splitting may also be specified so as to start immediately; in that case no concrete time is given in the command, and the current time is recorded in the split starting time column.
  • In this embodiment, the channel adapter 210, when receiving a read/write command from the information processing apparatus 100, compares the read/write command issued time recorded in the read/write command (request) with the time described in the split starting time column of the consistency group management table 231. If the command issued time is later, the channel adapter 210 executes the read/write processing after the end of the splitting.
  • In this way, it is possible to assure the synchronism among the state changes of the pairs in a consistency group to "split".
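The comparison can be reduced to a single predicate. In this sketch, times are zero-padded "HH:MM" strings (which compare chronologically) and "-" marks a group with no split in progress, as in Fig.8; the function name and representation are assumptions.

```python
def must_split_before_io(command_issued: str, split_starting: str) -> bool:
    """Return True when a read/write command was issued after the
    recorded split starting time, so the addressed pair must be split
    before the request is executed ("-" means no split is running)."""
    if split_starting == "-":
        return False
    # Zero-padded "HH:MM" strings compare chronologically.
    return command_issued > split_starting
```

With the Fig.8 example, a command issued at 12:05 against a group whose split started at 12:00 is deferred until the pair is split, while one issued at 11:55 is executed immediately.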
  • === Processing Flow ===
  • Next, how the above processings can be executed will be described in detail with reference to the flowchart of Fig.9, which serves to explain the invention.
  • The processings are executed by the CPU 211 of the channel adapter 210 using the control program 214, which consists of code for realizing the various operations.
  • At first, the channel adapter 210 receives a pair splitting request (split command) addressed to a consistency group from the information processing apparatus 100 (S3000). The channel adapter 210 then records the split starting time carried in the split command in the split starting time column of the consistency group management table 231 stored in the shared storage 230 (S3001). After that, the channel adapter 210 compares the split starting time with the current time to check whether or not the split starting time has passed (S3003). If the check result is YES (passed), the channel adapter 210 begins changing the state of a not-yet-split pair in the consistency group to "split" (S3004). Concretely, the channel adapter 210 resets the correspondence between the primary and secondary volumes of the pair and suppresses updating of the data in the secondary volume with the data written in the primary volume. The channel adapter 210 then changes the description for the pair in the pair state column of the pair management table 232 to "split" (S3005). The above processings are repeated for all of the pairs in the consistency group. When the states of all the pairs in the consistency group have been changed to "split" (S3006), the channel adapter 210 changes the description in the split starting time column to "-" and exits the processing (S3007).
  • If the channel adapter 210 receives a read/write request from the information processing apparatus 100 during the above processing, the channel adapter 210 checks whether or not the request is addressed to a not-yet-split pair, that is, a "paired" storage volume whose correspondence has not been reset (S3008). If the check result is YES (so addressed), the channel adapter 210 compares the command issued time recorded in the command with the split starting time (S3010). If the command issued time is later, the channel adapter 210 changes the pair state to "split" (S3011), then changes the description for the pair in the pair state column of the pair management table 232 to "split" (S3012). After that, the channel adapter 210 executes the read/write processing (input/output processing) (S3013).
  • On the other hand, if the read/write command is addressed to a split pair in (S3008), that is, a "split" storage volume, or if the command issued time recorded in the request is earlier than the split starting time, the channel adapter 210 reads/writes the data from/in the storage volume immediately (S3009).
  • In this way, it is possible to assure the synchronism among the state changes of the pairs in a consistency group to "split".
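The three branches of steps S3008 to S3013 can be summarised in one function; the state strings, "HH:MM" issue times, and action labels are illustrative choices, not notation from the patent.

```python
def process_io(pair_state: str, command_issued: str, split_starting: str):
    """Return, in order, the actions the channel adapter takes for one
    read/write request during a time-based group split."""
    actions = []
    if (pair_state == "paired"                     # S3008: not-yet-split pair
            and split_starting != "-"              # a split is in progress
            and command_issued > split_starting):  # S3010: issued later
        actions.append("split pair")               # S3011: reset correspondence
        actions.append("update pair table")        # S3012: record "split"
    actions.append("execute read/write")           # S3009 / S3013
    return actions
```

A request issued before the split starting time, or addressed to an already "split" volume, goes straight to the read/write processing.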
  • In the flowchart shown in Fig.9, if the channel adapter 210 receives a read/write request from the information processing apparatus 100 while splitting the pairs in a consistency group sequentially, the channel adapter 210 checks whether or not the request is addressed to a not-yet-split storage volume (S3008) and executes the read/write processing (S3009, S3013). According to the invention, however, the channel adapter 210 suppresses execution of the read/write processing even when receiving a read/write request from the information processing apparatus 100 while splitting the pairs in a consistency group sequentially, as described above. The channel adapter 210 executes the read/write processing after completing the splitting of all the pairs in the consistency group and changing the description in the split starting time column to "-".
  • In this embodiment, consistency groups are formed by storage devices 300 connected to the same storage device controller. However, the present invention is not limited to that embodiment; consistency groups may also be formed by storage devices 300 connected to a plurality of storage device controllers. In that case, a consistency group may be formed over a plurality of storage device controllers 200, which communicate with one another to create the consistency group management table 231 and the pair management table 232. The consistency group management table 231 and the pair management table 232 may be managed by one of the storage device controllers 200 and shared with the other storage device controllers 200, or each of the storage device controllers may manage its own copy of the same tables. Furthermore, volumes controlled by a plurality of storage device controllers 200 may be paired. In that case, a pair may be formed over a plurality of storage device controllers 200, and those storage device controllers 200 communicate with one another to create the consistency group management table 231 and the pair management table 232, which again may be managed by one of the storage device controllers 200 and shared with the other storage device controllers 200, or managed by each of the storage device controllers individually.
  • While the embodiments of the present invention have been described, the description is just for illustrative purposes, and it is to be understood that changes and variations may be made without departing from the scope of the following claims.

Claims (16)

  1. A storage system comprising:
    a storage device controller (200) coupled to an information processing apparatus (100); and
    a plurality of storage devices (300) coupled to the storage device controller (200);
    wherein the storage device controller (200) forms a plurality of volumes using the plurality of storage devices (300), and forms a plurality of pairs using the plurality of volumes,
    characterized in that the storage device controller (200) starts to split pairs of a set of paired volumes upon receiving a split command, created by the information processing apparatus (100), for the set of paired volumes,
    wherein, after receipt of the split command, the storage device controller (200) suppresses an execution of a write request, addressed to a volume in the set of paired volumes, sent from the information processing apparatus (100), until a completion of splitting the pairs of the set of paired volumes.
  2. A storage system according to claim 1, characterized in that the storage device controller (200) executes a write request, sent from the information processing apparatus (100), that is not addressed to a volume in the set of paired volumes during splitting pairs in the set of paired volumes.
  3. A storage system according to claim 1 or 2, characterized in that the storage device controller (200) executes the suppressed write request addressed to the volume in the set of paired volumes after the completion of splitting pairs in the set of paired volumes.
  4. A storage system according to at least one of the preceding claims, characterized in that the pairs of the set of paired volumes are formed by primary volumes to be addressed from the information processing apparatus (100) and secondary volumes to be updated with a copy data of a write data to be written to the primary volumes, respectively.
  5. A storage system according to at least one of the preceding claims, characterized in that the storage device controller (200) sequentially splits the pairs in the set of paired volumes.
  6. A storage system including a plurality of storage device controllers (200) and a plurality of storage devices (300) comprising:
    a first storage device controller (200) of the plurality of storage device controllers (200) coupled to an information processing apparatus (100) configuring a plurality of first volumes; and
    a second storage device controller (200) of the plurality of storage device controllers (200) configuring a plurality of second volumes;
    characterized in that the first and second storage device controllers (200) form a consistency group with the first volumes and the second volumes,
    wherein the first storage device controller (200) starts to split paired volumes in the consistency group upon receiving a group split command, created by the information processing apparatus (100), for the consistency group,
    wherein, after receipt of the group split command, the first storage device controller (200) suppresses an execution of a write request, addressed to a volume in the consistency group, sent from the information processing apparatus (100), until a completion of splitting paired volumes in the consistency group.
  7. A system according to claim 6, wherein the first storage device controller (200) executes a write request that is not addressed to a volume in the consistency group during splitting the paired volumes in the consistency group.
  8. A system according to at least one of the claims 6 to 7, wherein the first storage device controller (200) executes the suppressed write request addressed to the volume in the consistency group after the completion of splitting paired volumes in the consistency group.
  9. A system according to at least one of the claims 6 to 8, wherein the paired volumes in the consistency group are formed by primary volumes to be addressed from the information processing apparatus (100) and secondary volumes to be updated with a copy data of a write data to be written to the primary volumes respectively.
  10. A system according to at least one of the claims 6 to 9, wherein the first storage device controller (200) sequentially splits the paired volumes in the consistency group.
  11. A storage device controller (200) connected to a plurality of storage devices (300) provided with a plurality of storage volumes for storing data, said storage device controller being connectable to an information processing apparatus (100) for sending write requests to the storage volumes, wherein the storage device controller (200) comprises means for forming pairs of storage volumes for bringing one of the plurality of storage volumes into correspondence with another storage volume,
    characterized in that
    the storage device controller (200) starts to split pairs of a set of paired volumes upon receiving a split command for the set of paired volumes,
    wherein the storage device controller (200) suppresses an execution of a write request addressed to a volume in the set of paired volumes until a completion of splitting the pairs of the set of paired volumes.
  12. A method for controlling of at least one storage device controller (200) comprising the step of creating a plurality of pairs of volumes using a plurality of volumes;
    characterized by the steps of:
    forming a consistency group by selecting at least two pairs based upon a group create command from an information processing apparatus (100);
    splitting the at least two pairs in the consistency group after receiving a group split command for the consistency group, the command being created by the information processing apparatus (100); and
    suppressing, after receipt of the group split command, an execution of a write request addressed to a volume of the at least two pairs belonging to the consistency group until completion of splitting the at least two pairs.
  13. A method according to claim 12, further comprising the step of performing a write request addressed to a volume that is not in the consistency group before completion of splitting the at least two pairs in the consistency group.
  14. A method according to claim 12 or 13, characterized by the step of executing the suppressed write request addressed to the volume in the consistency group after the completion of splitting the at least two pairs in the consistency group.
  15. A method according to at least one of the claims 12 to 14, wherein the at least two pairs in the consistency group are formed by primary volumes to be addressed from the information processing apparatus (100) and secondary volumes to be updated with a copy data of a write data to be written to the primary volumes, respectively.
  16. A method according to at least one of the claims 12 to 15, wherein the at least two pairs of volumes in the consistency group are sequentially split.
EP06024150A 2002-12-18 2003-10-01 Storage controller system having a splitting command for paired volumes and method therefor Expired - Lifetime EP1760590B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2002366374A JP4704660B2 (en) 2002-12-18 2002-12-18 Storage device control device control method, storage device control device, and program
EP03022254A EP1431876B1 (en) 2002-12-18 2003-10-01 A method for maintaining coherency between mirrored storage devices

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
EP03022254A Division EP1431876B1 (en) 2002-12-18 2003-10-01 A method for maintaining coherency between mirrored storage devices

Publications (2)

Publication Number Publication Date
EP1760590A1 EP1760590A1 (en) 2007-03-07
EP1760590B1 true EP1760590B1 (en) 2007-12-26

Family

ID=32376261

Family Applications (3)

Application Number Title Priority Date Filing Date
EP03022254A Expired - Lifetime EP1431876B1 (en) 2002-12-18 2003-10-01 A method for maintaining coherency between mirrored storage devices
EP06024150A Expired - Lifetime EP1760590B1 (en) 2002-12-18 2003-10-01 Storage controller system having a splitting command for paired volumes and method therefor
EP08009007A Expired - Lifetime EP1956487B1 (en) 2002-12-18 2003-10-01 A method for maintaining coherency between mirrored storage devices

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP03022254A Expired - Lifetime EP1431876B1 (en) 2002-12-18 2003-10-01 A method for maintaining coherency between mirrored storage devices

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP08009007A Expired - Lifetime EP1956487B1 (en) 2002-12-18 2003-10-01 A method for maintaining coherency between mirrored storage devices

Country Status (5)

Country Link
US (5) US7093087B2 (en)
EP (3) EP1431876B1 (en)
JP (1) JP4704660B2 (en)
AT (3) ATE443290T1 (en)
DE (3) DE60318337T2 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6754682B1 (en) * 2000-07-10 2004-06-22 Emc Corporation Method and apparatus for enabling consistent ancillary disk array storage device operations with respect to a main application
JP4136615B2 (en) 2002-11-14 2008-08-20 株式会社日立製作所 Database system and database access method
JP4255699B2 (en) * 2003-01-20 2009-04-15 株式会社日立製作所 Storage device control apparatus control method and storage device control apparatus
US7146462B2 (en) * 2003-05-20 2006-12-05 Hitachi, Ltd. Storage management method
JP2005301880A (en) 2004-04-15 2005-10-27 Hitachi Ltd Data input/output processing method in computer system, storage device, host computer, and computer system
JP4575059B2 (en) * 2004-07-21 2010-11-04 株式会社日立製作所 Storage device
JP4596889B2 (en) 2004-11-08 2010-12-15 株式会社日立製作所 Storage system management method
US7711989B2 (en) * 2005-04-01 2010-05-04 Dot Hill Systems Corporation Storage system with automatic redundant code component failure detection, notification, and repair
US7523350B2 (en) * 2005-04-01 2009-04-21 Dot Hill Systems Corporation Timer-based apparatus and method for fault-tolerant booting of a storage controller
JP4728031B2 (en) * 2005-04-15 2011-07-20 株式会社日立製作所 System that performs remote copy pair migration
JP5207637B2 (en) * 2007-02-23 2013-06-12 株式会社日立製作所 Backup control method for acquiring multiple backups in one or more secondary storage systems
JP2008269374A (en) * 2007-04-23 2008-11-06 Hitachi Ltd Storage system and control method
EP2300921B1 (en) 2008-10-30 2011-11-30 International Business Machines Corporation Flashcopy handling
JP2015076052A (en) * 2013-10-11 2015-04-20 富士通株式会社 Data writing device, data writing program and data writing method
US10152391B2 (en) * 2014-02-28 2018-12-11 Ncr Corporation Self-service terminal (SST) backups and rollbacks
CN115291812B (en) * 2022-09-30 2023-01-13 北京紫光青藤微系统有限公司 Data storage method and device of communication chip

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0484215A (en) 1990-07-26 1992-03-17 Hitachi Ltd Data dual writing method for disk controller
US5544347A (en) 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
JP3016971B2 (en) 1992-09-29 2000-03-06 日本電気株式会社 File system
JP3422370B2 (en) 1992-12-14 2003-06-30 株式会社日立製作所 Disk cache controller
US5692155A (en) * 1995-04-19 1997-11-25 International Business Machines Corporation Method and apparatus for suspending multiple duplex pairs during back up processing to insure storage devices remain synchronized in a sequence consistent order
US6185601B1 (en) 1996-08-02 2001-02-06 Hewlett-Packard Company Dynamic load balancing of a network of client and server computers
US6199074B1 (en) 1997-10-09 2001-03-06 International Business Machines Corporation Database backup system ensuring consistency between primary and mirrored backup database copies despite backup interruption
JPH11163655A (en) * 1997-12-01 1999-06-18 Murata Mfg Co Ltd Surface acoustic wave device and its production
EP0949788B1 (en) * 1998-04-10 2006-03-22 Sun Microsystems, Inc. Network access authentication system
JP3667084B2 (en) 1998-05-14 2005-07-06 株式会社日立製作所 Data multiplexing control method
US6308284B1 (en) 1998-08-28 2001-10-23 Emc Corporation Method and apparatus for maintaining data coherency
US6301643B1 (en) * 1998-09-03 2001-10-09 International Business Machines Corporation Multi-environment data consistency
US6308264B1 (en) * 1998-09-30 2001-10-23 Phoenix Technologies Ltd. Dual use master boot record
JP2000137638A (en) * 1998-10-29 2000-05-16 Hitachi Ltd Information storage system
US6643667B1 (en) * 1999-03-19 2003-11-04 Hitachi, Ltd. System and method for replicating data
US6370626B1 (en) 1999-04-30 2002-04-09 Emc Corporation Method and apparatus for independent and simultaneous access to a common data set
DE60043873D1 (en) * 1999-06-01 2010-04-08 Hitachi Ltd Method for data backup
JP3726559B2 (en) 1999-06-01 2005-12-14 株式会社日立製作所 Direct backup method and storage system
US6539462B1 (en) 1999-07-12 2003-03-25 Hitachi Data Systems Corporation Remote data copy using a prospective suspend command
JP3614328B2 (en) 1999-09-28 2005-01-26 三菱電機株式会社 Mirror disk controller
US6401178B1 (en) * 1999-12-23 2002-06-04 Emc Corporatiion Data processing method and apparatus for enabling independent access to replicated data
US6651075B1 (en) * 2000-02-16 2003-11-18 Microsoft Corporation Support for multiple temporal snapshots of same volume
US6708227B1 (en) * 2000-04-24 2004-03-16 Microsoft Corporation Method and system for providing common coordination and administration of multiple snapshot providers
JP2001318833A (en) 2000-05-09 2001-11-16 Hitachi Ltd Storage device sub-system having volume copying function and computer system using the same
JP2002007304A (en) 2000-06-23 2002-01-11 Hitachi Ltd Computer system using storage area network and data handling method therefor
US6754682B1 (en) 2000-07-10 2004-06-22 Emc Corporation Method and apparatus for enabling consistent ancillary disk array storage device operations with respect to a main application
JP2002189570A (en) * 2000-12-20 2002-07-05 Hitachi Ltd Duplex method for storage system, and storage system
US6799258B1 (en) 2001-01-10 2004-09-28 Datacore Software Corporation Methods and apparatus for point-in-time volumes
US7275100B2 (en) 2001-01-12 2007-09-25 Hitachi, Ltd. Failure notification method and system using remote mirroring for clustering systems
US6708285B2 (en) 2001-03-15 2004-03-16 Hewlett-Packard Development Company, L.P. Redundant controller data storage system having system and method for handling controller resets
WO2002094358A1 (en) * 2001-05-23 2002-11-28 Resmed Ltd. Ventilator patient synchronization
US6697881B2 (en) 2001-05-29 2004-02-24 Hewlett-Packard Development Company, L.P. Method and system for efficient format, read, write, and initial copy processing involving sparse logical units
US20030014523A1 (en) 2001-07-13 2003-01-16 John Teloh Storage network data replicator
US6721851B2 (en) 2001-08-07 2004-04-13 Veritas Operating Corporation System and method for preventing sector slipping in a storage area network
US7433948B2 (en) 2002-01-23 2008-10-07 Cisco Technology, Inc. Methods and apparatus for implementing virtualization of storage within a storage area network
US6826666B2 (en) * 2002-02-07 2004-11-30 Microsoft Corporation Method and system for transporting data content on a storage area network
US6862668B2 (en) 2002-02-25 2005-03-01 International Business Machines Corporation Method and apparatus for using cache coherency locking to facilitate on-line volume expansion in a multi-controller storage system
US7263593B2 (en) 2002-11-25 2007-08-28 Hitachi, Ltd. Virtualization controller and data transfer control method
US7533167B2 (en) * 2003-06-13 2009-05-12 Ricoh Company, Ltd. Method for efficiently extracting status information related to a device coupled to a network in a multi-protocol remote monitoring system
US7133986B2 (en) * 2003-09-29 2006-11-07 International Business Machines Corporation Method, system, and program for forming a consistency group

Also Published As

Publication number Publication date
EP1956487A1 (en) 2008-08-13
EP1956487B1 (en) 2009-09-16
US20050251636A1 (en) 2005-11-10
EP1431876A3 (en) 2007-02-28
ATE443290T1 (en) 2009-10-15
JP4704660B2 (en) 2011-06-15
EP1760590A1 (en) 2007-03-07
US7962712B2 (en) 2011-06-14
US7334097B2 (en) 2008-02-19
JP2004199336A (en) 2004-07-15
EP1431876A2 (en) 2004-06-23
US20060168412A1 (en) 2006-07-27
US20080288733A1 (en) 2008-11-20
ATE400021T1 (en) 2008-07-15
DE60329341D1 (en) 2009-10-29
US7418563B2 (en) 2008-08-26
US20040133752A1 (en) 2004-07-08
US7089386B2 (en) 2006-08-08
DE60318337T2 (en) 2008-06-05
US20070028066A1 (en) 2007-02-01
DE60321882D1 (en) 2008-08-14
ATE382165T1 (en) 2008-01-15
EP1431876B1 (en) 2008-07-02
US7093087B2 (en) 2006-08-15
DE60318337D1 (en) 2008-02-07

Similar Documents

Publication Publication Date Title
US7418563B2 (en) Method for controlling storage device controller, storage device controller, and program
EP1538528B1 (en) Storage system and replication creation method thereof
US7774542B2 (en) System and method for adaptive operation of storage capacities of RAID systems
EP0789877B1 (en) System and method for on-line, real-time, data migration
US5835954A (en) Target DASD controlled data migration move
US7565503B2 (en) Method and apparatus implementing virtualization for data migration with volume mapping based on configuration information and with efficient use of old assets
US6272571B1 (en) System for improving the performance of a disk storage device by reconfiguring a logical volume of data in response to the type of operations being performed
US7752390B2 (en) Disk array apparatus and control method for disk array apparatus
US5845295A (en) System for providing instantaneous access to a snapshot of data stored on a storage medium for offline analysis
US6202124B1 (en) Data storage system with outboard physical data transfer operation utilizing data path distinct from host
US20080275926A1 (en) Storage system and method of copying data
US20050283564A1 (en) Method and apparatus for data set migration
EP1434125A2 (en) Raid apparatus and logical device expansion method thereof
JP3610266B2 (en) Method for writing data to log structured target storage
US7685129B1 (en) Dynamic data set migration
US7318168B2 (en) Bit map write logging with write order preservation in support of asynchronous update of secondary storage
JPH0667811A (en) Multiplexed disk control device
JP4620134B2 (en) Storage device control apparatus control method and storage device control apparatus
JPH02280222A (en) Electronic computer system
JPH05346879A (en) Data filing device

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20061121

AC Divisional application: reference to earlier application

Ref document number: 1431876

Country of ref document: EP

Kind code of ref document: P

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK

17Q First examination report despatched

Effective date: 20070215

RTI1 Title (correction)

Free format text: STORAGE CONTROLLER SYSTEM HAVING A SPLITTING COMMAND FOR PAIRED VOLUMES AND METHOD THEREFOR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIN1 Information on inventor provided before grant (corrected)

Inventor name: SUZUKI, SUSUMU

Inventor name: SATO, TAKAO

Inventor name: NAGAYA, MASANORI

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

AKX Designation fees paid

Designated state(s): BE DE FR GB IE LU NL

RBV Designated contracting states (corrected)

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AC Divisional application: reference to earlier application

Ref document number: 1431876

Country of ref document: EP

Kind code of ref document: P

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REF Corresponds to:

Ref document number: 60318337

Country of ref document: DE

Date of ref document: 20080207

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080326

Ref country code: LI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071226

Ref country code: CH

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071226

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071226

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071226

ET Fr: translation filed
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071226

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080406

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071226

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071226

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080526

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071226

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: LU

Payment date: 20080924

Year of fee payment: 6

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20081031

Year of fee payment: 6

26N No opposition filed

Effective date: 20080929

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080327

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: BE

Payment date: 20081208

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071226

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080326

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20081021

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071226

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IE

Payment date: 20080731

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071226

BERE Be: lapsed

Owner name: HITACHI, LTD.

Effective date: 20091031

REG Reference to a national code

Ref country code: NL

Ref legal event code: V1

Effective date: 20100501

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20100630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091102

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100501

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080627

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071226

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091001

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091001

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20150930

Year of fee payment: 13

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20150922

Year of fee payment: 13

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 60318337

Country of ref document: DE

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20161001

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20161001

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170503