WO2015162684A1 - Data migration method for a storage system - Google Patents
Data migration method for a storage system
- Publication number
- WO2015162684A1 (PCT application PCT/JP2014/061245)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- migration
- volume
- migration destination
- destination
- storage
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0647—Migration mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0617—Improving the reliability of storage systems in relation to availability
Definitions
- the present invention relates to a technology for transferring a volume between storage devices.
- the storage device is replaced with a new generation device after a predetermined number of years.
- Patent Document 1 discloses volume migration from a migration source storage A to a migration destination storage B.
- Patent Literature 1 discloses that I/O requests can be accepted during migration of a volume pair, in which a volume is mirrored within a storage device or between storage devices, from a migration source device to a migration destination device.
- Volume pairs are operated as necessary. For example, when a volume pair is used for backup purposes, mirroring from the copy source volume (primary volume) to the copy destination volume (secondary volume) is suspended, the data of the secondary volume whose mirroring has been stopped is backed up to the backup device, and mirroring is resumed when the backup is completed.
- The volume pair can be operated by issuing a command specifying identifiers that uniquely identify the volumes, such as the identification number (for example, serial number) of the storage device to which the volume pair belongs, together with the volume numbers.
- When a volume is migrated to another storage device, the volume number, as well as the storage device serial number, may change. In that case, all information such as the volume numbers described in, for example, a backup script file must be rewritten.
- In a large-scale computer system that manages a large number (for example, several hundred or more) of volume pairs, the amount of changes is enormous and changing the script files takes a long time. As a result, the downtime of the computer system is prolonged.
- In the prior art, I/O from the computer system is possible even during data migration, but no consideration is given to changes of identifiers such as volume numbers.
- An object of the present invention is to minimize the influence on the computer system when transferring a volume.
- In the volume migration method of the present invention, at least two volumes forming a volume pair in the migration source storage system (the migration source P-VOL and the migration source S-VOL) are migrated to the migration destination storage system without interruption.
- First, a virtual logical volume (migration destination P-VOL), to which the storage area of the replication source volume (migration source P-VOL) of the volume pair in the migration source storage system is mapped, is created in the migration destination storage system.
- a migration destination S-VOL that is a replication destination volume of the migration destination P-VOL is created.
- The migration destination storage system attaches the same identifiers as the migration source P-VOL and migration source S-VOL to the migration destination P-VOL and migration destination S-VOL, so that the host computer recognizes the migration destination P-VOL and migration destination S-VOL as the same volumes as the migration source P-VOL and migration source S-VOL. Subsequently, the access path from the host computer to the migration source P-VOL is switched to the migration destination P-VOL.
- a volume pair is created between the migration destination P-VOL and the migration destination S-VOL so that a copy of the data of the migration destination P-VOL is stored in the migration destination S-VOL.
- the migration process is completed by moving the storage area of the migration destination P-VOL to the storage area in the migration destination storage system.
- As a result, volume pairs can be migrated between storage devices without stopping host I/O, and even when a volume pair is migrated between storage devices, the identifier of each volume does not change, so there is no need to change settings on the host computer that uses the volumes.
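The migration flow summarized above can be sketched as a small Python model. All class and function names here (`Volume`, `migrate_pair`) and the sample identifier values are hypothetical illustrations of the identifier-preserving step, not the patented implementation.

```python
# Illustrative model of the migration flow described above: the destination
# volumes are created carrying the SAME identifiers (LDEV #, device serial)
# as the migration source volumes, so the host needs no reconfiguration.

class Volume:
    def __init__(self, ldev, serial):
        self.ldev = ldev        # volume identifier (LDEV #)
        self.serial = serial    # serial number of the (virtualized) owning device

def migrate_pair(src_pvol, src_svol):
    """Create the migration destination P-VOL/S-VOL with the source identifiers.
    Steps not modeled here: mapping the source P-VOL's storage area into the
    destination, switching the host path, forming the destination pair, and
    finally moving the data onto the destination's own media."""
    dst_pvol = Volume(src_pvol.ldev, src_pvol.serial)
    dst_svol = Volume(src_svol.ldev, src_svol.serial)
    return dst_pvol, dst_svol

src_p = Volume(ldev=11, serial=64034)   # sample values, purely illustrative
src_s = Volume(ldev=22, serial=64034)
dst_p, dst_s = migrate_pair(src_p, src_s)
print(dst_p.ldev == src_p.ldev and dst_s.ldev == src_s.ldev)  # True
```

Because the identifiers are unchanged, scripts that address the pair by serial number and volume number keep working after the migration.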
- FIG. 1 is a schematic configuration diagram of a computer system according to Embodiment 1 of the present invention.
- FIG. 2 shows the hardware configuration of a storage apparatus.
- A configuration example of the parity group management table is shown.
- A configuration example of the external volume group management table is shown.
- A configuration example of the logical volume management table is shown.
- A configuration example of the LU management table is shown.
- A configuration example of the pair management table is shown.
- An example of defining a virtual storage in the migration destination primary storage apparatus is shown.
- A configuration example of the V-BOX management table is shown.
- A configuration example of the virtual LDEV management table is shown.
- Examples of setting files used by the storage manager 101 are shown.
- An outline of the processing is also shown.
- FIG. 1 is a schematic configuration diagram of a computer system according to Embodiment 1 of the present invention.
- the computer system includes a migration source primary storage device 10a, a migration source secondary storage device 10b, a migration destination primary storage device 20a, a migration destination secondary storage device 20b, a primary server 30a, and a secondary server 30b.
- FIG. 1 shows a configuration during the migration process described below, and the migration destination primary storage device 20a and the migration destination secondary storage device 20b do not exist before the migration process.
- the primary server 30a is connected to the migration source primary storage apparatus 10a via a SAN (Storage Area Network) 50, and accesses the logical volume 130a of the migration source primary storage apparatus 10a.
- the secondary server 30b is connected to the migration source secondary storage apparatus 10b via the SAN 60 and is in a state where the logical volume 130b of the migration source secondary storage apparatus 10b is accessible.
- the migration source primary storage device 10a and the migration source secondary storage device 10b are interconnected by ports (11a, 11b).
- The SAN 50 and the SAN 60 are networks composed of Fibre Channel cables and Fibre Channel switches, but may instead be networks composed of Ethernet.
- the migration destination primary storage device 20a and the migration destination secondary storage device 20b are installed in the computer system. As shown in FIG. 1, the migration destination primary storage device 20a is connected to the primary server 30a via the SAN 50, and is also connected to the migration source primary storage device 10a and the migration destination secondary storage device 20b. The migration destination secondary storage device 20b is connected to the secondary server 30b via the SAN 60.
- the migration source primary storage device 10a and the migration source secondary storage device 10b are collectively referred to as “migration source storage system 10”.
- the migration destination primary storage device 20a and the migration destination secondary storage device 20b are collectively referred to as “migration destination storage system 20”.
- the migration source primary storage device 10a, the migration source secondary storage device 10b, the migration destination primary storage device 20a, and the migration destination secondary storage device 20b are collectively referred to as a “storage device”.
- the primary server 30a and the secondary server 30b are collectively referred to as “host computer”.
- the hardware configuration inside the storage apparatus in the computer system will be described with reference to FIG.
- The hardware configuration of the migration destination primary storage device 20a will be mainly described, but the other storage devices (migration source primary storage device 10a, migration source secondary storage device 10b, migration destination secondary storage device 20b) have similar hardware configurations.
- The migration destination primary storage apparatus 20a is configured by coupling a front-end package (FEPK) 201, a back-end package (BEPK) 202, a processor package (MPPK) 203, and a cache memory package (CMPK) 204 to each other via an interconnection network 205, together with a disk unit (DKU) 210 on which a plurality of drives 221 are mounted.
- The front-end package (FEPK) 201 has a plurality of ports 21 for connecting to host computers such as the primary server 30a and to storage devices (10a, 10b, 20b), and relays control information and data transmitted and received between them and the CMPK 204 or MPPK 203.
- the FEPK 201 has a buffer, a CPU, an internal bus, and an internal port (not shown).
- the buffer is a storage area for temporarily storing control information and data relayed by the FEPK 201, and is configured using various volatile memories and nonvolatile memories, similar to the CMPK 204.
- the internal bus interconnects various components in the FEPK 201.
- the FEPK 201 is connected to the interconnection network 205 via an internal port.
- the number of ports 21 is not limited to the number shown in the figure.
- The back-end package (BEPK) 202 is a component that has a plurality of interfaces (I/F) 2021 for connecting to the drives 221 and relays control information and data transmitted and received between the drives 221 and the CMPK 204 or MPPK 203.
- the BEPK 202 has a buffer, a CPU, an internal bus, and an internal port (not shown).
- the buffer is a storage area for temporarily storing control information and data relayed by the BEPK 202, and is configured using various volatile memories and non-volatile memories in the same manner as the CMPK 204.
- The internal bus interconnects various components in the BEPK 202.
- the MPPK 203 includes a CPU 2031, a memory (LM) 2032, and an internal bus and an internal port (not shown). Similar to the cache memory 2041, the memory (LM) 2032 can be configured using various types of volatile memory and nonvolatile memory.
- the CPU 2031, memory (LM) 2032 and internal port are interconnected via an internal bus.
- The MPPK 203 is connected to the interconnection network 205 via its internal port.
- the MPPK 203 is a component for performing various data processing in the storage apparatus.
- When the CPU 2031 executes programs on the memory (LM) 2032, various functions of the storage apparatus described below, such as the volume copy function, are realized.
- the CMPK 204 is a component that includes a cache memory 2041 and a shared memory 2042.
- the cache memory 2041 is a storage area for temporarily storing data received from the primary server 30a or another storage device, and temporarily storing data read from the drive 221.
- The cache memory 2041 is configured using, for example, a volatile memory such as DRAM or SRAM, or a nonvolatile memory such as a flash memory.
- the shared memory 2042 is a storage area for storing management information related to various data processing in the storage apparatus. Similar to the cache memory 2041, the shared memory 2042 can be configured using various volatile memories and nonvolatile memories. As hardware of the shared memory 2042, hardware common to the cache memory 2041 can be used, or hardware that is not common can be used.
- The interconnection network 205 is a component for interconnecting the other components and transferring control information and data between them.
- the interconnection network can be configured using switches and buses, for example.
- the drive 221 is a storage device for storing data (user data) and redundant data (parity data) used by various programs on the primary server 30a.
- As a storage medium of the drive 221, a nonvolatile semiconductor storage medium such as a NAND flash memory, MRAM, ReRAM, or PRAM can be used in addition to the magnetic storage medium used in HDDs.
- Each storage apparatus has one or more logical volumes (130a, 130b, 230a, 230b) and command devices (135a, 135b, 235a, 235b).
- the logical volumes (130a, 130b, 230a, 230b) are storage devices that store write data from a host computer such as the primary server 30a, and will be described in detail later.
- The command devices (135a, 135b, 235a, 235b) are a kind of logical volume, but are not used for storing write data from the host computer; they are storage devices used for receiving various control commands, such as logical volume copy instructions, from the storage manager 101 on the primary server 30a or the secondary server 30b.
- each storage apparatus has one command device.
- the command device is created when an administrator issues a command device creation instruction command to each storage device using the management terminal.
- Since each storage apparatus has a large number of ports, in the following description identifiers A to R are attached to the ports as shown in the figure in order to distinguish them.
- The migration source primary storage apparatus 10a has ports A to E; ports A and B are used for connection with the primary server 30a, and ports D and E are used for connection with the migration source secondary storage apparatus 10b.
- the port C is used to connect to the migration destination primary storage apparatus 20a during the migration process.
- the migration source secondary storage apparatus 10b has ports F to I, and the ports F and G are used to connect to the migration source primary storage apparatus 10a. Ports H and I are used to connect to the secondary server 30b.
- The migration destination primary storage apparatus 20a has ports J to N; ports J and K are used to connect to the primary server 30a, and ports M and N are used to connect to the migration destination secondary storage apparatus 20b.
- the port L is used to connect to the migration source primary storage apparatus 10a.
- the migration destination secondary storage apparatus 20b has ports O to R, and the ports O and P are used to connect to the migration destination primary storage apparatus 20a. Ports Q and R are used to connect to the secondary server 30b.
- each storage device is not limited to the configuration having only the number of ports described above, and may have more ports than the number described above.
- the primary server 30a and the secondary server 30b are computers having a CPU and a memory (not shown). Further, the primary server 30a and the secondary server 30b are connected by a communication network (not shown). A LAN (Local Area Network) or a WAN (Wide Area Network) is used for the communication network. The CPU executes various programs loaded on the memory. Programs that operate on the primary server 30a and the secondary server 30b include an application program (AP) 100, a storage manager 101, an alternate path software 102, and cluster software 103.
- the application program (AP) 100 is a program such as a database management system, and performs access (read, write) to the logical volume 130a.
- In this computer system, a remote copy function that copies data written to the logical volume 130a to the logical volume 130b is operating, and the data on the logical volume 130a is always replicated to the logical volume 130b.
- the cluster software 103 is a program for performing a so-called disaster recovery operation, and operates on the primary server 30a and the secondary server 30b. For example, when the primary server 30a and the migration source primary storage apparatus 10a are down due to a failure or the like, the cluster software 103 running on the secondary server 30b performs a process of resuming the work performed on the AP 100 of the primary server 30a.
- The alternate path software 102 is a program that manages the access paths (paths) from a host computer such as the primary server 30a to the volumes (logical volumes) of the storage devices; even if one path is disconnected due to a failure, it continues access to the volume using an alternative path so that the volume remains accessible.
- the storage manager 101 is a program for setting and controlling the storage apparatus. Detailed description will be given later.
- The migration destination primary storage apparatus 20a in the embodiment of the present invention manages one or a plurality of drives 221 as one group, and this group is called a parity group (elements 22a, 22b, etc. in FIG. 1 are parity groups).
- Using a so-called RAID technique, the migration destination primary storage apparatus 20a creates redundant data (parity) from data before storing it, and then stores the data and the parity in the drive group constituting the parity group 22.
- The information of the parity group 22 managed by the migration destination primary storage apparatus 20a (for example, information on the drives constituting the parity group) is managed by the parity group management table T100 shown in the figure. This information is stored in the shared memory 2042.
- the parity group management table T100 includes columns of a parity group name T101, a drive name T102, a RAID level T103, a size T104, and a remaining size T105.
- The parity group name T101 stores the identifier of the parity group, and the drive name T102 stores the identifiers of the drives constituting the parity group.
- The RAID level T103 stores the RAID level, which is the redundancy method of the parity group, and the size T104 stores the capacity of the parity group.
- the parity group storage area is used when creating a logical volume.
- the remaining size T105 stores information on the size of the parity group capacity that is not yet used for creating the logical volume.
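The parity group management table and its remaining-size bookkeeping can be sketched as follows. The dictionary layout, field names, and capacity figures are hypothetical illustrations of columns T101-T105, not the actual on-device format.

```python
# Sketch of the parity group management table T100 described above.
# Keys mirror columns T101 (name), T102 (drives), T103 (RAID level),
# T104 (size), T105 (remaining size); all values are illustrative.

parity_groups = {
    "RG A": {"drives": ["HDD0", "HDD1", "HDD2", "HDD3"],
             "raid_level": "RAID5", "size_gb": 1200, "remaining_gb": 1200},
}

def create_ldev(group_name, size_gb):
    """Carve a logical volume out of a parity group's unused capacity,
    updating the remaining size (T105) accordingly."""
    pg = parity_groups[group_name]
    if pg["remaining_gb"] < size_gb:
        raise ValueError("not enough free capacity in parity group")
    start = pg["size_gb"] - pg["remaining_gb"]   # next free offset
    pg["remaining_gb"] -= size_gb
    return {"group": group_name, "start_gb": start, "size_gb": size_gb}

vol = create_ldev("RG A", 200)
print(vol["start_gb"], parity_groups["RG A"]["remaining_gb"])  # 0 1000
```

Each created volume records the parity group name and start address, matching the group name T203 and start address T204 columns of the logical volume management table described next.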
- the migration destination primary storage apparatus 20a has a function of forming one or a plurality of logical volumes (also referred to as LDEVs) using the storage area provided by the parity group. This function is called a volume creation function.
- the migration destination primary storage apparatus 20a manages each logical volume by assigning a unique number in the storage apparatus, which is called a logical volume number (LDEV #).
- Information on the logical volumes is managed by the logical volume management table T200 shown in FIG. 5.
- Each row of the logical volume management table T200 represents information on one logical volume, and LDEV # (T201) stores the logical volume identifier (LDEV #).
- the group name T203 and the start address T204 store information on the parity group associated with the storage area of the logical volume and the start address of the storage area of the parity group.
- Not only the migration destination primary storage apparatus 20a but also the other storage apparatuses have the volume creation function described above.
- The migration destination primary storage apparatus 20a has a function of treating the storage area of a volume of another storage apparatus (such as the migration source primary storage apparatus 10a) as its own storage area and providing that storage area to a host computer such as the primary server 30a. Hereinafter, this function is referred to as the "external storage connection function".
- the storage area managed by the external storage connection function is called an external volume group (24a).
- the information of the external volume group managed by the migration destination primary storage apparatus 20a is managed by an external volume group management table T150 as shown in FIG.
- Each row represents information on one external volume group.
- The identifier of the external volume group is stored in the group name T151, and the WWN (T152), LUN (T153), and size (T154) of the logical volume associated with the external volume group are stored in the remaining columns.
- In the example shown, a logical volume whose WWN (T152) is "50060e8005111112b" and whose LUN (T153) is "0" is associated with the external volume group.
- the operation of associating a logical volume with an external volume group is referred to as “mapping”.
- a plurality of access paths (paths) from the external volume group of the migration destination primary storage apparatus 20a to the logical volume of the external storage can be provided, and the access path can be replaced when a path failure occurs.
- the external volume group 24a is handled in the same manner as the parity group 22a. Therefore, the volume creation function can also create one or more logical volumes using the storage area of the external volume group.
- a row 907 of the logical volume management table T200 in FIG. 5 shows an example of a logical volume (a volume with LDEV # (T201) of “33”) created from the storage area of the external volume group “EG1”.
- the external storage connection function is a function that only the migration destination primary storage apparatus 20a has in the computer system according to the first embodiment of the present invention.
- storage apparatuses other than the migration destination primary storage apparatus 20a may have an external storage connection function.
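The external storage connection function's "mapping" can be sketched as a small lookup structure. The dictionary layout and function names are hypothetical; the WWN/LUN values echo the table example above.

```python
# Sketch of the external volume group management table T150: an external
# volume group is associated with ("mapped" to) a logical volume in another
# storage device, identified by the (WWN, LUN) of that device's LU path.

external_groups = {}

def map_external_volume(group_name, wwn, lun, size_gb):
    """'Mapping': associate another storage device's LU with an external group."""
    external_groups[group_name] = {"wwn": wwn, "lun": lun, "size_gb": size_gb}

def resolve(group_name):
    """Where do reads/writes to this external volume group actually go?"""
    g = external_groups[group_name]
    return (g["wwn"], g["lun"])

# Values taken from the T150 example in the text:
map_external_volume("EG1", "50060e8005111112b", 0, 100)
print(resolve("EG1"))  # ('50060e8005111112b', 0)
```

Since an external volume group is handled like a parity group, a logical volume carved from "EG1" is served, transparently to the host, by the mapped volume in the external storage device.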
- (LU path setting function) The function of attaching a LUN (logical unit number) and a port identifier to a logical volume, so that a host computer such as the primary server 30a can recognize the logical volume, is referred to as the "logical path creation function" or "LU path setting function". The port identifier is a WWN (World Wide Name).
- the LU management table T300 shown in FIG. 6 is a table for managing information on the logical unit number (LUN) and port name associated with each logical volume in the storage apparatus. This table is stored in the shared memory 2042 of the storage device.
- Each row stores the port name and LUN associated with the logical volume specified by LDEV # (T303).
- For example, one entry stores the port name T301 "3e243174 aaaaaaaa", the LUN (T302) "0", and the LDEV # (T303) "11".
- the host computer such as the primary server 30a connected to the SAN 50 recognizes that there is a volume whose LUN is 0 under the port.
- a volume recognized by the host computer (primary server 30a or the like) is referred to as a “logical unit” or “LU”.
- a plurality of (port name, LUN) pairs can be associated with one logical volume.
- The information of the logical volume with LDEV # (T303) 11 is stored in entry (row) 300-5 and also in the entry (row) 300-6 below it.
- In this case, the host computer recognizes that there is an LU with LUN 0 under the port (3e243174 aaaaaaaa), and also recognizes that there is an LU with LUN 0 under the port (3e243174 bbbbbbbb).
- the LU path setting function of the storage apparatus has a function of deleting the LUN and port name associated with the logical volume, contrary to the LU path setting.
- the process of deleting the LUN and port name associated with the logical volume is called LU path deletion process.
- the storage apparatus has a function of copying data in the logical device to another volume, and this function is called a volume copy function.
- The volume copy function includes a function for copying data to a volume in the same storage device as the copy source volume, and a function for copying data to a volume in a storage device different from the one to which the copy source volume belongs (for example, copying data of a volume in the migration source primary storage apparatus 10a to a volume in the migration source secondary storage apparatus 10b). In the embodiment of the present invention, these are called the "local copy function" and the "remote copy function", respectively.
- the local copy function can be realized by using the techniques disclosed in Patent Document 2 and Patent Document 3 mentioned above, for example.
- the remote copy function is a function disclosed in, for example, Patent Document 4 and Patent Document 5.
- The present invention is effective regardless of the type of remote copy function used by the storage device.
- In such a remote copy function, copy data is accumulated as a journal in a dedicated volume (journal volume), and the data stored in the journal volume is transferred to the copy destination storage apparatus.
- (A) P-VOL: the copy source volume of the volume copy function is called a primary volume. It is also expressed as P-VOL.
- (B) S-VOL: the copy destination volume of the data in the P-VOL is called a secondary volume. It is also expressed as S-VOL.
- (C) Pair: the S-VOL in which a copy of the data in the P-VOL is stored is referred to as a volume "in a pair relationship" with the P-VOL. Similarly, the P-VOL storing the copy source data of an S-VOL may be referred to as a volume in a pair relationship with the S-VOL. A set consisting of a P-VOL and the S-VOL in which a copy of the P-VOL's data is stored is called a "pair" or "volume pair".
- (D) Pair creation, pair deletion: using the volume copy function, instructing the storage apparatus to place an LDEV (P-VOL) of one storage device (e.g., the migration destination primary storage device 20a) and an LDEV (S-VOL) of another storage device (e.g., the migration destination secondary storage device 20b) in a pair relationship is called "pair formation" (or "pair creation"). Conversely, the operation of canceling the relationship of volumes in a pair (so that a copy of the P-VOL data is no longer created in the S-VOL) is called "releasing the pair" (or "deleting the pair").
- the storage apparatus receives an instruction such as pair formation or pair deletion from the storage manager 101 or the management terminal.
- The control command instructing pair creation that the storage device receives includes the serial number of the storage device to which the P-VOL belongs, the LDEV # of the P-VOL, the serial number of the storage device to which the S-VOL belongs, and the LDEV # of the S-VOL.
- With these, the storage apparatus uniquely identifies the P-VOL and S-VOL to be paired.
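The four fields of the pair-creation command can be sketched as follows. The dataclass and the sample serial numbers are hypothetical illustrations of the command contents described above, not an actual command format.

```python
# Sketch of the pair-creation control command: the four fields below are
# sufficient to uniquely identify the P-VOL and S-VOL to be paired,
# which is exactly why scripts break if serials or LDEV #s change.

from dataclasses import dataclass

@dataclass(frozen=True)
class PairCreateCommand:
    pvol_serial: int   # serial number of the storage device owning the P-VOL
    pvol_ldev: int     # LDEV # of the P-VOL
    svol_serial: int   # serial number of the storage device owning the S-VOL
    svol_ldev: int     # LDEV # of the S-VOL

cmd = PairCreateCommand(pvol_serial=64034, pvol_ldev=11,
                        svol_serial=64035, svol_ldev=22)
print((cmd.pvol_serial, cmd.pvol_ldev))  # (64034, 11)
```

Because the command addresses volumes by (serial number, LDEV #), preserving both identifiers at the migration destination lets existing pair-operation scripts run unmodified.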
- (E) Pair status: a volume pair takes several states depending on the replication status of the data in the volumes.
- When the storage apparatus receives a pair formation instruction, the operation of copying all the data in the P-VOL to the S-VOL is started.
- The state in which not all of the data in the P-VOL has yet been reflected in the S-VOL is called the "copy pending" state.
- a state in which copying of all data in the P-VOL is completed and the contents of the P-VOL and S-VOL are the same (synchronized) is referred to as a “pair” state.
- In the "pair" state, when write data is written to the P-VOL, a copy of the write data is also written to the S-VOL, so that the P-VOL and S-VOL are controlled to store the same data.
- The state after the pair is released is called the "simplex" state.
- a state where the replication process is temporarily stopped is referred to as a “suspended” state. Further, the replication process can be resumed for the volume pair in the “suspended” state, and the “pair” state can be set again. This process is called “resynchronization”.
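The pair-status transitions described above can be sketched as a small state machine. The states follow the text; the operation names and the transition table itself are illustrative.

```python
# Sketch of the pair-status transitions: simplex -> copy pending -> pair,
# with suspend/resynchronization and pair deletion, as described in the text.

TRANSITIONS = {
    ("simplex", "create_pair"):   "copy pending",  # initial full copy starts
    ("copy pending", "copy_done"): "pair",         # P-VOL and S-VOL synchronized
    ("pair", "suspend"):          "suspended",     # replication temporarily stopped
    ("suspended", "resync"):      "pair",          # resynchronization
    ("pair", "delete_pair"):      "simplex",       # pair released
    ("suspended", "delete_pair"): "simplex",
}

def step(state, op):
    return TRANSITIONS[(state, op)]

s = "simplex"
for op in ["create_pair", "copy_done", "suspend", "resync"]:
    s = step(s, op)
print(s)  # pair
```

The suspend/resync cycle is exactly the backup workflow mentioned earlier: suspend the pair, back up the frozen S-VOL, then resynchronize.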
- the storage device manages each volume pair using the pair management table T2000.
- FIG. 7 shows a configuration example of the pair management table T2000.
- the storage apparatus manages each pair with an identifier, and this identifier is called a pair number (T2001).
- In the pair management table T2000, the P-VOL and S-VOL information of each pair is stored: PDKC # (T2003), the identification number of the storage apparatus to which the P-VOL belongs; P-VOL # (T2004), the LDEV # of the P-VOL; SDKC # (T2005), the identification number of the storage apparatus to which the S-VOL belongs; and S-VOL # (T2006), the LDEV # of the S-VOL.
- the pair management table T2000 is stored in both the storage apparatus to which the P-VOL belongs and the storage apparatus to which the S-VOL belongs. Therefore, when a pair volume is created by the migration destination primary storage device 20a and the migration destination secondary storage device 20b, the pair management table T2000 is stored in both the migration destination primary storage device 20a and the migration destination secondary storage device 20b.
- The volume migration function is a function for moving data stored in one logical volume in the migration destination primary storage apparatus 20a to another logical volume in the same apparatus.
- At this time, identifiers such as the LDEV # are also moved.
- the state of the logical volume management table T200 is the state shown in FIG. The case will be described.
- the migration destination primary storage apparatus 20a is instructed to move the LDEV # 33 data to the LDEV # 44, the migration destination primary storage apparatus 20a performs a process of copying the LDEV # 33 data to the LDEV # 44.
- the logical volume (LDEV # 44) that is the data migration destination of LDEV # 33 is referred to as "target volume” or "migration target volume”.
- the migration destination primary storage apparatus 20a then switches the roles of LDEV # 33 and LDEV # 44. Specifically, the contents of LDEV # (T201) in the logical volume management table T200 are exchanged. That is, in the example of FIG. 5, 44 is stored in the LDEV # (T201) of the row 906 and 33 in the LDEV # (T201) of the row 907; after the volume is moved by the volume migration function, 33 is stored in the LDEV # (T201) of the row 906 and 44 in the LDEV # (T201) of the row 907.
- the correspondence between the LDEV # and the group name and head address that is the data storage destination is swapped between the migration source and the migration destination.
- thus, the LDEV # of the logical volume in which the data is stored (and the LUN or port name associated with the logical volume) does not change, so the data storage location can be moved transparently to the host computer or other higher-level device.
- in this example, data was read from and written to the external volume EG1 before the replication was completed, but is read from and written to the parity group RG A after the replication is completed.
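The identifier swap performed at the end of the volume migration can be sketched as follows (a hypothetical table layout; only the swap of LDEV # between the two rows follows the description above):

```python
# Rows of a simplified logical volume management table T200: each row maps
# an LDEV # to the group (and head address) where its data is stored.
t200 = [
    {"ldev": 33, "group": "EG1",  "head": 0},   # source: external volume
    {"ldev": 44, "group": "RG A", "head": 0},   # migration target volume
]

def migrate_and_swap(rows, src_idx, dst_idx):
    """After the data copy completes, swap only the LDEV # column so the
    host keeps seeing the same identifier while the data location changes."""
    rows[src_idx]["ldev"], rows[dst_idx]["ldev"] = (
        rows[dst_idx]["ldev"], rows[src_idx]["ldev"])

migrate_and_swap(t200, 0, 1)
# LDEV # 33 is now backed by parity group "RG A".
backing = next(r["group"] for r in t200 if r["ldev"] == 33)
```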
- the migration destination primary storage apparatus 20a has a function of defining one or more virtual storage apparatuses 25a (hereinafter referred to as virtual storages), which are distinct from the physical storage apparatus, and of making them appear to the primary server 30a to exist on the SAN 50 in addition to the physical storage apparatus.
- the migration destination primary storage apparatus 20a will be described as an example.
- the function for defining virtual storages is not limited to the migration destination primary storage apparatus 20a; the migration destination secondary storage apparatus 20b also has the functions described below.
- the migration source primary storage apparatus 10a and / or the migration source secondary storage apparatus 10b may or may not have a function of defining a virtual storage.
- when the migration source primary storage apparatus 10a and/or the migration source secondary storage apparatus 10b have the function of defining a virtual storage, and the migration source volumes belong to a virtual storage defined in those apparatuses, the migration source storage apparatus in the following description should be read as the migration source virtual storage, the device serial number of the migration source storage apparatus as the virtual serial number, and the LDEV # as the VLDEV #.
- the virtual storage has a device serial number (S / N) (hereinafter, the device serial number of the virtual storage is referred to as “virtual serial number”), and has a logical volume as a resource in the virtual storage.
- the serial number of the migration source primary storage device 10a is 1, the serial number of the migration destination primary storage device 20a is 2, and the serial number of the defined virtual storage 25a is 1 (that is, the serial number of the migration source primary storage device 10a).
- the migration source primary storage apparatus 10a has a logical volume 130a with an LDEV # of 11
- the virtual storage 25a has a logical volume 230a with an LDEV # of 33.
- the logical volume 230a included in the virtual storage has a virtual identification number different from the LDEV #, and this is referred to as a virtual LDEV # or VLDEV #.
- VLDEV # is No. 11 (that is, the same number as the LDEV # of the logical volume 130a of the migration source primary storage apparatus 10a is assigned).
- when the primary server 30a obtains information on the logical volume 230a by issuing a SCSI INQUIRY command or the like to the migration destination primary storage apparatus 20a, the volume number 11 and the device serial number 1 are returned; that is, the virtual serial number and the VLDEV # are returned to the primary server 30a.
- the primary server 30a acquires information on the logical volume 130a from the migration source primary storage apparatus 10a, the same volume number (No. 11) and the same apparatus serial number (No. 1) as the logical volume 230a are returned.
- in addition to the path 1, an alternate path from the primary server 30a to the logical volume 130a is registered (dotted arrow in the figure, hereinafter referred to as "path 2").
- when the path 1 is deleted and the alternate path software 102 receives an access request to the logical volume 130a from an application program or the like, the alternate path software 102 issues the access request via the path 2 (that is, issues the access request to the logical volume 230a).
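The effect of this virtualization on host-visible identity can be sketched as follows (a hypothetical INQUIRY-style lookup, not the actual SCSI implementation):

```python
def inquiry(volume):
    """Return the identity the host sees: when a volume belongs to a virtual
    storage, the virtual serial number and virtual LDEV # are reported."""
    return {"serial": volume["virtual_serial"], "vldev": volume["vldev"]}

# Migration source volume 130a (real S/N 1, LDEV # 11) and migration
# destination volume 230a (real S/N 2, LDEV # 33, virtualized to S/N 1 / # 11).
vol_130a = {"virtual_serial": 1, "vldev": 11}
vol_230a = {"virtual_serial": 1, "vldev": 11}

# The alternate path software sees the same identity on both paths, so
# path 2 is treated as another path to the same volume.
same_volume = inquiry(vol_130a) == inquiry(vol_230a)
```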
- FIG. 9 shows the configuration of the V-BOX management table T1000 that the migration destination primary storage apparatus 20a has.
- the V-BOX management table T1000 is stored in the shared memory 2042 of the migration destination primary storage apparatus 20a.
- virtual storage is also defined in the migration destination secondary storage apparatus 20b during data migration work. Therefore, the migration destination secondary storage apparatus 20b also has a V-BOX management table T1000.
- the migration destination primary storage apparatus 20a generates management information called V-BOX in the storage apparatus 20 when defining the virtual storage.
- V-BOX is management information for managing information on resources such as logical volumes that should belong to a certain virtual storage.
- One row (for example, R1011 and R1012) in the V-BOX management table T1000 in FIG. 9 corresponds to information representing one V-BOX.
- the V-BOX consists of an ID (T1001) that is the identifier of the V-BOX defined by the migration destination primary storage apparatus 20a, model name information (T1002) of the V-BOX, a virtual serial number (T1003) that is the serial number (S/N) of the V-BOX, and the VLDEV # (T1004) and LDEV # (T1005) assigned to the logical volumes belonging to the V-BOX.
- Each of VLDEV # (T1004) and LDEV # (T1005) stores information on virtual LDEV # and LDEV # of the logical volume belonging to the virtual storage.
- in the initial state of the migration destination primary storage apparatus 20a, one V-BOX in which the device serial number of the migration destination primary storage apparatus 20a is set as the virtual serial number is defined in the migration destination primary storage apparatus 20a.
- this V-BOX is referred to as “virtual storage v0”.
- in the initial state, only the information of the row R1011 is stored in the V-BOX management table T1000, and when logical volumes are created, all the created logical volumes belong to the virtual storage v0.
- a virtual storage is created when the migration destination primary storage apparatus 20a receives, from the storage manager 101 or the management terminal, a command instructing the definition of a virtual storage together with the designation of the model name and virtual serial number of the virtual storage.
- at that point, a virtual storage to which no logical volume belongs is defined (that is, no logical volume information is registered in the T1004 and T1005 columns).
- when the migration destination primary storage apparatus 20a subsequently accepts an instruction to make a logical volume belong to the virtual storage, the information on that logical volume (virtual LDEV # and LDEV #) is registered.
- a virtual LDEV management table T1500 used when registering logical volume information in the V-BOX management table 1000 will be described with reference to FIG.
- the virtual LDEV management table T1500 is a table for managing the correspondence between the LDEV # and VLDEV # of logical volumes, and includes the columns LDEV # (T1501), VLDEV # (T1502), ID (T1503), and attribute (T1504).
- VLDEV # (T1502) stores the virtual LDEV # attached to the logical volume specified by LDEV # (T1501). In the initial state, the same value is stored in LDEV # (T1501) and VLDEV # (T1502) for all logical volumes. However, VLDEV # (T1502) can be changed by an instruction from the outside (such as the storage manager 101), and can also be set to an invalid value (NULL) according to an instruction from the outside.
- T1503 stores an ID that is an identifier of the V-BOX to which the logical volume belongs.
- the attribute (T1504) stores information indicating whether or not the logical volume is registered (reserved) in the virtual storage. When the logical volume is registered (reserved) in the virtual storage, “reserved” is stored in the attribute (T1504).
- the migration destination primary storage apparatus 20a can perform processing for registering a logical volume in the virtual storage.
- suppose the migration destination primary storage apparatus 20a is in a state where three logical volumes have been created. Even in this state, it is allowed to make a logical volume not registered in the logical volume management table T200, for example a logical volume whose LDEV # is 2, belong to the virtual storage.
- the VLDEV # associated with the LDEV # can be changed later, and this work is called “virtualization” in the first embodiment of the present invention.
- for example, when the migration destination primary storage apparatus 20a receives an instruction to assign the number 11 as the virtual LDEV # to the logical volume whose LDEV # is 44, the migration destination primary storage apparatus 20a stores 11 in the VLDEV # (T1502) of the row of the virtual LDEV management table T1500 whose LDEV # (T1501) is 44, and stores 11 in the VLDEV # (T1004) corresponding to the row of the V-BOX management table T1000 whose LDEV # (T1005) is 44.
- the flow of making the logical volume whose LDEV # is 2 belong to the virtual storage v1 (the virtual storage whose ID (T1001) is v1 in the V-BOX management table T1000) and attaching the VLDEV # 4 to it by virtualization is outlined below.
- the logical volume whose LDEV # is n is abbreviated as “LDEV # n”.
- LDEV # 2 belongs to virtual storage v0 and VLDEV # corresponding to LDEV # 2 is 2.
- the migration destination primary storage apparatus 20a sets VLDEV # associated with LDEV # 2 to NULL.
- that is, VLDEV # (T1502) is set to NULL in the row of the virtual LDEV management table T1500 whose LDEV # (T1501) is 2.
- “reserved” is stored in the attribute T1504 of the virtual LDEV management table T1500.
- further, in the row R1011 of the V-BOX management table T1000 whose LDEV # (T1005) is 2, the VLDEV # (T1004) is set to NULL.
- next, the migration destination primary storage apparatus 20a stores 2 in the LDEV # (T1005) column of the row R1012 of the V-BOX management table T1000, and NULL in the corresponding VLDEV # (T1004). "v1" is stored in the ID (T1503) of the row of the virtual LDEV management table T1500 corresponding to LDEV # 2.
- LDEV # 2 is virtualized.
- finally, the migration destination primary storage apparatus 20a stores 4 in the VLDEV # (T1004) corresponding to the row of the V-BOX management table T1000 whose LDEV # (T1005) is 2, and also stores 4 in the VLDEV # (T1502) of the row of the virtual LDEV management table T1500 corresponding to LDEV # 2.
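The three steps above (detach the old VLDEV # and reserve the LDEV, register it in the target V-BOX, then attach the new VLDEV #) can be sketched as follows (table layouts and function names are our assumptions):

```python
# Simplified virtual LDEV management table T1500 and V-BOX table T1000.
t1500 = {2: {"vldev": 2, "id": "v0", "attr": None}}
t1000 = {"v0": {"ldevs": {2: 2}}, "v1": {"ldevs": {}}}  # id -> {ldev: vldev}

def reserve(ldev):
    # Step 1: set the VLDEV # to NULL and mark the LDEV "reserved".
    t1000[t1500[ldev]["id"]]["ldevs"][ldev] = None
    t1500[ldev].update(vldev=None, attr="reserved")

def register(ldev, box_id):
    # Step 2: move the LDEV into the target V-BOX (VLDEV # still NULL).
    del t1000[t1500[ldev]["id"]]["ldevs"][ldev]
    t1000[box_id]["ldevs"][ldev] = None
    t1500[ldev]["id"] = box_id

def virtualize(ldev, vldev):
    # Step 3: attach the new virtual LDEV # in both tables.
    t1000[t1500[ldev]["id"]]["ldevs"][ldev] = vldev
    t1500[ldev]["vldev"] = vldev

reserve(2); register(2, "v1"); virtualize(2, 4)
```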
- for a pair operation on volumes belonging to a virtual storage, virtual identification information is used to identify the P-VOL or S-VOL. That is, a pair operation control instruction containing the virtual serial number and virtual LDEV #, together with information indicating that virtual identifiers are used, is issued to the storage apparatus.
- when the storage apparatus receives the virtual serial number and virtual LDEV # as the identifier of the P-VOL or S-VOL, it refers to the V-BOX management table T1000, converts the virtual LDEV # into the LDEV #, and identifies the logical volume to be processed.
- if the instruction does not include information indicating that virtual identifiers are used and the serial number included in the instruction does not match the device serial number of the storage apparatus that received the instruction, the storage apparatus rejects the command.
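The identifier resolution and rejection rule above can be sketched as follows (a hypothetical helper; the text only specifies the lookup through T1000 and the rejection condition):

```python
def resolve(cmd, device_serial, vbox_table):
    """Map the identifier in a pair operation command to a real LDEV #.

    cmd:        {"serial": ..., "ldev": ..., "virtual": bool}
    vbox_table: virtual serial number -> {virtual LDEV #: LDEV #}
    """
    if cmd["virtual"]:
        # Convert virtual LDEV # to LDEV # via the V-BOX management table.
        return vbox_table[cmd["serial"]][cmd["ldev"]]
    if cmd["serial"] != device_serial:
        raise ValueError("command rejected: serial number mismatch")
    return cmd["ldev"]

vboxes = {1: {11: 33}}  # virtual S/N 1: VLDEV # 11 -> LDEV # 33
ldev = resolve({"serial": 1, "ldev": 11, "virtual": True},
               device_serial=2, vbox_table=vboxes)
```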
- the storage manager 101 is a program for setting and controlling the storage device from the primary server 30a or the secondary server 30b. All the storage apparatuses of the computer system according to the first embodiment of the present invention are controlled by the storage manager 101. Hereinafter, the function of the storage manager 101 and the setting information used by the storage manager 101 will be described.
- the storage manager 101 is a program used when an administrator or a program such as the cluster software 103 performs a management operation of the storage apparatus.
- the “management operation” here is a setting operation such as LU path setting or a pair operation using a remote copy function.
- the storage manager 101 supports several commands for management operations, and a program such as an administrator or cluster software 103 performs management operations of the storage device by issuing commands to the storage manager 101.
- the storage manager 101 processes the received command and creates a control command to be issued to the storage apparatus. Then, the generated control instruction is issued to the command device defined in the storage apparatus.
- the storage apparatus that has received the control command performs a predetermined process (such as LU path setting) according to the content of the received control command. Therefore, one command device is defined for each storage apparatus existing in the computer system according to the first embodiment of the present invention.
- the commands for management operations supported by the storage manager 101 are roughly classified into two types.
- the first type of command is a command for volume pair operations using the volume copy function (local copy function, remote copy function) and the volume migration function (hereinafter referred to as a "pair operation command").
- the second type of command is a command other than the pair operation command, and includes a command for setting the storage device such as LU path setting (hereinafter referred to as “setting command”).
- the storage manager 101 can perform all the setting processes other than the command device creation process among the various storage apparatus setting processes required in the migration process described below.
- for example, a command for creating a volume pair or an LU path setting command.
- the setting command uses LDEV # instead of virtual LDEV # as an identifier for specifying a logical volume.
- a virtual LDEV # is used as an identifier for specifying a logical volume specified by a pair operation command (or a setting file described later).
- the storage manager 101 operates as a resident program (service) of an operating system (OS) that operates on the host computer (primary server 30a or secondary server 30b).
- this resident program is called an “instance”.
- a plurality of instances can be operated on one host computer.
- starting the operation of the instance is referred to as “activating the instance”.
- when each instance is started, a number called the instance number is specified. Each instance reads one setting file at startup according to the specified instance number (each setting file is given a file name that includes an instance number, and the instance reads the file whose name includes the specified instance number).
- each instance identifies the command device to which control commands are issued based on the command device information stored in the setting file. For example, suppose two instances are activated (hereinafter referred to as "instance 0" and "instance 1"), the command device information of the migration source primary storage apparatus 10a is stored in the setting file for instance 0, and the command device information of the migration destination primary storage apparatus 20a is stored in the setting file for instance 1. Then instance 0 can be used to perform management operations of the migration source primary storage apparatus 10a, and instance 1 can be used to perform management operations of the migration destination primary storage apparatus 20a.
- both instances can be used to perform management operations for the same storage device. This is suitable when one instance is used for setting purposes such as LU path setting and the other instance is used for pair operation.
- at least one instance is activated on each of the host computer (primary server 30a) that accesses the P-VOL and the host computer (secondary server 30b) that accesses the S-VOL.
- the setting file 3000-1 is a setting file stored in the primary server 30a
- the setting file 3000-2 is a setting file stored in the secondary server 30b.
- the setting file 3000-1 is a setting file for pair operation of the migration source primary storage apparatus 10a
- the setting file 3000-2 is a setting file for pair operation of the migration source secondary storage apparatus 10b.
- this is an example in which the LDEV # of the P-VOL is 11, the serial number of the storage apparatus to which the P-VOL belongs (the migration source primary storage apparatus 10a) is 1, the LDEV # of the S-VOL is 22, and the serial number of the storage apparatus to which the S-VOL belongs (the migration source secondary storage apparatus 10b) is 11.
- Each setting file 3000-1, 3000-2 stores three types of information.
- the first information is the command device information described above, and the identifier of the command device from which the instance issues the control command is recorded in the field of the command device name 3001.
- the name of the identifier of the command device is uniquely determined for each storage device, and the name is recorded when the setting file is created.
- the second information is volume pair information
- the pair information field 3002 stores volume pair information to be operated by the remote copy function.
- the description format will be described later.
- the third information is the destination host computer information 3003.
- the instance of the primary server 30a acquires S-VOL information from the instance of the secondary server 30b.
- the destination host computer information 3003 is information used for that purpose, and describes the IP address of the destination host computer.
- the description format of the pair information field 3002, which is the second information, will be described.
- the pair information field 3002 stores a group name 3002-1, device serial number 3002-2, and LDEV # (3002-3).
- LDEV # 3002-3 describes the LDEV # of one logical volume among the volume pairs.
- the device serial number 3002-2 stores the serial number (S / N) of the device where the LDEV exists.
- the setting file of the primary server 30a stores the LDEV # of the P-VOL in the migration source (or migration destination) primary storage apparatus (10a or 20a), and the setting file of the secondary server 30b stores the LDEV # of the S-VOL in the migration source (or migration destination) secondary storage apparatus (10b or 20b).
- at least LDEVs in a pair relationship need to be given the same group name.
- here it is described that the logical volume whose LDEV # (3002-3) is 11 and the logical volume whose LDEV # (3002-3) is 22 are in a pair relationship, and that their group name (3002-1) is "devg01".
- a group name is used as information for specifying a volume pair to be operated.
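Putting the three kinds of information together, a setting file of the kind shown in FIG. 11 might look like the following sketch (the field labels and syntax here are illustrative assumptions; the exact file format of the storage manager is not specified in this text):

```
# 3001: command device name (identifier uniquely determined per apparatus)
command_device = CMD.SN1.DEV0

# 3002: pair information -- group name, device serial number, LDEV #
devg01  serial=1  ldev=11

# 3003: destination host computer (IP address of the peer server's instance)
peer = 192.0.2.10
```

The paired LDEVs described in the primary server's and secondary server's files are tied together only by sharing the group name "devg01".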
- the format of a command for instructing pair creation is as follows: paircreate <group name>
- when this command is issued, the instance running on the primary server 30a identifies, based on the contents of its setting file, the LDEV # (3002-3) and device serial number (3002-2) corresponding to the group name specified by the command. Further, by communicating with the instance running on the secondary server 30b, it acquires the LDEV # (3002-3) and device serial number (3002-2) corresponding to the same group name described in the setting file of that instance. As a result, the instance running on the primary server 30a can identify the LDEV # and device serial number of both the P-VOL and the S-VOL. Based on this information, it creates a control instruction to be issued to the command device, and issues the pair operation control instruction to the command device.
- the same group name may be given to a plurality of volume pairs.
- in that case, the pair operation is performed on all of the volume pairs given that group name.
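The group-name resolution performed by the two instances can be sketched as follows (hypothetical data structures; the communication between the instances is reduced to a local lookup):

```python
# Pair information (3002) as parsed from each server's setting file:
# group name -> list of (device serial number, LDEV #).
primary_conf = {"devg01": [(1, 11)]}     # P-VOL side (primary server 30a)
secondary_conf = {"devg01": [(11, 22)]}  # S-VOL side (secondary server 30b)

def paircreate(group):
    """Resolve a group name to (P-VOL, S-VOL) identifier pairs and build
    the control commands to be issued to the command device."""
    pvols = primary_conf[group]
    svols = secondary_conf[group]  # in reality fetched from the remote instance
    return [{"op": "paircreate", "pvol": p, "svol": s}
            for p, s in zip(pvols, svols)]

commands = paircreate("devg01")
```

With several pairs sharing one group name, the same call would emit one control command per pair, matching the multi-pair behavior described above.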
- This setting file was created by the administrator of the computer system.
- in the LDEV # (3002-3) and device serial number (3002-2) fields, the LDEV # of the P-VOL or S-VOL in the migration source storage system 10 and the device serial number of the migration source storage system 10 are stored.
- in a conventional storage system, when the volumes are migrated to the migration destination storage system, the device serial number and LDEV # also change. The administrator therefore needs to rewrite this setting file; that is, the device serial number and LDEV # must be rewritten to those of the migration destination storage system. This work is time-consuming, especially when the number of volumes is large.
- this is an example in which virtual storages are defined in the migration destination storage system, the virtual LDEV # of the P-VOL is 11, the virtual serial number of the virtual storage to which the P-VOL belongs is 1, the virtual LDEV # of the S-VOL is 22, and the virtual serial number of the virtual storage to which the S-VOL belongs is 11.
- that is, the virtual serial numbers assigned to the migration destination P-VOL and the migration destination S-VOL are the same as the serial numbers of the migration source primary storage apparatus 10a and the migration source secondary storage apparatus 10b, respectively, and the virtual LDEV #s attached to the migration destination P-VOL and the migration destination S-VOL are equal to the LDEV # of the P-VOL of the migration source primary storage apparatus 10a (the migration source P-VOL) and the LDEV # of the S-VOL of the migration source secondary storage apparatus 10b (the migration source S-VOL), respectively.
- when a storage apparatus in which a virtual storage is defined receives a pair operation instruction for a volume pair belonging to the virtual storage, it operates by receiving the virtual serial number and virtual LDEV # information as the identifiers of the P-VOL and S-VOL. Therefore, as the volume pair information described in the setting file, the virtual serial number and virtual LDEV # are described as the information specifying the P-VOL and S-VOL.
- a file 3000-1 ' is a setting file for the migration destination primary storage apparatus 20a
- a file 3000-2' is a setting file for the migration destination secondary storage apparatus 20b.
- the difference from the setting file of FIG. 11 will be described using the file 3000-1 'as an example.
- a block 3001-1 ' is an identifier of a command device to which the storage manager 101 issues a command.
- the points other than the block 3001 are the same as in the setting file of FIG. 11. Therefore, when the migration destination storage system according to the embodiment of the present invention is used, there is no need to change the setting file when a volume is migrated from the migration source storage system to the migration destination storage system, and the administrator's workload can be reduced.
- another example of the setting file is shown in FIG. 13.
- the setting file 3000-1 'shown in FIG. 13 has only a command device name 3001 field.
- an instance used only for setting purposes such as LU path setting is activated, only the field of the command device name 3001 needs to be defined in this way.
- FIG. 14 is a diagram illustrating an overview of the migration process in the computer system according to Embodiment 1 of the present invention.
- the migration source primary storage device 10a and the migration source secondary storage device 10b have a remote copy function. With the remote copy function, the logical volume 130a of the migration source primary storage apparatus 10a and the logical volume 130b of the migration source secondary storage apparatus 10b are in a pair relationship, and the logical volume 130a is P-VOL and the logical volume 130b is S-VOL. It is in.
- the data written from the primary server 30a to the logical volume 130a of the migration source primary storage apparatus 10a is copied to the logical volume 130b of the migration source secondary storage apparatus 10b, so that a replica of the data of the logical volume 130a is always stored in the logical volume 130b ("pair" state).
- the logical volume 130a is referred to as “migration source P-VOL”
- the logical volume 130b is referred to as “migration source S-VOL”.
- the logical volume 130a and the logical volume 130b in the pair state are migrated to the migration destination storage system 20 while maintaining the pair state.
- the virtual storage 25a is created in the migration destination primary storage apparatus 20a.
- the logical volume 230a and the logical volume 231a that are the migration destination primary volume are created in the virtual storage 25a.
- a virtual storage 25b is created, and a logical volume 230b that becomes a migration destination secondary volume is created in the virtual storage 25b.
- a virtual LDEV # having the same number as the LDEV # of the migration source P-VOL is assigned to the logical volume 230a
- a virtual LDEV # having the same number as the LDEV # of the migration source S-VOL is assigned to the logical volume 230b.
- the logical volume 231a is a logical volume created using the parity group 22a (the parity group 22a described in FIG. 1) in the migration destination primary storage apparatus 20a as a storage area.
- the logical volume 230a and the logical volume 231a are created as volumes of the same size as the logical volume 130a.
- the logical volume 230a has an entity (storage area in which data is stored) in an external storage apparatus (migration source primary storage apparatus 10a).
- next, the settings are changed so that the primary server 30a accesses the logical volume 230a that is the migration destination P-VOL, not the logical volume 130a (the migration source P-VOL). Since the logical volume 230a is mapped to the logical volume 130a, when the primary server 30a writes data to the logical volume 230a, the data is transferred from the migration destination primary storage apparatus 20a to the migration source primary storage apparatus 10a and written to the logical volume 130a.
- likewise, when the primary server 30a issues a read request to the logical volume 230a, the data is read from the logical volume 130a, transferred from the migration source primary storage apparatus 10a to the migration destination primary storage apparatus 20a, and returned from the migration destination primary storage apparatus 20a to the primary server 30a.
- because the same volume number (virtual volume number) as that of the logical volume 130a is assigned to the logical volume 230a, even though the access path of the primary server 30a is changed to the migration destination primary storage apparatus 20a, the primary server 30a recognizes the logical volume 230a as the same volume as the logical volume 130a.
- volume copy is started between the logical volume 230a and the logical volume 230b by the remote copy function of the migration destination storage system 20.
- data is sequentially read from the logical volume 230a (actually the logical volume 130a) and copied to the logical volume 230b.
- the primary server 30a can issue an I / O request to the logical volume 230a.
- write data is written to the logical volume 230a (actually the logical volume 130a) and a copy of the write data is also stored in the logical volume 230b.
- next, the access destination of the secondary server 30b is switched: the setting is changed so that the secondary server 30b accesses the logical volume 230b instead of the logical volume 130b.
- thereafter, even if the primary server 30a and the migration destination primary storage apparatus 20a stop due to a failure, it is possible to continue business using the secondary server 30b and the migration destination secondary storage apparatus 20b.
- at this point, however, the data of the logical volume 230a does not yet exist in the migration destination primary storage apparatus 20a.
- the migration destination primary storage apparatus 20a performs a process of migrating the contents of the logical volume 230a to the logical volume 231a by using the volume migration function.
- the migration destination primary storage apparatus 20a replicates the contents of the logical volume 230a to the logical volume 231a. At this time, since the actual data of the logical volume 230a is in the logical volume 130a, the data is actually copied from the logical volume 130a to the storage area associated with the logical volume 231a.
- the migration destination primary storage apparatus 20a switches the roles of the logical volume 230a and the logical volume 231a. As a result, the logical volume 231a becomes a volume pair of the logical volume 230b, and the volume number of the logical volume 231a is changed to the volume number previously assigned to the logical volume 230a.
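The overall flow described in this section can be condensed into the following sketch (the step names are ours; each string stands for the corresponding operation described above, in order):

```python
def migration_plan():
    """Outline of migrating a remote copy pair (130a/130b) to the
    migration destination storage system 20 while keeping the pair state."""
    return [
        "define virtual storages 25a/25b with the migration source serial numbers",
        "create volumes 230a/231a and 230b in the virtual storages",
        "assign the migration source LDEV #s as virtual LDEV #s",
        "map migration source P-VOL 130a to 230a (external storage connection)",
        "switch primary server 30a access path to 230a",
        "remote copy 230a -> 230b until the pair state, then switch "
        "secondary server 30b access path to 230b",
        "volume migration: copy 230a (actually 130a) to 231a and swap roles",
    ]

plan = migration_plan()
```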
- FIG. 15 shows a state after the roles of the logical volume 230a and the logical volume 231a are switched by the volume migration function.
- the storage manager 101 investigates the configuration of the migration source storage system (S10).
- in this investigation, mainly the following information on the migration source storage system is obtained: the serial number (S/N) of each storage apparatus and the LDEV # and size of each logical volume.
- to investigate such information, the storage manager 101 issues a control command for acquiring configuration information to the migration source primary storage apparatus 10a and the migration source secondary storage apparatus 10b.
- the storage apparatus receives a control command for acquiring configuration information, it returns information such as the serial number of the apparatus and the LDEV # and size of the logical volume defined in the storage apparatus.
- the content of the logical volume management table T200 is returned as information on the logical volume defined in the storage device.
- from this, the administrator can determine that any LDEV # not recorded in LDEV # (T201) is an "unused LDEV #".
- the LDEV # of the P-VOL (referred to as the migration source P-VOL) of the migration source primary storage apparatus 10a is No. 11, and the size is 100 GB.
- the LDEV # of the S-VOL (referred to as the migration source S-VOL) of the migration source secondary storage apparatus 10b is No. 22 and the size is 100 GB.
- the serial number of the migration source primary storage device 10a is 1
- the serial number of the migration source secondary storage device 10b is 11
- the serial number of the migration destination primary storage device 20a is 2
- the serial number of the migration destination secondary storage device 20b is 22
- further, as a result of investigating the LDEV #s that are not used in the migration source primary storage apparatus 10a and the migration source secondary storage apparatus 10b, the logical volumes to be used as the migration destination P-VOL, the migration destination S-VOL, and the target volume of the volume migration function are selected.
- the LDEV # of the logical volume selected as the P-VOL (referred to as the migration destination P-VOL) of the migration destination primary storage apparatus 20a is No. 33.
- the LDEV # of the logical volume selected as the volume migration function target volume in the migration destination primary storage apparatus 20a is No. 44, and its virtual LDEV # is No. 99.
- the LDEV # of the logical volume selected as the S-VOL (referred to as the migration destination S-VOL) of the migration destination secondary storage apparatus 20b is No. 55
- the instance numbers of the storage manager 101 instances operating on the primary server 30a and the secondary server 30b, and the setting file used by each instance, are identified in advance.
- the instance numbers of the instances running on the primary server 30a and the secondary server 30b are 100 and 101, respectively.
- the case where the contents of the setting file are those shown in FIG. 11 will be described.
- the migration destination storage system is installed, and the migration destination storage system is connected to the host computer and other storage devices with Fibre Channel cables (S20).
- A transmission line, such as a Fibre Channel cable, that connects a storage device and a host computer or two storage devices is called a “physical path,” and the work of connecting them with a physical path is called “physical path connection.”
- (A) is a physical path for exchanging an I / O request between the primary server 30a and the migration destination primary storage apparatus 20a and an instruction to the command device of the migration destination primary storage apparatus 20a.
- (B) is a physical path for exchanging an I/O request between the secondary server 30b and the migration destination secondary storage apparatus 20b and an instruction to the command device of the migration destination secondary storage apparatus 20b.
- (C) is a physical path used for data migration; it is used to map the migration target volume (P-VOL) of the migration source primary storage apparatus 10a to the migration destination primary storage apparatus 20a by the external storage connection function.
- Only one physical path (c) has been described, but a configuration in which a plurality of physical paths are connected can also be employed.
- (D) is a physical path used for data transmission by the remote copy function in the migration destination storage system.
- the storage manager 101 performs LU path setting for the logical volume of the migration source primary storage apparatus 10a.
- the logical volume for which the LU path is set is a P-VOL to be migrated.
- The LU path setting performed here enables the migration destination primary storage apparatus 20a to recognize the migration target volume (P-VOL) of the migration source primary storage apparatus 10a by the external storage connection function. Therefore, the LU path is set for port C of the migration source primary storage apparatus 10a, which was connected to the migration destination primary storage apparatus 20a in S20. A LUN must be specified when setting the LU path, but any LUN may be specified.
- When the storage manager 101 of the primary server 30a issues an LU path setting instruction to the migration source primary storage apparatus 10a, the migration source primary storage apparatus 10a performs LU path setting for the P-VOL in accordance with the instruction. As another embodiment, the LU path setting may be performed from the management terminal.
- Next, the setting of the migration destination primary storage apparatus 20a is performed.
- the management terminal creates a command device.
- the instance of the storage manager 101 is activated on the primary server 30a.
- the setting file read by the instance activated at this time may be a file as shown in FIG. 13, that is, a setting file in which at least command device identifier information is recorded.
- As the instance number of the instance activated here, an instance number different from the instance numbers (100 and 101) of the instances already operating on the primary server 30a and the secondary server 30b is used.
- Each subsequent operation is performed from the storage manager 101.
- As another embodiment, a management terminal may be used.
- the attribute of the port (port L) connected to the migration source primary storage apparatus 10a among the ports of the migration destination primary storage apparatus 20a using the activated instance is set as the attribute for the external storage connection function.
- the attributes of the ports (port M, port N) connected to the migration destination secondary storage apparatus 20b are changed to attributes for the remote copy function.
- The attribute of one of the ports M and N is changed to the attribute of a data transmission port from the migration destination primary storage apparatus 20a to the migration destination secondary storage apparatus 20b, and the attribute of the remaining port is changed to the attribute of a data reception port from the migration destination secondary storage apparatus 20b to the migration destination primary storage apparatus 20a.
- the attributes of the ports (port O, port P) connected to the migration destination primary storage apparatus 20a are changed to attributes for the remote copy function.
- the storage manager 101 creates a virtual storage in the migration destination primary storage device 20a.
- The virtual storage creation control command issued by the storage manager 101 to the migration destination primary storage apparatus 20a when creating the virtual storage includes the identifier, serial number, and model name of the virtual storage to be created.
- As the virtual storage identifier included in the control command issued in S40c, an identifier to be stored in the ID (T1001) of the V-BOX management table T1000 that differs from any already-defined virtual storage identifier (for example, the identifier v0 defined in the initial state, as described above) is designated.
- As the serial number of the virtual storage included in the control command issued in S40c, the serial number of the migration source primary storage 10a is designated, and as the virtual storage model name, the model name of the migration source primary storage 10a is designated.
- the storage manager 101 issues a control command for creating virtual storage to the migration destination primary storage apparatus 20a together with these pieces of information.
- The migration destination primary storage apparatus 20a that has received the control command for creating the virtual storage stores the received virtual storage identifier, model name, and serial number in the ID (T1001), model name (T1002), and S/N (T1003) columns of the V-BOX management table T1000.
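The table update above can be sketched as follows. This is a hedged illustration only: the dict layout mirrors the named columns (ID T1001, model name T1002, S/N T1003), while the function name and the model name "ModelX" are assumptions for the example.

```python
# Hypothetical sketch of recording a "create virtual storage" control
# command in the V-BOX management table T1000.
v_box_table = []

def create_virtual_storage(vbox_id, model_name, serial):
    # Store the received identifier, model name, and serial number.
    v_box_table.append({"ID": vbox_id, "Model": model_name, "S/N": serial})

# The virtual storage takes over the migration source primary storage's
# serial number (1) so the host later sees the same apparatus identity.
create_virtual_storage("v1", "ModelX", 1)
```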
- the storage manager 101 issues an instruction to delete the virtual LDEV # attached to the volume used as the migration destination volume to the migration destination primary storage apparatus 20a.
- Since the LDEV #s of the migration destination volumes (migration destination P-VOL and target volume) prepared in the migration destination primary storage apparatus 20a are Nos. 33 and 44, this case will be described.
- When the migration destination primary storage apparatus 20a receives the control command for deleting the virtual LDEV #, it invalidates (stores a NULL value in) the contents of VLDEV # (T1502) in the rows of the virtual LDEV management table whose LDEV # (T1501) is No. 44 or No. 33.
- the storage manager 101 issues an instruction to register the LDEV # of the LDEV from which the virtual LDEV # has been deleted in the virtual storage.
- the migration destination primary storage apparatus 20a registers LDEV # in the LDEV # (T1005) of the V-BOX management table T1000, and stores “reserved” in the attribute T1504 of the virtual LDEV management table T1500.
- the migration destination primary storage apparatus 20a recognizes that the LDEV belongs to the virtual storage, and does not use the LDEV for other purposes.
- the storage manager 101 virtualizes the LDEV registered in the virtual storage, that is, issues an instruction to attach a virtual LDEV # to the LDEV registered in the virtual storage.
- 99 is used for the virtual LDEV # of the target volume.
- A control command to attach virtual LDEV # 11 to the logical volume with LDEV # 33 is issued, and a control command to attach virtual LDEV # 99 to the logical volume with LDEV # 44 is issued.
- In the V-BOX management table T1000, 11 is stored in VLDEV # (T1004) of the row whose LDEV # (T1005) is 33, and 99 is stored in VLDEV # (T1004) of the row whose LDEV # (T1005) is 44.
- As a result, the logical volume with LDEV # 33 is assigned virtual LDEV # 11, and the logical volume with LDEV # 44 is assigned virtual LDEV # 99.
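The three virtualization steps above (delete the default virtual LDEV #, register the LDEV in the virtual storage, attach the takeover virtual LDEV #) can be modeled as below. The dict structures and function names are assumptions; real table structures differ.

```python
# Toy model of S40d/S40e on the migration destination primary storage.
virtual_ldev_table = {33: {"VLDEV#": 33, "attr": None},
                      44: {"VLDEV#": 44, "attr": None}}
vbox_ldevs = {}  # LDEV# -> VLDEV# rows of the V-BOX management table

def delete_virtual_ldev(ldev):
    virtual_ldev_table[ldev]["VLDEV#"] = None   # store a NULL value

def register_to_vbox(ldev):
    vbox_ldevs[ldev] = None                     # reserved for the V-BOX
    virtual_ldev_table[ldev]["attr"] = "reserved"

def attach_virtual_ldev(ldev, vldev):
    vbox_ldevs[ldev] = vldev                    # takeover identity

for ldev, vldev in ((33, 11), (44, 99)):
    delete_virtual_ldev(ldev)
    register_to_vbox(ldev)
    attach_virtual_ldev(ldev, vldev)
```

After the loop, LDEV # 33 answers as virtual LDEV # 11 and LDEV # 44 as virtual LDEV # 99, matching the text.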
- the migration source volume (P-VOL) of the migration source primary storage apparatus 10a is mapped to the migration destination primary storage apparatus 20a by the external storage connection function.
- the storage manager 101 issues a control command for registering the migration source volume in the external volume group of the migration destination primary storage apparatus 20a.
- the migration destination primary storage apparatus 20a receives this command, it registers information of the migration source volume in the external volume group management table T150.
- the migration destination primary storage apparatus 20a acquires the size information of the migration source volume from the migration source primary storage apparatus 10a and stores it in the size T154.
- FIG. 4 shows an example of the external volume group management table T150 after mapping the migration source P-VOL.
- the migration source P-VOL is mapped to the external volume group whose external volume group name (T151) is EG1. Since the size of the migration source P-VOL is 100 GB, “100 GB” is stored in the size T154.
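The registration in the external volume group management table T150 can be sketched as follows; the function name and record layout are assumptions, while the group name EG1, the serial number 1, LDEV # 11, and the 100 GB size come from the text.

```python
# Hypothetical sketch of mapping the migration source P-VOL into an
# external volume group with the external storage connection function.
external_volume_groups = {}

def map_external_volume(group_name, source_serial, source_ldev, size_gb):
    # The size (T154) is acquired from the migration source apparatus.
    external_volume_groups[group_name] = {
        "source": (source_serial, source_ldev),
        "size_gb": size_gb,
    }

# Migration source P-VOL: apparatus serial 1, LDEV #11, 100 GB -> EG1.
map_external_volume("EG1", 1, 11, 100)
```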
- the storage manager 101 issues a control command for creating a logical volume (migration destination P-VOL) from the storage area of the external volume group.
- Upon receiving this command, the migration destination primary storage apparatus 20a registers a row in the logical volume management table T200 with 33 in LDEV # (T201), 100 GB in size T202 (that is, a migration destination P-VOL using the entire area of the migration source P-VOL is created), “EG1” in the group name T203, and 0 in the head address T204.
- a target volume used for the volume migration function is created.
- the size of the target volume created here is the same size as the migration destination P-VOL, and the storage area of the parity group in the migration destination primary storage apparatus 20a is used as the storage area.
- any parity group in which the remaining size of the parity group (remaining size T105 of the parity group management table T100) is equal to or larger than the size of the migration destination P-VOL may be selected.
- the storage manager 101 issues a control command for allocating the storage area of the parity group to the logical volume whose LDEV # is 44.
- the migration destination primary storage apparatus 20a registers information on the storage area (parity group) allocated to the LDEV # and LDEV in the logical volume management table T200.
- FIG. 5 shows an example of the contents of the logical volume management table T200 after assignment. In FIG. 5, 100 GB of the area of the parity group RG A is allocated to the LDEV # 44 (line 906), and 100 GB of the area of the external volume group EG1 is allocated to the LDEV # 33 (line 907). It is shown.
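The two rows of FIG. 5 can be reconstructed as data for illustration; the column names are paraphrases of T201–T204 and the helper function is an assumption.

```python
# Hypothetical reconstruction of logical volume management table T200
# after S40f/S40g: the migration destination P-VOL (LDEV #33) is backed
# by external volume group EG1, the target volume (LDEV #44) by parity
# group "RG A".
t200 = [
    {"LDEV#": 44, "size_gb": 100, "group": "RG A", "head_addr": 0},  # line 906
    {"LDEV#": 33, "size_gb": 100, "group": "EG1",  "head_addr": 0},  # line 907
]

def volumes_backed_by(group):
    return [row["LDEV#"] for row in t200 if row["group"] == group]
```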
- an LU path is set for the migration destination volume.
- the storage manager 101 issues a control command to assign a port name and LUN to the LDEV # 33.
- Upon receiving this command, the migration destination primary storage apparatus 20a registers the information in the LU management table T300. For example, when an instruction to assign port name 3e243174aaaaaa and LUN 0 to LDEV # 33 is received, the migration destination primary storage apparatus 20a registers “33” in the LDEV # (T304) column of the row whose port number T301 is “3e243174aaaaaa” and whose LUN (T303) is 0.
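The LU path registration in S40h can be sketched as a lookup-and-update on the LU management table; the port name is the one quoted in the text, and the function name and row layout are assumptions.

```python
# Hypothetical sketch of LU path setting against LU management table T300.
lu_table = [
    {"port": "3e243174aaaaaa", "LUN": 0, "LDEV#": None},
]

def set_lu_path(port, lun, ldev):
    for row in lu_table:
        if row["port"] == port and row["LUN"] == lun:
            row["LDEV#"] = ldev     # register the LDEV # in column T304
            return True
    return False                    # no matching (port, LUN) row

set_lu_path("3e243174aaaaaa", 0, 33)
```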
- For the migration destination secondary storage apparatus 20b, command device creation (an operation equivalent to S40a), virtual storage creation (equivalent to S40c), deletion of the virtual LDEV # attached to the volume used as the migration destination volume (S-VOL) and registration of the migration destination volume in the virtual storage (equivalent to S40d), virtualization of the migration destination volume (equivalent to S40e), creation of the migration destination volume (equivalent to S40f), and LU path setting for the migration destination volume (equivalent to S40h) are performed.
- The command device creation work is performed from the management terminal; after the command device is created, the secondary server 30b activates an instance of the storage manager 101, and the subsequent work is performed using the activated instance, in the same manner as S40.
- an alternate path from the primary server 30a to the migration destination primary storage apparatus 20a is added.
- The primary server 30a executes commands provided by its operating system and the alternate path software 102, so that the operating system and the alternate path software 102 recognize the migration destination P-VOL created in the migration destination primary storage apparatus 20a by the operation of S40.
- When the migration destination P-VOL is recognized by the alternate path software 102, the migration source P-VOL of the migration source primary storage apparatus 10a and the migration destination P-VOL of the migration destination primary storage apparatus 20a present the same attribute information (device serial number, LDEV #) to the primary server 30a, so the alternate path software 102 recognizes both volumes as the same volume, and an alternate path is constructed.
- Next, the setting of the alternate path software 102 is changed so that the path from the primary server 30a to the migration source volume of the migration source primary storage apparatus 10a is invalidated. This is performed by executing a command provided by the alternate path software 102. Since the alternate path software 102 recognizes that the migration source P-VOL and the migration destination P-VOL are the same volume, when the alternate path is deleted, an I/O request that a program running on the primary server 30a issues to the migration source P-VOL is issued to the migration destination P-VOL instead.
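Why this path switch is transparent can be shown with a toy model: alternate path software generally groups two paths as routes to one device when the device identity reported on each path matches. The field names below are assumptions for illustration.

```python
# Toy model of the same-volume decision made by alternate path software.
def same_volume(a, b):
    return (a["serial"], a["ldev"]) == (b["serial"], b["ldev"])

old_path = {"serial": 1, "ldev": 11, "target": "migration source 10a"}
# The migration destination P-VOL answers with the *virtualized*
# identity (serial 1, virtual LDEV #11), i.e. the source identity.
new_path = {"serial": 1, "ldev": 11, "target": "migration destination 20a"}

grouped = same_volume(old_path, new_path)  # paths grouped as one device
active_paths = [new_path]                  # after invalidating the old path
```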
- the migration source primary storage apparatus 10a deletes the LU path of the migration source P-VOL. This operation is performed by issuing an LU path deletion instruction to the migration source primary storage apparatus 10a from the storage manager 101 or the management terminal.
- Next, a pair is created in the migration destination storage system 20 between the migration destination P-VOL and the migration destination S-VOL. Based on the data of the setting file (the example shown in FIG. 11) used in the primary server 30a and the secondary server 30b for the volume pair operation of the migration source P-VOL and the migration source S-VOL, a setting file for issuing control commands to the virtual storage is created. Then, instances of the storage manager 101 of the primary server 30a and the secondary server 30b are started using the created setting files. Note that, as the instance numbers of the instances activated here, instance numbers different from the instance numbers (100 and 101) of the instances already operating on the primary server 30a and the secondary server 30b are used. Thereafter, when the storage manager 101 of the primary server 30a issues a pair creation control command, the migration destination storage system 20 starts data replication between the migration destination P-VOL and the migration destination S-VOL.
- When the primary server 30a writes data to the migration destination P-VOL (LDEV # 33, virtual LDEV # 11) during this replication, the data is written to the migration source P-VOL (LDEV # 11) of the migration source primary storage apparatus 10a by the external storage connection function, and the written data is also transferred to the migration source S-VOL (LDEV # 22).
- the instance of the storage manager 101 that has been started for the migration source storage system is stopped, and the setting file of the storage manager 101 is rewritten.
- the rewriting of the setting file is the same as that performed in S70.
- the instance activation using the setting file is performed in the primary server 30a and the secondary server 30b (S90).
- As the instance numbers of the instances activated here, the same instance numbers (100 and 101) as those of the instances that were running on the primary server 30a and the secondary server 30b before the migration process are used.
- the path from the secondary server 30b to the migration source volume of the migration source secondary storage apparatus 10b is invalidated, and the LU path of the migration source volume of the migration source secondary storage apparatus 10b is deleted.
- the same operation as that performed on the migration source P-VOL of the migration source primary storage apparatus 10a is performed on the migration source S-VOL of the migration source secondary storage apparatus 10b.
- LU path setting from the secondary server 30b to the migration destination S-VOL of the migration destination secondary storage apparatus 20b is performed, and the secondary server 30b is made to recognize the migration destination S-VOL.
- the operation of the disaster recovery software such as the cluster software 103 on the primary server 30a and the secondary server 30b that has been stopped in S80 is resumed.
- an operation of migrating the migration destination P-VOL to the target volume is performed.
- an instance (referred to as a second instance) different from the instance of the storage manager 101 activated in S90 is activated on the primary server 30a, and the volume migration instruction to the migration destination primary storage device 20a is the second instance. Done with.
- The migration destination primary storage apparatus 20a that has received the instruction migrates the migration destination P-VOL to the target volume using the volume migration function.
- While waiting for the migration to complete, that is, before the data of the migration destination P-VOL has been completely replicated to the target volume, when the primary server 30a writes data to the migration destination P-VOL (LDEV # 33, virtual LDEV # 11), the data is written to the migration source P-VOL (LDEV # 11) of the migration source primary storage apparatus 10a by the external storage connection function, and the written data is written to the migration destination S-VOL (LDEV # 55, virtual LDEV # 22). Therefore, a redundant configuration is maintained between the migration source P-VOL and the migration destination S-VOL even during data migration between the migration destination P-VOL and the target volume.
- the volume status can be confirmed by issuing a control command for acquiring the migration status from the storage manager 101 to the migration destination primary storage apparatus 20a.
- When the migration is completed, all data of the migration source storage system is stored in the migration destination storage system. In addition, the correspondence between the LDEV # and the data storage destination storage area is switched between the migration source and the migration destination by the volume migration function. Therefore, when the migration is completed, access processing is executed on the logical volume associated with the storage area of the parity group 22a in the migration destination primary storage apparatus 20a in response to an access request from the primary server 30a specifying virtual LDEV # 11. Since the primary server 30a and the secondary server 30b no longer access the migration source storage system (because the paths have been deleted), the administrator removes the migration source storage system (S120). This completes the migration process. By the procedure described so far, the P-VOL and S-VOL volume pair is migrated from the migration source storage system 10 to the migration destination storage system 20 while maintaining the redundant configuration, without stopping the reception of access (read and write requests, etc.) from the primary server 30a.
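The identity switch performed at migration completion can be modeled as a swap of the mapping between LDEV # and backing storage area, so the host keeps using virtual LDEV # 11 unchanged. The structures below are assumptions for illustration.

```python
# Toy model of the volume migration function's completion step (S110):
# exchange the backing storage areas of LDEV #33 and LDEV #44.
backing = {33: "external EG1 (migration source P-VOL)",
           44: "parity group 22a"}
virtual_to_ldev = {11: 33, 99: 44}   # host-visible identity is unchanged

def complete_volume_migration(src_ldev, tgt_ldev):
    backing[src_ldev], backing[tgt_ldev] = backing[tgt_ldev], backing[src_ldev]

complete_volume_migration(33, 44)
# An access request specifying virtual LDEV #11 is now served from the
# parity group 22a area inside the migration destination apparatus.
served_from = backing[virtual_to_ldev[11]]
```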
- In Modification 1, the volume migration instruction of S110 is issued to the migration destination primary storage apparatus 20a without waiting for the pair creation to complete, and the migration destination P-VOL and the target volume are switched by the volume migration function when the migration completes.
- That is, replication from the migration source P-VOL to the migration destination S-VOL by the remote copy function and data migration (replication) from the migration source P-VOL to the target volume are performed in parallel, so the time required for data migration can be shortened compared to the data migration processing according to the first embodiment.
- FIG. 19 is a configuration diagram of a computer system according to the second modification of the present invention.
- Like the computer system according to the first embodiment, the computer system according to the second modification comprises the migration source primary storage device 10a, the migration source secondary storage device 10b, the migration destination primary storage device 20a, the migration destination secondary storage device 20b, the primary server 30a, and the secondary server 30b, and the hardware configuration of each storage device is the same as that described in the first embodiment.
- the difference from the computer system according to the first embodiment is that a remote copy function using a journal is used for the remote copy function.
- The journal is a storage area used to temporarily store the replicated data of the P-VOL transmitted from the storage apparatus having the P-VOL to the storage apparatus having the S-VOL.
- the migration source primary storage device 10a and the migration source secondary storage device 10b are provided with a journal 13a and a journal 13b, respectively.
- the migration destination primary storage device 20a and the migration destination secondary storage device 20b are also provided with a journal 23a and a journal 23b, respectively.
- Journals are assigned journal IDs, which are unique identifiers within the storage device.
- journal volumes 133a, 133b, 233a, and 233b are registered in the journals 13a, 13b, 23a, and 23b, respectively.
- Hereinafter, the journal may be abbreviated as “JNL” and the journal volume as “JVOL”.
- the journal volumes are the same logical volumes as the logical volume 130a described in the first embodiment.
- The logical volume 130a described in the first embodiment is a volume that is statically associated with the storage area of a parity group in the storage apparatus when the logical volume is created, but a volume formed using so-called Thin Provisioning technology may also be used as the journal volume. With Thin Provisioning, a storage area is dynamically allocated to an accessed area when access to the volume is received, so storage area is saved.
- Before migration, the data in the logical volume 130a, which is the migration source P-VOL, is always replicated to the logical volume 130b, which is the migration source S-VOL, by the remote copy function using the journal (“pair” status).
- The flow of data during the migration process is as shown in FIG. 20; except that the copy data of the migration source P-VOL passes through the JNLs 13a and 13b before being replicated to the migration source S-VOL, there is no change. That is, when transferring the copy data of the migration source P-VOL to the migration source secondary storage apparatus 10b, the copy data is temporarily stored in the JNL 13a (the JVOL 133a). At this time, the copy data is assigned a number called a sequence number that indicates the write order of the copy data. The sequence number is assigned to enable the migration source secondary storage apparatus 10b to write the replicated data to the migration source S-VOL in the same order in which the primary server 30a wrote the data to the migration source P-VOL.
- the replicated data stored in the JNL 13a is transferred to the JNL 13b (JVOL 133b) of the migration source secondary storage apparatus 10b. Thereafter, the migration source secondary storage apparatus 10b extracts the copy data of the migration source P-VOL stored in the JNL 13b, and reflects the copy data to the migration source S-VOL in the order of the sequence numbers attached to the copy data.
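The role of the sequence number can be shown with a toy model: even if replicated writes are queued or transferred out of order, they are reflected to the S-VOL strictly in sequence-number order, preserving the write order seen by the primary server. The data values are hypothetical.

```python
# Toy model of journal-based remote copy ordering.
jnl = [(3, "C"), (1, "A"), (2, "B")]   # (sequence number, copy data)

def reflect_to_svol(journal):
    svol_write_order = []
    for seq, data in sorted(journal):   # apply in sequence-number order
        svol_write_order.append(data)
    return svol_write_order
```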
- The outline of volume replication in the migration source storage system 10 has been described above; the migration destination storage system 20 performs the same processing when a volume pair of the migration destination P-VOL and the migration destination S-VOL is created.
- the computer system according to Modification 2 uses a remote copy function that uses a journal as the remote copy function, and therefore the main difference is that a process for preparing a journal is added to the migration destination storage system 20.
- information on the journal ID of the journal used in the volume pair of the migration source P-VOL and the migration source S-VOL is acquired.
- a journal is created in the migration destination storage system 20, where journals (23a, 23b) having the same ID as the journal IDs of the journals 13a, 13b of the migration source storage system 10 are created.
- Assume that, as a result of acquiring the journal ID information, the journal ID of the journal 13a is 0 and the journal ID of the journal 13b is 1.
- the case where the LDEV # of the journal volume 233a is No. 77 and the LDEV # of the journal volume 233b is No. 88 will be described as an example.
- S20 and S30 are the same as those described in the first embodiment.
- In S40', a journal volume 233a is created and registered in the journal 23a between S40g and S40h.
- the other points are the same as the process of S40 described in the first embodiment.
- the order in which this processing is performed is not limited to the order described above. For example, it may be performed before S40f.
- journal volume 233b is created and the created journal volume 233b is registered in the journal 23b.
- the other points are the same as the processing of S50 described in the first embodiment.
- S60 is the same as the processing described in the first embodiment.
- In S70, when the storage manager 101 issues a pair creation control command for the volume pair (that is, the migration destination P-VOL and the migration destination S-VOL), it issues a control command specifying the identifiers of the migration destination P-VOL and the migration destination S-VOL and the journal IDs (0 and 1) of the journals 23a and 23b created in S40'.
- the other points are the same as the processing described in the first embodiment. As a result, copying from the migration destination P-VOL to the migration destination S-VOL is performed using the journal 23a and the journal 23b.
- the processing after S80 is the same as the processing described in the first embodiment.
- FIG. 21 is a schematic configuration diagram of a computer system related to Example 2 of the present invention.
- the computer system includes a migration source storage device 10a, a migration destination storage device 20a, a primary server 30a, and a secondary server 30b.
- the hardware configurations of the migration source storage device 10a and the migration destination storage device 20a are the same as those described in the first embodiment.
- the logical volumes 130a and 130a 'in the migration source storage apparatus 10a are operated as a volume pair before migration.
- The logical volume 130a is the P-VOL and the logical volume 130a' is the S-VOL. That is, the computer system according to the second embodiment has a configuration in which the logical volume 130b, which was in the migration source secondary storage 10b in the computer system according to the first embodiment, exists in the same storage apparatus (migration source storage apparatus 10a) as the logical volume 130a.
- the logical volume 130a ' is a volume used for backup acquisition of the logical volume 130a, and the secondary server 30b backs up the data of the logical volume 130a' to a backup device (not shown) when the backup is acquired.
- the secondary server 30b is running backup software 104 that is a program for backup operation.
- At the time of backup, the backup software 104 uses the storage manager 101 to set the volume pair status to “suspended” (or, if the volume pair is already in the “suspended” status, resynchronizes the volume pair and then changes its status back to “suspended”), and the backup software 104 of the secondary server 30b then copies the data of the logical volume 130a' to a backup device (not shown). When the backup process is completed, the secondary server 30b resynchronizes the volume pair and sets the volume pair to the “pair” state again.
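The backup cycle above can be sketched as a sequence of pair-status transitions. The status names "pair" and "suspended" come from the text; the function and the returned history list are assumptions for illustration.

```python
# Toy model of the backup cycle driven by backup software 104.
def backup_cycle(pair_status):
    states = [pair_status]
    if pair_status == "suspended":   # already split: resynchronize first
        states.append("pair")
    states.append("suspended")       # split; 130a' now holds a frozen image
    # ... backup software 104 copies 130a' to the backup device here ...
    states.append("pair")            # resynchronize when the backup is done
    return states
```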
- the configuration of the migration source storage apparatus 10a is investigated (S10 ').
- In S10 of the first embodiment, the migration source secondary storage device 10b was also investigated, but in S10', the migration source secondary storage device 10b is not investigated.
- the other points are the same between S10 'and S10 of the first embodiment.
- the administrator installs the migration destination storage apparatus 20a, and performs a physical path connection between the migration destination storage system and the host computer or the migration source storage apparatus 10a (S20 ′).
- the following physical paths are connected.
- the storage manager 101 performs LU path setting for the logical volume of the migration source storage apparatus 10a.
- The LU path setting performed here is the same as the processing performed in S30 of the first embodiment. In other words, it is performed so that the migration destination storage apparatus 20a can recognize the migration target volume (P-VOL) of the migration source storage apparatus 10a by the external storage connection function. Therefore, the LU path is set for port C of the migration source storage apparatus 10a, which was connected to the migration destination storage apparatus 20a in S20'.
- the setting operation of the migration destination storage apparatus 20a is performed.
- This operation is also the same as S40 of the first embodiment except that the process for the migration destination secondary storage apparatus is not performed.
- the process performed in S40 ' will be described with reference to FIG.
- the management terminal creates a command device (S40a). After the command device is created, an instance of the storage manager 101 is activated on the primary server 30a. In the subsequent operations, various settings are made from the storage manager 101. However, as another embodiment, a management terminal may be used.
- the storage manager 101 operating on the primary server 30a sets the attribute of the port (port L) connected to the migration source primary storage device 10a among the ports of the migration destination storage device 20a for the external storage connection function. Change to the attribute of.
- the storage manager 101 creates a virtual storage 25a in the migration destination storage apparatus 20a.
- the storage manager 101 issues a control command for deleting the virtual LDEV # attached to the volume used as the migration destination volume to the migration destination storage apparatus 20a. Subsequently, the storage manager 101 issues a control command to register the LDEV # of the LDEV from which the virtual LDEV # has been deleted in the virtual storage.
- the migration destination volumes used in the migration destination storage apparatus 20a are the migration destination P-VOL and the migration destination S-VOL (the latter resided in the migration destination secondary storage apparatus 20b in the first embodiment). Therefore, in S40d', the virtual LDEV#s attached to these two volumes are deleted, and the two volumes are registered in the virtual storage.
- the storage manager 101 virtualizes the LDEVs registered in the virtual storage. As in S40d', virtualization is performed on the two volumes.
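The registration and virtualization steps above can be sketched as a toy model. The class and method names, the serial number, and the LDEV numbers below are illustrative assumptions (the numbers follow the example values used elsewhere in this document), not the patent's actual interfaces:

```python
# Toy model of S40d'/S40e': destination LDEVs are registered in a virtual
# storage, then "virtualized" so that each carries the migration source
# volume's LDEV# as its virtual LDEV#.

class VirtualStorage:
    def __init__(self, virtual_serial):
        self.virtual_serial = virtual_serial
        self.ldevs = {}                     # real LDEV# -> virtual LDEV#

    def register(self, ldev_no):
        # Modeled precondition of S40d': the default virtual LDEV# attached
        # to the volume has already been deleted.
        if ldev_no in self.ldevs:
            raise ValueError("LDEV already registered")
        self.ldevs[ldev_no] = None          # no virtual identity yet

    def virtualize(self, ldev_no, virtual_ldev_no):
        # S40e': give the volume the source volume's identity, so the host
        # sees the same LDEV# before and after migration.
        self.ldevs[ldev_no] = virtual_ldev_no

# Virtual storage presenting the migration source's serial number (1).
vst = VirtualStorage(virtual_serial=1)
for ldev in (33, 55):                       # destination P-VOL and S-VOL
    vst.register(ldev)
vst.virtualize(33, 11)                      # source P-VOL's LDEV# was 11
vst.virtualize(55, 22)                      # source S-VOL's LDEV# was 22
print(vst.ldevs)                            # {33: 11, 55: 22}
```

The point of the two-step shape is that a volume cannot present a virtual identity until it is a member of the virtual storage, mirroring the order of S40d' and S40e'.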
- the storage manager 101 maps the migration source P-VOL of the migration source storage apparatus 10a to the migration destination primary storage apparatus 20a. Further, the storage manager 101 creates a migration destination volume (migration destination P-VOL) from the mapped storage area.
- a target volume used for the volume migration function is created.
- the size of the target volume created here is the same size as the migration destination P-VOL, and the storage area of the parity group 22a is used as the storage area.
- the migration destination S-VOL is created.
- the storage area of the parity group 22a is also used as the storage area of the migration destination S-VOL.
- the LU path is set in the migration destination P-VOL.
- the storage manager 101 issues a control command to assign a port name and a LUN to LDEV #33.
- a pair is created between the migration destination P-VOL and the migration destination S-VOL in the migration destination storage apparatus 20a.
- the storage manager 101 of the primary server 30a issues a pair creation control command.
- the migration destination storage apparatus 20a starts data replication between the migration destination P-VOL and the migration destination S-VOL by the local copy function.
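The pair-creation and replication sequence above can be sketched as a small state machine. The state names follow this document's terminology ("pair" etc.); the class itself and the initial-copy behavior are illustrative assumptions:

```python
# Toy model of the local copy function after the pair creation command:
# an initial copy brings the S-VOL up to date ("copy" state), after which
# writes to the P-VOL are also applied to the S-VOL ("pair" state).

class LocalCopyPair:
    def __init__(self, pvol):
        self.state = "simplex"         # no pair relationship yet
        self.pvol = list(pvol)
        self.svol = []

    def paircreate(self):
        self.state = "copy"
        self.svol = list(self.pvol)    # initial copy of existing data
        self.state = "pair"            # volumes are now synchronized

    def write(self, block):
        self.pvol.append(block)
        if self.state == "pair":
            self.svol.append(block)    # replicate while in pair state

p = LocalCopyPair(pvol=["a", "b"])
p.paircreate()
p.write("c")
print(p.state, p.svol)                 # pair ['a', 'b', 'c']
```

A real implementation copies asynchronously and tracks differences; this sketch only shows the state transition and the invariant that, once in the pair state, the S-VOL always holds a replica of the P-VOL.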
- the instance of the storage manager 101 that has been started for the migration source storage apparatus 10a is stopped, and the setting file of the storage manager 101 is rewritten.
- the rewriting contents of the setting file are the same as those in the first embodiment, and the contents of the setting file may be changed so as to control the virtual storage.
- the primary server 30a and the secondary server 30b start the instance using the setting file (S90').
- the secondary server 30b invalidates the path to the migration source volume of the migration source storage apparatus 10a and deletes the LU path of the migration source volume of the migration source secondary storage apparatus 10b. Further, the secondary server 30b performs LU path setting to the migration destination S-VOL of the migration destination storage apparatus 20a, and makes the secondary server 30b recognize the migration destination S-VOL. Thereafter, the operation of the backup software 104 that has been stopped in S80 'is resumed.
- an operation of migrating the migration destination P-VOL to the target volume is performed.
- an instance (referred to as a second instance) different from the instance of the storage manager 101 activated in S90' is activated on the primary server 30a, and an instruction for volume migration is issued from the second instance to the migration destination storage apparatus 20a.
- the migration destination storage apparatus 20a that has received the instruction migrates the migration destination P-VOL to the target volume by using the volume migration function.
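A minimal model of what the volume migration function achieves here: the data moves onto the target volume's parity-group storage area, and the identifier (LDEV#) moves with it, so the host-visible volume is unchanged. The function name and the dictionary representation below are assumptions for illustration:

```python
# Toy model of the volume migration function (the step above): exchanging the
# backing storage of the moving LDEV and the target LDEV models "data and
# identifier move together". LDEV numbers follow the document's example
# values (33 = migration destination P-VOL, 44 = target volume).

def migrate_volume(ldevs, moving_ldev, target_ldev):
    """ldevs maps LDEV# -> description of the backing storage area."""
    ldevs[moving_ldev], ldevs[target_ldev] = (
        ldevs[target_ldev],
        ldevs[moving_ldev],
    )
    return ldevs

ldevs = {
    33: "external volume (maps to migration source P-VOL)",
    44: "parity group 22a area",
}
migrate_volume(ldevs, 33, 44)
print(ldevs[33])   # LDEV #33 is now backed by the parity group 22a area
```

After this step, LDEV #33 no longer depends on the external (migration source) volume, which is why the migration source storage system can then be removed.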
- the administrator removes the migration source storage system (S120 ') and ends the migration process.
- in this way, the P-VOL and the S-VOL can be migrated from the migration source storage apparatus 10 to the migration destination storage apparatus 20 while the logical volume pair in the storage apparatus is maintained and without stopping the acceptance of access from the primary server 30a.
- FIG. 25 is a block diagram of a computer system related to Example 3 of the present invention.
- the migration manager 105, which is a program for performing the migration process, is provided on the primary server 30a.
- the primary server 30a, the secondary server 30b, and the management terminals (16a, 16b, 26a, 26b) are connected via a LAN or WAN and can communicate with each other.
- the migration manager 105 executes the migration process in an integrated manner. To do so, the migration manager 105 can issue commands to the storage manager 101, the alternate path software 102, and the cluster software 103 to perform predetermined processing. It can also instruct the management terminals 16a and 26a to perform processing such as command device creation, and it has a function of rewriting the setting file used by the storage manager 101.
- the migration sub-manager 106 can likewise issue commands to the storage manager 101, the alternate path software 102, and the cluster software 103 to perform predetermined processing, and it has a function of rewriting the setting file used by the storage manager 101. However, the migration sub-manager 106 performs these operations only in accordance with instructions from the migration manager 105.
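The division of labor described above can be sketched as follows. The class names and the method of delegation are illustrative assumptions; the step labels loosely follow this document's S-numbers (primary-side steps run in the manager, secondary-side steps S50 and S100 are delegated):

```python
# Toy model of Example 3's orchestration: the migration manager drives the
# whole flow and delegates secondary-server-side steps to the sub-manager.

class SubManager:
    """Runs steps on the secondary server, only when instructed."""
    def __init__(self):
        self.log = []

    def run(self, step):
        self.log.append(step)

class MigrationManager:
    """Runs primary-side steps itself and delegates the rest."""
    def __init__(self, sub):
        self.sub = sub
        self.log = []

    def migrate(self):
        for step in ("S30", "S40"):        # primary-side setup
            self.log.append(step)
        self.sub.run("S50")                # secondary-side setup delegated
        for step in ("S60", "S70", "S80", "S90"):
            self.log.append(step)
        self.sub.run("S100")               # secondary path switch delegated
        self.log.append("S110")            # volume migration

sub = SubManager()
MigrationManager(sub).migrate()
print(sub.log)                             # ['S50', 'S100']
```

The key property modeled is that the sub-manager never acts on its own: its log contains exactly the steps the manager delegated.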
- S10 and S20 are the same as the migration process of the first embodiment.
- the administrator investigates information on the volumes (migration source P-VOL, migration source S-VOL) that were operated as volume pairs in the migration source storage system.
- the administrator prepares the following information, which is necessary for instructing the migration manager 105 to perform the migration process.
- (6) Among the ports of the migration source secondary storage apparatus 10b, the names of the ports connected to the secondary server 30b and to the migration source primary storage apparatus 10a; (7) the port name of the migration source primary storage apparatus 10a connected to the migration destination primary storage apparatus 20a (tentatively, this port name is referred to as "port C"); (8) the LDEV# of the logical volume (migration source P-VOL) for which an LU path is set for port C
- the administrator issues a data migration instruction to the migration manager 105 (S20').
- the following information is specified as a parameter.
- (1) Device serial number of the migration source primary storage apparatus 10a
(2) Device serial number of the migration source secondary storage apparatus 10b
(3) Device serial number of the migration destination primary storage apparatus 20a
(4) Device serial number of the migration destination secondary storage apparatus 20b
(5) LDEV# of the migration source P-VOL and LDEV# of the migration source S-VOL
(6) LDEV#s usable in the migration destination primary storage apparatus 20a and the migration destination secondary storage apparatus 20b (determined from the unused LDEV#s investigated in S10)
(7) Group names of the parity groups usable in the migration destination primary storage apparatus 20a and the migration destination secondary storage apparatus 20b (from the remaining parity-group sizes investigated in S10, the parity groups in which the migration destination P-VOL (and its migration target volume) and the migration destination S-VOL can be created are determined)
(8) The port name of the migration source primary storage apparatus 10a connected to the migration destination primary storage apparatus 20a
(9) The port name of the migration destination primary storage apparatus 20a connected to the primary server 30a
(10) The port name of the migration destination secondary storage apparatus 20b connected to the secondary server 30b
(11) The port names of the migration destination primary storage apparatus 20a and of the migration destination secondary storage apparatus 20b that connect the two apparatuses
(12) The instance numbers of the instances running on the primary server 30a and the secondary server 30b, and the file names of the setting files read by those instances
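The parameter set handed to the migration manager can be sketched as a simple dictionary with a completeness check. The key names below are paraphrases of the items listed above, not the patent's actual API, and the values follow the document's example numbers:

```python
# Hedged sketch: validating that the administrator supplied every parameter
# the migration manager needs before the migration process starts.

REQUIRED = {
    "src_primary_serial", "src_secondary_serial",
    "dst_primary_serial", "dst_secondary_serial",
    "src_pvol_ldev", "src_svol_ldev",
    "usable_ldevs", "usable_parity_groups", "port_c",
}

def validate(params):
    missing = REQUIRED - params.keys()
    if missing:
        raise ValueError(f"missing parameters: {sorted(missing)}")
    return True

params = {
    "src_primary_serial": 1, "src_secondary_serial": 11,
    "dst_primary_serial": 2, "dst_secondary_serial": 22,
    "src_pvol_ldev": 11, "src_svol_ldev": 22,
    "usable_ldevs": [33, 44, 55],          # from unused LDEV#s found in S10
    "usable_parity_groups": ["22a"],       # from remaining sizes found in S10
    "port_c": "PortC",                     # source port facing the destination
}
print(validate(params))                    # True
```

Validating up front matches the document's flow: the investigation in S10 exists precisely so that these parameters can be supplied completely in S20'.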
- when the migration manager 105 receives the instruction, it starts the migration process (S30').
- the contents performed in the migration process are the same as the processes after S30 in the first embodiment. The following description will focus on differences from the migration process of the first embodiment.
- S30 and S40 are the same as S30 and S40 of the first embodiment.
- the migration manager 105 issues an instruction to the migration sub-manager 106 to cause the migration sub-manager 106 to perform setting processing for the migration-destination secondary storage device 20b.
- the contents of the process executed by the migration sub-manager 106 are the same as S50 of the first embodiment.
- when the process of S50 is completed, the migration manager 105 performs the processes of S60 and S70. When the process of S70 is completed (that is, the pair creation is completed), the migration manager 105 performs the process of S80. In S80, the cluster software 103 and the like on the secondary server 30b must also be stopped, so in addition to stopping the operation of the cluster software on the primary server 30a, the migration manager 105 instructs the migration sub-manager 106 to stop the cluster software 103 and the like on the secondary server 30b.
- the migration manager 105 rewrites the setting file of the storage manager 101. Since the setting file on the secondary server 30b also needs to be rewritten, the migration manager 105 causes the migration sub-manager 106 to rewrite the setting file of the secondary server 30b. Subsequently, in S100, the migration manager 105 instructs the migration sub-manager 106 to invalidate the path from the secondary server 30b to the migration source volume of the migration source secondary storage apparatus 10b and to delete the LU path of that migration source volume.
- the migration manager 105 performs processing for migrating the migration destination P-VOL to the target volume.
- the migration manager 105 notifies the administrator that the volume migration processing has been completed. After receiving this notification, the administrator removes the migration source storage system (S120), and the migration process is completed.
- the migration process of the third embodiment is not limited to the process described above, and various modifications are possible.
- some processes may be manually performed by the administrator. For example, the administrator may perform rewriting of the setting file.
- the migration manager 105 may automatically determine the selection of the LDEV # of the logical volume created in the migration destination storage system or the selection of the parity group as the creation destination of the logical volume based on the investigation result of the configuration information.
- in the third embodiment, volume pair migration under the same configuration as the computer system according to the first embodiment, that is, migration of a volume pair maintained by the remote copy function, has been described.
- the migration method described in the third embodiment can also be applied to volume pair migration in the computer system according to Example 2 (volume pair migration by the local copy function).
- the above is the content of the migration process according to the embodiment of the present invention.
- even after migration, information such as the volume number and the device serial number, which is the identification information attached to the migration target logical volume and recognized by the host computer, does not change. For this reason, the volume migration is executed transparently to the host computer, and the host computer does not need to stop I/O processing.
- the stop time of the software service for backup and disaster recovery can be limited to an extremely short time.
- the storage system according to the embodiment of the present invention has been described above, but the present invention is not limited to the embodiment described above.
- the migration process described above has been performed in accordance with an instruction from the host computer (primary server). However, there may be a configuration in which the storage apparatus controls the migration by issuing a management operation command. Further, the migration processing according to the embodiment of the present invention is not only used for data migration between devices and for replacement of devices, but can also be used for system expansion.
- a storage control program 150', which is a program equivalent to the storage manager described in the first embodiment, may be provided in the storage apparatus 10a' (or 10a) in the storage system 10', and the storage apparatus 10a' may control the flow of processing such as acquisition of configuration information, volume setting, pair creation, and volume migration.
- 10a: migration source primary storage apparatus
10b: migration source secondary storage apparatus
20a: migration destination primary storage apparatus
20b: migration destination secondary storage apparatus
30a: primary server
30b: secondary server
50: SAN
60: SAN
Abstract
Description
FIG. 1 is a schematic configuration diagram of a computer system according to Embodiment 1 of the present invention. The computer system comprises a migration source primary storage apparatus 10a, a migration source secondary storage apparatus 10b, a migration destination primary storage apparatus 20a, a migration destination secondary storage apparatus 20b, a primary server 30a, and a secondary server 30b.
Next, the functions of the storage apparatuses that are needed to explain the migration method in the embodiments of the present invention are described, focusing mainly on the functions of the migration destination primary storage apparatus 20a. As noted earlier, each storage apparatus function described below is realized by the CPU 2031 of the storage apparatus executing a program on the memory (LM) 2032.
The migration destination primary storage apparatus 20a in the embodiments of the present invention manages one or more drives 221 as a group, called a parity group (elements 22a, 22b, etc. in FIG. 1 are parity groups). Using so-called RAID technology, the migration destination primary storage apparatus 20a creates redundant data (parity) from data before storing the data in the drive group constituting the parity group 22, and then stores both the data and the parity in that drive group.
The migration destination primary storage apparatus 20a has a function of treating the storage area of a volume belonging to another storage apparatus (such as the migration source primary storage apparatus 10a) as a storage area of the migration destination primary storage apparatus 20a and providing that storage area to a host computer such as the primary server 30a. This function is hereinafter called the "external storage connection function", and a storage area managed by the external storage connection function is called an external volume group (24a).
The function of attaching a LUN (logical unit number) and a port identifier so that a host computer such as the primary server 30a can recognize the logical volume described above is called the "logical path creation function" or "LU path setting function". In the embodiments of the present invention, the WWN (World Wide Name), which is the identifier of each port 21a, is used as the port identifier, although other identifiers may be used. The LU management table T300 shown in FIG. 6 manages the logical unit number (LUN) and port name associated with each logical volume in the storage apparatus, and is stored in the shared memory 2042 of the storage apparatus.
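As a rough illustration, the LU management table T300 can be modeled as a set of (port, LUN) to LDEV# associations. The field names and the WWN value below are assumptions, not the table's actual layout:

```python
# Toy model of the LU management table T300: each entry associates a port
# (identified by its WWN) and a LUN with an LDEV#. Setting an LU path means
# adding such an entry; a (port, LUN) pair addresses exactly one LDEV.

lu_table = []

def set_lu_path(table, port_wwn, lun, ldev):
    if any(e["port"] == port_wwn and e["lun"] == lun for e in table):
        raise ValueError("LUN already in use on this port")
    table.append({"port": port_wwn, "lun": lun, "ldev": ldev})

def resolve(table, port_wwn, lun):
    for e in table:
        if e["port"] == port_wwn and e["lun"] == lun:
            return e["ldev"]
    return None

set_lu_path(lu_table, "50:06:0e:80:xx:xx:xx:01", 0, 33)  # illustrative WWN
print(resolve(lu_table, "50:06:0e:80:xx:xx:xx:01", 0))   # 33
```

Deleting an LU path (as done for the migration source volumes later in the migration flow) would simply remove the corresponding entry, making the volume invisible on that port.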
A storage apparatus has a function of replicating the data in a logical device to another volume, called the volume copy function. The volume copy function includes a function that copies data to a volume in the same storage apparatus as the copy source volume, and a function that copies data to a volume in a different storage apparatus (for example, copying the data of a volume in the migration source primary storage apparatus 10a to a volume in the migration source secondary storage apparatus 10b). In the embodiments of the present invention these are called the "local copy function" and the "remote copy function", respectively.
The volume migration function moves data stored in one logical volume in the migration destination primary storage apparatus 20a to another logical volume in the migration destination primary storage apparatus 20a. In addition to moving the data in the logical volume, identifiers such as the LDEV# are also moved.
Next, virtual storage is described. The migration destination primary storage apparatus 20a according to the embodiments of the present invention has a function of defining one or more virtual storage apparatuses 25a (hereinafter called virtual storages), distinct from physical storage apparatuses such as the migration destination primary storage apparatus 20a itself, and making it appear to the primary server 30a that the virtual storages exist on the SAN 50 in addition to the physical storage apparatuses. Although the migration destination primary storage apparatus 20a is used as the example below, the migration destination secondary storage apparatus 20b also has the function of defining virtual storages, and therefore also has the functions described below. The migration source primary storage apparatus 10a and/or the migration source secondary storage apparatus 10b may or may not have the function of defining virtual storages. When they do, and a volume pair belonging to a virtual storage defined in them is migrated, then in the following description the migration source storage apparatus corresponds to the migration source virtual storage apparatus, the device serial number of the migration source storage apparatus corresponds to the virtual serial number, and the LDEV# corresponds to the VLDEV#.
The storage manager 101 is a program for configuring and controlling the storage apparatuses from the primary server 30a or the secondary server 30b. Every storage apparatus in the computer system according to Embodiment 1 of the present invention is controlled from the storage manager 101. The functions of the storage manager 101 and the setting information it uses are described below.
paircreate <group name>
FIG. 14 shows an overview of the migration process in the computer system according to Embodiment 1 of the present invention. The migration source primary storage apparatus 10a and the migration source secondary storage apparatus 10b have the remote copy function. Through the remote copy function, the logical volume 130a of the migration source primary storage apparatus 10a and the logical volume 130b of the migration source secondary storage apparatus 10b are in a pair relationship, with the logical volume 130a as the P-VOL and the logical volume 130b as the S-VOL. That is, data written from the primary server 30a to the logical volume 130a of the migration source primary storage apparatus 10a is copied to the logical volume 130b of the migration source secondary storage apparatus 10b, so that the logical volume 130b is always in a state of storing a replica of the data of the logical volume 130a (the pair state). Hereinafter, the logical volume 130a is called the "migration source P-VOL" and the logical volume 130b the "migration source S-VOL".
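The pair state described above can be sketched minimally as follows (an illustrative class, not the patent's remote copy implementation; a real remote copy also handles asynchronous transfer and failure states):

```python
# Toy model of the remote-copy pair in FIG. 14: every write accepted by the
# P-VOL (volume 130a) is also applied to the S-VOL (volume 130b), so the
# S-VOL always holds a replica of the P-VOL's data.

class RemoteCopyPair:
    def __init__(self):
        self.pvol = []          # migration source P-VOL (130a)
        self.svol = []          # migration source S-VOL (130b)

    def write(self, data):
        self.pvol.append(data)
        self.svol.append(data)  # replication maintains the pair state

pair = RemoteCopyPair()
for block in ("w1", "w2", "w3"):
    pair.write(block)
print(pair.svol == pair.pvol)   # True: S-VOL is always a replica
```

The migration method's central requirement is that this invariant keeps holding (on the source pair, then on the destination pair) throughout the migration.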
Next, the flow of the volume migration process in the computer system according to Embodiment 1 of the present invention is described with reference to the flowcharts of FIGS. 16, 17, and 18. For simplicity, the migration of only one volume pair (a set of P-VOL and S-VOL) in the migration source storage system is described below, but multiple volume pairs can also be migrated simultaneously.
(a) Serial numbers (S/N) and model names of the migration source primary storage apparatus 10a and the migration source secondary storage apparatus 10b
(b) LDEV# and size of the migration target P-VOL and S-VOL
(c) LDEV#s not in use in the migration source primary storage apparatus 10a and the migration source secondary storage apparatus 10b
(a) The LDEV# of the P-VOL of the migration source primary storage apparatus 10a (called the migration source P-VOL) is 11, and its size is 100 GB
(b) The LDEV# of the S-VOL of the migration source secondary storage apparatus 10b (called the migration source S-VOL) is 22, and its size is 100 GB
(c) The serial number of the migration source primary storage apparatus 10a is 1
(d) The serial number of the migration source secondary storage apparatus 10b is 11
(e) The serial number of the migration destination primary storage apparatus 20a is 2
(f) The serial number of the migration destination secondary storage apparatus 20b is 22
(g) The LDEV# of the logical volume selected as the P-VOL of the migration destination primary storage apparatus 20a (called the migration destination P-VOL) is 33
(h) In the migration destination primary storage apparatus 20a, the LDEV# of the logical volume selected as the target volume of the volume migration function is 44, and its virtual LDEV# is 99
(i) The LDEV# of the logical volume selected as the S-VOL of the migration destination secondary storage apparatus 20b (called the migration destination S-VOL) is 55
(a) Physical paths between the migration destination primary storage apparatus 20a (port J, port K) and the primary server 30a
(b) Physical paths between the migration destination secondary storage apparatus 20b (port Q, port R) and the secondary server 30b
(c) A physical path between the migration source primary storage apparatus 10a (port C) and the migration destination primary storage apparatus 20a (port L)
(d) Physical paths between the migration destination primary storage apparatus 20a (port M, port N) and the migration destination secondary storage apparatus 20b (port O, port P)
In the migration process described above, after the pair creation instruction was issued in S70, the processing from S80 onward was performed only after confirming that the pair state had changed to "pair". However, the migration process of the present invention is not limited to this procedure. As another embodiment, the subsequent work may be performed immediately after the pair creation instruction is issued in S70. The flow of processing in that case is described below, focusing on the differences from the migration process described above.
Next, a computer system according to Modification 2 of the present invention is described. FIG. 19 is a configuration diagram of the computer system according to Modification 2 of the present invention. Like the computer system according to Embodiment 1, it comprises a migration source primary storage apparatus 10a, a migration source secondary storage apparatus 10b, a migration destination primary storage apparatus 20a, a migration destination secondary storage apparatus 20b, a primary server 30a, and a secondary server 30b, and the hardware configuration of each storage apparatus is also the same as described in Embodiment 1.
Next, the flow of the volume migration process is described. Since the flow of the volume migration process in the computer system according to Modification 2 is almost the same as that described in Embodiment 1, it is described using FIGS. 16 to 18, which were used in Embodiment 1.
(a) Physical paths between the migration destination storage apparatus 20a (port J, port K) and the primary server 30a
(b) Physical paths between the migration destination storage apparatus 20a (port Q, port R) and the secondary server 30b
(c) A physical path between the migration source storage apparatus 10a (port C) and the migration destination storage apparatus 20a (port L)
Next, the flow of the volume migration process in the computer system according to Embodiment 3 of the present invention is described with reference to the flowchart of FIG. 26. For simplicity, the migration of only one volume pair (a set of P-VOL and S-VOL) in the migration source storage system is described below, but multiple volume pairs can also be migrated simultaneously.
(1) Device serial numbers of the storage apparatuses 10a, 10b, 20a, and 20b
(2) LDEV#s not in use in the storage apparatuses 10a, 10b, 20a, and 20b
(3) Remaining size of each parity group in the storage apparatuses 20a and 20b
(4) LDEV# of the migration source P-VOL and LDEV# of the migration source S-VOL
(5) Among the ports of the migration source primary storage apparatus 10a, the names of the ports connected to the primary server 30a and to the migration source secondary storage apparatus 10b
(6) Among the ports of the migration source secondary storage apparatus 10b, the names of the ports connected to the secondary server 30b and to the migration source primary storage apparatus 10a
(7) The port name of the migration source primary storage apparatus 10a connected to the migration destination primary storage apparatus 20a (tentatively, this port name is called "port C")
(8) The LDEV# of the logical volume (migration source P-VOL) for which an LU path is set for port C
(1) Device serial number of the migration source primary storage apparatus 10a
(2) Device serial number of the migration source secondary storage apparatus 10b
(3) Device serial number of the migration destination primary storage apparatus 20a
(4) Device serial number of the migration destination secondary storage apparatus 20b
(5) LDEV# of the migration source P-VOL and LDEV# of the migration source S-VOL
(6) LDEV#s usable in the migration destination primary storage apparatus 20a and the migration destination secondary storage apparatus 20b (determined from the unused LDEV#s investigated in S10)
(7) Group names of the parity groups usable in the migration destination primary storage apparatus 20a and the migration destination secondary storage apparatus 20b (from the remaining parity-group sizes investigated in S10, the parity groups in which the migration destination P-VOL (and its migration target volume) and the migration destination S-VOL can be created are determined)
(8) The port name of the migration source primary storage apparatus 10a connected to the migration destination primary storage apparatus 20a (tentatively, this port name is called "port C")
(9) The port name of the migration destination primary storage apparatus 20a connected to the primary server 30a
(10) The port name of the migration destination secondary storage apparatus 20b connected to the secondary server 30b
(11) The port names of the migration destination primary storage apparatus 20a and of the migration destination secondary storage apparatus 20b that connect the two apparatuses
(12) The instance numbers of the instances running on the primary server 30a and the secondary server 30b, and the file names of the setting files read by those instances
10a: migration source primary storage apparatus
10b: migration source secondary storage apparatus
20a: migration destination primary storage apparatus
20b: migration destination secondary storage apparatus
30a: primary server
30b: secondary server
50: SAN
60: SAN
Claims (12)
- In a volume migration method in a computer system having a migration source storage system, a migration destination storage system, and a server connected to the migration source storage system and the migration destination storage system, the migration source storage system having a migration source primary volume and a migration source secondary volume, the migration source primary volume and the migration source secondary volume being in a pair state in which a replica of the data of the migration source primary volume is always stored in the migration source secondary volume, the volume migration method characterized by executing a procedure in which:
(1) the migration destination storage system creates a migration destination primary volume whose storage area is the migration source primary volume, and a migration destination secondary volume whose storage area is a storage device of the migration destination storage system;
(2) the server switches the issue destination of access requests for the migration source primary volume to the migration destination primary volume; and
(3) after the access path has been switched to the migration destination primary volume, the migration destination storage system places the migration destination primary volume and the migration destination secondary volume in a pair state by replicating the data of the migration destination primary volume to the migration destination secondary volume.
- The volume migration method according to claim 1, characterized in that
(4) the migration destination storage system migrates the data of the migration destination primary volume to a target volume whose storage area is a storage device of the migration destination storage system.
- The volume migration method according to claim 1, characterized in that the migration destination primary volume has the same identifier as the migration source primary volume,
the server recognizes, based on the identifier, the access path to the migration destination primary volume as an alternate path of the migration source primary volume, and
the server, triggered by the deletion of the access path to the migration source primary volume, switches the issue destination of access requests for the migration source primary volume to the migration destination primary volume.
- The volume migration method according to claim 3, characterized in that the identifier is a volume number unique within the migration source primary storage system and the serial number of the migration source primary storage system,
the migration destination primary volume has the identifier as a virtual identifier in addition to identification information composed of a volume number unique within the migration destination primary storage system and the serial number of the migration destination primary storage system, and
the virtual identifier is provided to the server as the identifier of the migration destination primary volume.
- The volume migration method according to claim 1, characterized in that the migration source storage system is composed of a migration source primary storage apparatus having the migration source primary volume and a migration source secondary storage apparatus having the migration source secondary volume,
the migration destination storage system is composed of a migration destination primary storage apparatus and a migration destination secondary storage apparatus, and
the migration destination storage system creates the migration destination primary volume in the migration destination primary storage apparatus and the migration destination secondary volume in the migration destination secondary storage apparatus.
- The volume migration method according to claim 5, characterized in that, after the access path has been switched to the migration destination primary volume, the migration destination storage system starts replicating the data of the migration destination primary volume to the migration destination secondary volume, and
starts the replication of the data of the migration destination primary volume to the migration destination secondary volume before completion of the replication.
- The volume migration method according to claim 1, characterized in that, during execution of the procedure, the pair state in which a replica of the data of the migration source primary volume is always stored in the migration source secondary volume is maintained between the migration source primary volume and the migration source secondary volume.
- In a computer system having a migration source storage system, a migration destination storage system, and a server connected to the migration source storage system and the migration destination storage system, the migration source storage system having a migration source primary volume and a migration source secondary volume, the migration source primary volume and the migration source secondary volume being in a pair state in which a replica of the data of the migration source primary volume is always stored in the migration source secondary volume, the computer system characterized in that it:
(1) causes the migration destination storage system to create a migration destination primary volume whose storage area is the migration source primary volume, and a migration destination secondary volume whose storage area is a storage device of the migration destination storage system;
(2) causes the server to switch the issue destination of access requests for the migration source primary volume to the migration destination primary volume; and
(3) after the access path has been switched to the migration destination primary volume, causes the migration destination storage system to replicate the data of the migration destination primary volume to the migration destination secondary volume, thereby placing the migration destination primary volume and the migration destination secondary volume in a pair state.
- The computer system according to claim 8, characterized in that the computer system
(4) causes the migration destination storage system to migrate the data of the migration destination primary volume to a target volume whose storage area is a storage device of the migration destination storage system.
- The computer system according to claim 8, characterized in that the migration destination primary volume has the same identifier as the migration source primary volume,
the server recognizes, based on the identifier, the access path to the migration destination primary volume as an alternate path of the migration source primary volume, and
the server, triggered by the deletion of the access path to the migration source primary volume, switches the issue destination of access requests for the migration source primary volume to the migration destination primary volume.
- The computer system according to claim 8, characterized in that the migration source storage system is composed of a migration source primary storage apparatus having the migration source primary volume and a migration source secondary storage apparatus having the migration source secondary volume,
the migration destination storage system is composed of a migration destination primary storage apparatus and a migration destination secondary storage apparatus, and
the migration destination storage system creates the migration destination primary volume in the migration destination primary storage apparatus and the migration destination secondary volume in the migration destination secondary storage apparatus.
- The computer system according to claim 11, characterized in that, after the access path has been switched to the migration destination primary volume, the migration destination storage system starts replicating the data of the migration destination primary volume to the migration destination secondary volume, and
starts the replication of the data of the migration destination primary volume to the migration destination secondary volume before completion of the replication.
Priority Applications (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1615619.2A GB2539340B (en) | 2014-04-22 | 2014-04-22 | Data migration method of storage system |
DE112014006156.5T DE112014006156B4 (de) | 2014-04-22 | 2014-04-22 | Speichersystem und Datenmigrationsverfahren |
PCT/JP2014/061245 WO2015162684A1 (ja) | 2014-04-22 | 2014-04-22 | ストレージシステムのデータ移行方法 |
CN201480076359.5A CN106030500B (zh) | 2014-04-22 | 2014-04-22 | 存储系统的数据迁移方法 |
JP2014534683A JP5718533B1 (ja) | 2014-04-22 | 2014-04-22 | ストレージシステムのデータ移行方法 |
US14/453,823 US8904133B1 (en) | 2012-12-03 | 2014-08-07 | Storage apparatus and storage apparatus migration method |
US14/543,932 US9152337B2 (en) | 2012-12-03 | 2014-11-18 | Storage apparatus and storage apparatus migration method |
US14/870,885 US9846619B2 (en) | 2012-12-03 | 2015-09-30 | Storage apparatus and storage apparatus migration method |
US15/828,732 US10394662B2 (en) | 2012-12-03 | 2017-12-01 | Storage apparatus and storage apparatus migration method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2014/061245 WO2015162684A1 (ja) | 2014-04-22 | 2014-04-22 | ストレージシステムのデータ移行方法 |
Related Child Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/081262 Continuation WO2014087465A1 (ja) | 2012-12-03 | 2012-12-03 | ストレージ装置及びストレージ装置移行方法 |
PCT/JP2012/081262 Continuation-In-Part WO2014087465A1 (ja) | 2012-12-03 | 2012-12-03 | ストレージ装置及びストレージ装置移行方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015162684A1 true WO2015162684A1 (ja) | 2015-10-29 |
Family
ID=53277436
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/061245 WO2015162684A1 (ja) | 2012-12-03 | 2014-04-22 | ストレージシステムのデータ移行方法 |
Country Status (5)
Country | Link |
---|---|
JP (1) | JP5718533B1 (ja) |
CN (1) | CN106030500B (ja) |
DE (1) | DE112014006156B4 (ja) |
GB (1) | GB2539340B (ja) |
WO (1) | WO2015162684A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017081785A1 (ja) * | 2015-11-12 | 2017-05-18 | 株式会社日立製作所 | 計算機システム |
JP2020013227A (ja) * | 2018-07-13 | 2020-01-23 | 株式会社日立製作所 | ストレージシステム |
JP2022149305A (ja) * | 2021-03-25 | 2022-10-06 | 株式会社日立製作所 | ストレージシステム、ストレージシステムの移行方法 |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10320906B2 (en) * | 2016-04-29 | 2019-06-11 | Netapp, Inc. | Self-organizing storage system for asynchronous storage service |
WO2017208319A1 (ja) * | 2016-05-31 | 2017-12-07 | 株式会社日立製作所 | ストレージシステム及びストレージシステムの管理方法 |
US10380096B2 (en) * | 2016-11-30 | 2019-08-13 | Ncr Corporation | Data migration |
US10416905B2 (en) * | 2017-02-09 | 2019-09-17 | Hewlett Packard Enterprise Development Lp | Modifying membership of replication groups via journal operations |
CN107193489A (zh) * | 2017-05-15 | 2017-09-22 | 郑州云海信息技术有限公司 | 一种基于存储虚拟网关的存储级数据迁移方法和装置 |
CN107704206B (zh) * | 2017-10-09 | 2020-09-18 | 苏州浪潮智能科技有限公司 | 在线迁移异构系统数据的方法、装置、设备和存储介质 |
CN107656705B (zh) * | 2017-10-25 | 2020-10-23 | 苏州浪潮智能科技有限公司 | 一种计算机存储介质和一种数据迁移方法、装置及系统 |
CN108388599B (zh) * | 2018-02-01 | 2022-08-02 | 平安科技(深圳)有限公司 | 电子装置、数据迁移及调用方法及存储介质 |
CN110413213B (zh) * | 2018-04-28 | 2023-06-27 | 伊姆西Ip控股有限责任公司 | 存储卷在存储阵列之间的无缝迁移 |
CN111338941B (zh) * | 2020-02-21 | 2024-02-20 | 北京金堤科技有限公司 | 信息处理方法和装置、电子设备和存储介质 |
CN111930707B (zh) * | 2020-07-10 | 2022-08-02 | 江苏安超云软件有限公司 | 一种windows云迁移的盘符修正方法及系统 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007042008A (ja) * | 2005-08-05 | 2007-02-15 | Hitachi Ltd | 記憶制御方法及び記憶制御システム |
JP2007066154A (ja) * | 2005-09-01 | 2007-03-15 | Hitachi Ltd | データをコピーして複数の記憶装置に格納するストレージシステム |
JP2008015984A (ja) * | 2006-07-10 | 2008-01-24 | Nec Corp | データ移行装置及び方法並びにプログラム |
JP2008134986A (ja) * | 2006-10-30 | 2008-06-12 | Hitachi Ltd | 情報システム、データ転送方法及びデータ保護方法 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5544347A (en) | 1990-09-24 | 1996-08-06 | Emc Corporation | Data storage system controlled remote data mirroring with respectively maintained data indices |
US6101497A (en) | 1996-05-31 | 2000-08-08 | Emc Corporation | Method and apparatus for independent and simultaneous access to a common data set |
JP4124348B2 (ja) | 2003-06-27 | 2008-07-23 | 株式会社日立製作所 | 記憶システム |
JP2005190259A (ja) | 2003-12-26 | 2005-07-14 | Hitachi Ltd | 複数世代のバックアップデータの管理 |
JP4387261B2 (ja) * | 2004-07-15 | 2009-12-16 | 株式会社日立製作所 | 計算機システム、および、記憶装置システムの移行方法 |
JP4955996B2 (ja) | 2005-09-20 | 2012-06-20 | 株式会社日立製作所 | ボリューム移行方法およびストレージネットワークシステム |
JP2007122531A (ja) * | 2005-10-31 | 2007-05-17 | Hitachi Ltd | 負荷分散システム及び方法 |
JP4930934B2 (ja) * | 2006-09-29 | 2012-05-16 | 株式会社日立製作所 | データマイグレーション方法及び情報処理システム |
JP2009093316A (ja) * | 2007-10-05 | 2009-04-30 | Hitachi Ltd | ストレージシステム及び仮想化方法 |
JP2009104421A (ja) * | 2007-10-23 | 2009-05-14 | Hitachi Ltd | ストレージアクセス装置 |
US8166264B2 (en) * | 2009-02-05 | 2012-04-24 | Hitachi, Ltd. | Method and apparatus for logical volume management |
JP2012027829A (ja) * | 2010-07-27 | 2012-02-09 | Hitachi Ltd | スケールアウト型ストレージシステムを含んだストレージシステム群及びその管理方法 |
JP5595530B2 (ja) * | 2010-10-14 | 2014-09-24 | 株式会社日立製作所 | データ移行システム及びデータ移行方法 |
EP2583162A1 (en) * | 2010-12-22 | 2013-04-24 | Hitachi, Ltd. | Storage system comprising multiple storage apparatuses with both storage virtualization function and capacity virtualization function |
CN104603774A (zh) * | 2012-10-11 | 2015-05-06 | 株式会社日立制作所 | 迁移目的地文件服务器和文件系统迁移方法 |
2014
- 2014-04-22 JP JP2014534683A patent/JP5718533B1/ja active Active
- 2014-04-22 DE DE112014006156.5T patent/DE112014006156B4/de active Active
- 2014-04-22 GB GB1615619.2A patent/GB2539340B/en active Active
- 2014-04-22 CN CN201480076359.5A patent/CN106030500B/zh active Active
- 2014-04-22 WO PCT/JP2014/061245 patent/WO2015162684A1/ja active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007042008A (ja) * | 2005-08-05 | 2007-02-15 | Hitachi Ltd | 記憶制御方法及び記憶制御システム |
JP2007066154A (ja) * | 2005-09-01 | 2007-03-15 | Hitachi Ltd | データをコピーして複数の記憶装置に格納するストレージシステム |
JP2008015984A (ja) * | 2006-07-10 | 2008-01-24 | Nec Corp | データ移行装置及び方法並びにプログラム |
JP2008134986A (ja) * | 2006-10-30 | 2008-06-12 | Hitachi Ltd | 情報システム、データ転送方法及びデータ保護方法 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017081785A1 (ja) * | 2015-11-12 | 2017-05-18 | 株式会社日立製作所 | 計算機システム |
JP2020013227A (ja) * | 2018-07-13 | 2020-01-23 | 株式会社日立製作所 | ストレージシステム |
JP2022149305A (ja) * | 2021-03-25 | 2022-10-06 | 株式会社日立製作所 | ストレージシステム、ストレージシステムの移行方法 |
JP7212093B2 (ja) | 2021-03-25 | 2023-01-24 | 株式会社日立製作所 | ストレージシステム、ストレージシステムの移行方法 |
Also Published As
Publication number | Publication date |
---|---|
GB201615619D0 (en) | 2016-10-26 |
DE112014006156T5 (de) | 2016-11-24 |
GB2539340A (en) | 2016-12-14 |
GB2539340B (en) | 2021-03-24 |
CN106030500B (zh) | 2019-03-12 |
CN106030500A (zh) | 2016-10-12 |
JP5718533B1 (ja) | 2015-05-13 |
JPWO2015162684A1 (ja) | 2017-04-13 |
DE112014006156B4 (de) | 2023-05-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5718533B1 (ja) | ストレージシステムのデータ移行方法 | |
US10394662B2 (en) | Storage apparatus and storage apparatus migration method | |
US7415506B2 (en) | Storage virtualization and storage management to provide higher level storage services | |
US7558916B2 (en) | Storage system, data processing method and storage apparatus | |
JP4175764B2 (ja) | 計算機システム | |
JP5461216B2 (ja) | 論理ボリューム管理の為の方法と装置 | |
JP2007226347A (ja) | 計算機システム、計算機システムの管理装置、及びデータのリカバリー管理方法 | |
JP2007279845A (ja) | ストレージシステム | |
US20100036896A1 (en) | Computer System and Method of Managing Backup of Data | |
US11579983B2 (en) | Snapshot performance optimizations | |
JP5352490B2 (ja) | サーバイメージ容量の最適化 | |
JP2007122432A (ja) | 仮想ボリュームを識別する情報を引き継ぐ方法及びその方法を用いたストレージシステム | |
JP6663478B2 (ja) | データ移行方法及び計算機システム | |
JP2007115221A (ja) | ボリューム移行方法およびストレージネットワークシステム | |
US20160364170A1 (en) | Storage system | |
JP6557785B2 (ja) | 計算機システム及びストレージ装置の制御方法 | |
WO2015198390A1 (ja) | ストレージシステム | |
WO2014091600A1 (ja) | ストレージ装置及びストレージ装置移行方法 | |
JP6000391B2 (ja) | ストレージシステムのデータ移行方法 | |
JP2019124983A (ja) | ストレージシステム及び記憶制御方法 | |
JP2010079624A (ja) | 計算機システム及びストレージシステム | |
JP6343716B2 (ja) | 計算機システム及び記憶制御方法 | |
JP5947974B2 (ja) | 情報処理装置及び情報処理装置の交換支援システム並びに交換支援方法 | |
WO2014087465A1 (ja) | ストレージ装置及びストレージ装置移行方法 | |
JP7413458B2 (ja) | 情報処理システム及び方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2014534683 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14890379 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 112014006156 Country of ref document: DE |
|
ENP | Entry into the national phase |
Ref document number: 201615619 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20140422 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1615619.2 Country of ref document: GB |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14890379 Country of ref document: EP Kind code of ref document: A1 |