WO2012017493A1 - Computer system and data migration method - Google Patents
Computer system and data migration method
- Publication number
- WO2012017493A1 (PCT/JP2010/004982)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- logical unit
- storage device
- storage
- logical
- path
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0647—Migration mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0617—Improving the reliability of storage systems in relation to availability
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
Definitions
- the present invention relates to a computer system and a data migration method, and is particularly suitable for application to data migration when switching between storage apparatuses.
- data is managed using a large-capacity storage device that is provided separately from the host device.
- in Patent Document 1, a technology is disclosed in which, after a logical unit in an existing storage apparatus is set as an external volume, the access destination of the host computer is migrated to a new storage apparatus using an alternate path program, and thereafter the data stored in the logical unit of the existing storage apparatus is copied to a logical unit of the new storage apparatus using a copy function.
- with the data migration method disclosed in Patent Document 1, data can be migrated between storage apparatuses without using a special function of the existing storage apparatus or the network, and without stopping the exchange of data between the host computer and the storage apparatus.
- in the data migration method disclosed in Patent Document 1, the alternate path program installed in the host computer uses the path to the existing storage apparatus and the path to the new storage apparatus exclusively. However, depending on the type of operating system installed in the host computer, the alternate path program may not be able to use the paths exclusively, and in such a case the data migration method disclosed in Patent Document 1 cannot be used.
- when the paths are not used exclusively, the data in the existing storage apparatus and the data in the new storage apparatus may be accessed in parallel.
- as a method for maintaining data consistency in such a situation, there is a method in which the existing storage apparatus and the new storage apparatus perform remote copy with each other; however, this requires a special function of the existing storage apparatus. Thus, there has been no data migration method that does not depend on special functions of the existing storage apparatus.
- the present invention has been made in consideration of the above points, and proposes a computer system and a data migration method that can perform data migration between storage apparatuses without stopping the exchange of data and without using special functions of existing apparatuses.
- in order to solve this problem, the present invention provides a computer system including a host computer, a first storage apparatus that has one or a plurality of first storage devices and provides storage areas of the first storage devices to the host computer as first logical units, and a second storage apparatus that has one or a plurality of second storage devices. The second storage apparatus virtualizes each first logical unit in the first storage apparatus and provides it to the host computer as a second logical unit, collects the configuration information of each first logical unit from the first storage apparatus, and sets the collected configuration information of each first logical unit in the corresponding second logical unit. The host computer adds the path to the second logical unit to the targets of the alternate path and deletes the path to the first logical unit from the targets of the alternate path. The second storage apparatus then copies the data stored in the first logical unit of the first storage apparatus to a storage area provided by the second storage devices and associates the storage area with the second logical unit.
- the present invention also provides a data migration method for migrating data from a first storage apparatus to a second storage apparatus in a computer system that includes a host computer, the first storage apparatus, which has one or a plurality of first storage devices and provides storage areas of the first storage devices to the host computer as first logical units, and the second storage apparatus, which has one or a plurality of second storage devices. The method comprises a first step in which the second storage apparatus virtualizes each first logical unit in the first storage apparatus and provides it to the host computer as a second logical unit, collects the configuration information of each first logical unit from the first storage apparatus, and sets the collected configuration information of each first logical unit in the corresponding second logical unit, and a second step in which the host computer adds the path to the second logical unit to the targets of the alternate path and deletes the path to the first logical unit from the targets of the alternate path, and the second storage apparatus copies the data stored in the first logical unit of the first storage apparatus to a storage area provided by the second storage devices and associates the storage area with the second logical unit.
- according to the present invention, data migration between storage apparatuses can be performed without using special functions of existing apparatuses and without stopping the exchange of data.
- FIG. 4 is a conceptual diagram for explaining the hierarchical structure of storage areas in the migration source storage apparatus.
- FIG. 5 is a conceptual diagram conceptually showing the data structure in the memory of the migration source storage apparatus.
- FIG. 6 is a conceptual diagram for explaining the hierarchical structure of storage areas in the migration destination storage apparatus.
- FIG. 7 is a conceptual diagram conceptually showing the data structure in the memory of the migration destination storage apparatus.
- FIG. 8 is a conceptual diagram used for explaining the access destination migration processing.
- in FIG. 1, reference numeral 1 denotes the computer system according to this embodiment as a whole.
- the computer system 1 includes a host computer 2, a management computer 3, two storage devices 4A and 4B, a SAN (Storage Area Network) 5, and a LAN (Local Area Network) 6.
- the host computer 2 is connected to each of the storage apparatuses 4A and 4B via the SAN 5, and the management computer 3 is connected to the host computer 2 and each of the storage apparatuses 4A and 4B via the LAN 6.
- the host computer 2 includes a CPU 10, a memory 11, a storage device 12, an input device 13, a display device 14, a plurality of ports 15, and an interface control unit 16.
- the CPU 10 is a processor that controls operation of the entire host computer 2, and reads various programs stored in the storage device 12 into the memory 11 and executes them.
- the memory 11 is used not only for storing various programs read from the storage device 12 by the CPU 10 when the host computer 2 is started up, but also used as a work memory for the CPU 10.
- the storage device 12 includes, for example, a hard disk device or an SSD (Solid State Drive), and is used to store and hold various programs and control data.
- the input device 13 is composed of, for example, a keyboard, switches, a pointing device, a microphone, and the like, and the display device 14 is composed of, for example, a liquid crystal display.
- Each port 15 is an adapter for connecting the host computer 2 to the SAN 5
- the interface control unit 16 is an adapter for connecting the host computer 2 to the LAN 6.
- the management computer 3 is a computer device for managing the host computer 2 and the storage apparatuses 4A and 4B, and includes a CPU 20, a memory 21, a storage device 22, an input device 23, a display device 24, and an interface control unit 25.
- the CPU 20 is a processor that controls the operation of the entire management computer 3, and reads various programs stored in the storage device 22 into the memory 21 and executes them.
- the memory 21 is used not only for storing various programs read from the storage device 22 by the CPU 20 when the management computer 3 is started up, but also used as a work memory for the CPU 20.
- the storage device 22 is composed of, for example, a hard disk device or an SSD, and is used for storing and holding various programs and control data.
- the input device 23 is composed of, for example, a keyboard, switches, a pointing device, a microphone, and the like, and the display device 24 is composed of, for example, a liquid crystal display.
- the interface control unit 25 is an adapter for connecting the management computer 3 to the LAN 6.
- the storage apparatuses 4A and 4B are composed of a plurality of storage devices 30A and 30B, and control units 31A and 31B that control the input and output of data to and from the storage devices 30A and 30B.
- the storage devices 30A and 30B are composed of, for example, an expensive disk such as a SCSI (Small Computer System Interface) disk or an inexpensive disk such as a SATA (Serial AT Attachment) disk or an optical disk.
- a plurality of storage devices 30A and 30B constitute one RAID (Redundant Array of Inexpensive Disks) group, and one or a plurality of logical units are set on a physical storage area provided by one or a plurality of RAID groups.
- data from the host computer 2 is stored in these logical units in blocks of a predetermined size.
- the control units 31A and 31B include CPUs 40A and 40B, memories 41A and 41B, cache memories 42A and 42B, a plurality of host-side ports 43A and 43B, a plurality of storage-device-side ports 44A and 44B, and interface control units 45A and 45B.
- the CPUs 40A and 40B are processors that control the overall operation of the storage apparatuses 4A and 4B, and read various programs stored in the storage devices 30A and 30B into the memories 41A and 41B and execute them.
- the memories 41A and 41B are used for storing various programs read from specific storage devices 30A and 30B by the CPUs 40A and 40B when the storage apparatuses 4A and 4B are started up, and are also used as work memories for the CPUs 40A and 40B.
- the cache memories 42A and 42B are constituted by semiconductor memories, and are mainly used for temporarily storing data exchanged between the host computer 2 and the storage devices 30A and 30B.
- the host-side ports 43A and 43B are adapters for connecting the storage apparatuses 4A and 4B to the SAN 5, and the storage-device-side ports 44A and 44B are adapters for connecting to the storage devices 30A and 30B.
- the interface control units 45A and 45B are adapters for connecting the storage apparatuses 4A and 4B to the LAN 6.
- in the case of this embodiment, one of the two storage apparatuses 4A and 4B is an existing storage apparatus that is currently in use (hereinafter referred to as the migration source storage apparatus 4A), and the other is a new storage apparatus introduced in place of the migration source storage apparatus 4A (hereinafter referred to as the migration destination storage apparatus 4B). In this computer system 1, the data stored in the migration source storage apparatus 4A is therefore migrated to the migration destination storage apparatus 4B by the method described later, after which the migration source storage apparatus 4A is removed.
- the migration destination storage apparatus 4B is equipped with a so-called external connection function that virtualizes logical units in an external storage apparatus (here, the migration source storage apparatus 4A) and provides them to the host computer 2.
- when the migration destination storage apparatus 4B receives a read request for a virtualized logical unit in the migration source storage apparatus 4A, it transfers the read request to the migration source storage apparatus 4A, reads the requested data from the migration source storage apparatus 4A, and transfers the read data to the host computer 2. When the migration destination storage apparatus 4B receives a write request for such a logical unit, it transfers the write request and the write target data to the migration source storage apparatus 4A, thereby writing the data to the corresponding address location in the logical unit.
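As an illustrative sketch (not part of the patent disclosure), the external connection function described above can be modeled as a proxy logical unit that forwards all I/O to the external apparatus; the class and method names here are hypothetical:

```python
# Hypothetical model of the external connection function: the migration
# destination apparatus serves no data locally for a virtualized LU and
# instead forwards every request to the migration source apparatus.

class SourceLogicalUnit:
    """Stands in for a migration source logical unit (72A)."""
    def __init__(self):
        self.blocks = {}                     # block address (LBA) -> data

    def read(self, lba):
        return self.blocks.get(lba, b"\x00")

    def write(self, lba, data):
        self.blocks[lba] = data


class ExternallyConnectedLU:
    """Migration destination logical unit (72B) virtualizing a source LU."""
    def __init__(self, external_lu):
        self.external_lu = external_lu       # the mapped external volume

    def read(self, lba):
        # Transfer the read request to the source apparatus, relay the data.
        return self.external_lu.read(lba)

    def write(self, lba, data):
        # Transfer the write request and write data to the source apparatus.
        self.external_lu.write(lba, data)
```

A write issued against the destination LU thus lands in the source LU, and a subsequent read through either apparatus sees the same data.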
- FIG. 2 shows a data structure in the memory 11 of the host computer 2.
- the memory 11 of the host computer 2 stores a path management table 50, an alternate path program 51, and a plurality of application programs 52.
- the path management table 50 is a table for managing the paths connected to the logical volumes that the host computer 2 recognizes as storage areas, and consists of one or more path management entries 53 provided corresponding to the individual logical volumes.
- registered in each path management entry 53 are a logical volume number 54, which is identification information of the corresponding logical volume, and path numbers 55, which are identification information of the paths connected to that logical volume, as will be described later. Accordingly, when a plurality of paths are set in a redundant configuration, a plurality of path numbers are registered in the path management entry 53.
- the paths managed by the path management table 50 may be paths to logical units of different storage apparatuses. However, these logical units need to return the same response to an Inquiry request as defined by the SCSI standard. This is because, if the storage apparatuses present inconsistent interfaces, the host computer may judge that the paths do not lead to the same device and access may be denied.
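As a minimal sketch (outside the patent disclosure), the path management table 50 can be represented as entries keyed by logical volume number, each holding its registered path numbers; the field names are assumptions:

```python
# Sketch of the path management table (50): one path management entry (53)
# per logical volume, holding a logical volume number (54) and the path
# numbers (55) of the paths connected to that volume.

path_management_table = [
    {"logical_volume_number": 0, "path_numbers": [1, 2]},  # redundant paths
    {"logical_volume_number": 1, "path_numbers": [3]},
]

def paths_for_volume(table, volume_number):
    """Return the path numbers registered for a logical volume."""
    for entry in table:
        if entry["logical_volume_number"] == volume_number:
            return entry["path_numbers"]
    return []                                # unknown volume: no paths
```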
- the alternate path program 51 is a program that issues an I / O request to the migration source storage apparatus 4A or the migration destination storage apparatus 4B based on various information registered in the path management table 50. With this alternate path program 51, the logical unit of the migration source storage apparatus 4A or the migration destination storage apparatus 4B can be provided to the application program 52.
- when issuing an I/O request to the migration source storage apparatus 4A or the migration destination storage apparatus 4B, the alternate path program 51 refers to the path management table 50, selects one of the paths associated with the corresponding logical volume, and issues the I/O request to the migration source storage apparatus 4A or the migration destination storage apparatus 4B via the selected path.
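The routing behavior above can be sketched as follows; this is a simplified illustration, and the `send` transport hook and failover-on-error policy are assumptions, not details from the patent:

```python
# Sketch of how an alternate path program (51) might route an I/O request:
# try the paths registered for the volume's entry in order, issuing the
# request over the first path that accepts it.

def issue_io(entry, send, request):
    """Issue `request` over one of the entry's alternate paths.

    `send(path_number, request)` is an assumed transport hook that raises
    IOError when the given path is unavailable.
    """
    last_error = None
    for path in entry["path_numbers"]:
        try:
            return send(path, request)
        except IOError as err:
            last_error = err                 # try the next alternate path
    raise last_error or IOError("no path available")
```

Because the entry may list paths to two different storage apparatuses, the same logic transparently routes I/O to whichever apparatus the surviving path leads to.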
- each application program 52 is a program for executing processing according to the user's business, and reads and writes necessary data, via the logical volume assigned to it, from and to the associated logical unit in the migration source storage apparatus 4A or the migration destination storage apparatus 4B (the logical unit connected to the logical volume via a path).
- FIG. 3 shows a data structure in the memory 21 of the management computer 3.
- a logical unit migration instruction program 60 is stored in the memory 21 of the management computer 3.
- the logical unit migration instruction program 60 is a program for controlling data migration between the migration source storage apparatus 4A and the migration destination storage apparatus 4B, and gives necessary instructions to the host computer 2, the migration source storage apparatus 4A, and the migration destination storage apparatus 4B at the time of data migration.
- FIG. 4 shows the hierarchical structure of the storage area in the migration source storage apparatus 4A.
- the migration source storage apparatus 4A provides the storage area provided by the storage device 30A to the host computer 2 as a logical unit (hereinafter referred to as a migration source logical unit) 72A.
- a plurality of intermediate storage hierarchies for associating the storage device 30A and the migration source logical unit 72A are provided between the storage device 30A and the migration source logical unit 72A.
- the intermediate storage hierarchy can include, for example, a virtual device 70A and a logical device 71A.
- the virtual device 70A is an intermediate storage hierarchy that connects the storage device 30A, which is the lower storage hierarchy, and the logical device 71A, which is the upper storage hierarchy.
- the virtual device 70A is defined on a storage area provided by each storage device 30A constituting the RAID group.
- the logical device 71A is an intermediate storage hierarchy that connects the virtual device 70A, which is the lower storage hierarchy, and the migration source logical unit 72A, which is the upper storage hierarchy, and is a storage area formed from all or part of the storage areas of one or more virtual devices 70A, or by extracting a part of the storage area of a virtual device 70A.
- FIG. 5 shows a data structure in the memory 41A of the migration source storage apparatus 4A.
- the memory 41A of the migration source storage apparatus 4A stores a storage tier management program 84, a logical unit management table 80, a logical device management table 81, and a virtual device management table 82.
- the memory 41A of the migration source storage apparatus 4A also stores a cache directory 83 for managing data temporarily stored in the cache memory 42A in the migration source storage apparatus 4A.
- the storage hierarchy management program 84 is a program for managing the correspondence relationship between the lower and upper storage hierarchies in the migration source storage apparatus 4A, and executes various processing described later based on the information stored in the logical unit management table 80, the logical device management table 81, and the virtual device management table 82.
- the logical unit management table 80 is a table used by the storage hierarchy management program 84 to manage the migration source logical units 72A set in the migration source storage apparatus 4A, and consists of one or more logical unit management entries 85 provided corresponding to the individual migration source logical units 72A.
- registered in each logical unit management entry 85 are a LUN (Logical Unit Number) 86, which is identification information of the corresponding migration source logical unit 72A, a logical device number 87, which is identification information of the logical device 71A (FIG. 4) constituting the migration source logical unit 72A, and Inquiry information 88, which includes configuration information such as the mounting state and preparation state of the migration source logical unit 72A.
- Inquiry information 88 can include, for example, information such as a vendor identifier and a product identifier in addition to information such as a mounting state and a preparation state of the migration source logical unit 72A.
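As an illustrative sketch (not from the patent text), a logical unit management entry 85 and its Inquiry information 88 can be modeled with the following hypothetical field names:

```python
# Sketch of a logical unit management entry (85): LUN (86), logical device
# number (87), and Inquiry information (88) including mounting/preparation
# state plus vendor and product identifiers. All field names are assumed.

logical_unit_management_entry = {
    "lun": 0,
    "logical_device_number": 2,
    "inquiry": {
        "mounted": True,
        "ready": True,
        "vendor_id": "VENDOR",
        "product_id": "PRODUCT",
    },
}

def answer_inquiry(entry):
    """Return a copy of the Inquiry information reported for this LU."""
    return dict(entry["inquiry"])
```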
- the logical device management table 81 is a table for managing the logical device 71A set in the migration source storage apparatus 4A, and is provided corresponding to each logical device 71A in the migration source storage apparatus 4A. It consists of one or more logical device management entries 90. Registered in the logical device management entry 90 are a logical device number 91 of the corresponding logical device 71A and a virtual device number 92 which is identification information of the virtual device 70A (FIG. 4) constituting the logical device 71A.
- the virtual device management table 82 is a table for managing the virtual device 70A set in the migration source storage apparatus 4A, and is provided corresponding to each virtual device 70A in the migration source storage apparatus 4A. It is composed of one or more virtual device management entries 93. In the virtual device management entry 93, a virtual device number of the corresponding virtual device 70A and a storage device number 95 that is identification information of each storage device 30A that provides a storage area to the virtual device 70A are registered.
- the cache directory 83 is information for managing data temporarily stored in the cache memory 42A (FIG. 1), and consists of directory entries 96 provided corresponding to the individual pieces of data stored in the cache memory 42A. Registered in each directory entry 96 are a cache address 97 of the corresponding data stored in the cache memory 42A and data identification information 98.
- the cache address 97 represents the head address of the storage area where the corresponding data in the cache memory 42A is stored.
- the data identification information 98 is identification information of such data, and is generated from, for example, a combination of LUN and LBA (Logical Block Address).
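As a brief sketch (outside the patent disclosure), the cache directory 83 can be modeled as a lookup keyed by data identification information generated from the LUN/LBA combination, as the text describes; the helper names are hypothetical:

```python
# Sketch of the cache directory (83): each directory entry (96) maps data
# identification information (98), generated from the LUN and LBA, to the
# cache address (97) where the data sits in cache memory.

def data_id(lun, lba):
    """Generate data identification information from a LUN/LBA pair."""
    return (lun, lba)

cache_directory = {}                         # data identification -> address

def cache_store(lun, lba, cache_address):
    cache_directory[data_id(lun, lba)] = cache_address

def cache_lookup(lun, lba):
    """Return the cache address of the block, or None on a cache miss."""
    return cache_directory.get(data_id(lun, lba))
```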
- FIG. 6 shows the hierarchical structure of the storage area in the migration destination storage apparatus 4B.
- the migration destination storage apparatus 4B has the external connection function described above, and provides the storage area provided by its own storage devices 30B, or the externally connected migration source logical unit 72A in the migration source storage apparatus 4A, to the host computer 2 as a logical unit (hereinafter referred to as a migration destination logical unit) 72B.
- as in the migration source storage apparatus 4A, one or a plurality of intermediate storage hierarchies for associating the lower storage hierarchy with the migration destination logical unit 72B are provided between them.
- the intermediate storage hierarchy can include a virtual device 70B and a logical device 71B; however, the virtual device 70B and the logical device 71B are not necessarily required, and one or both of them may be omitted.
- the virtual device 70B is an intermediate storage hierarchy that connects the storage device 30B or the migration source logical unit 72A, which is the lower storage hierarchy, and the logical device 71B, which is the upper storage hierarchy.
- the virtual device 70B is defined on the storage area provided by each storage device 30B constituting the RAID group.
- when the lower storage hierarchy is the migration source logical unit 72A, the virtual device 70B transfers read requests and write requests from the host computer 2 to the migration source storage apparatus 4A. In this way, the migration source logical unit 72A is virtualized as if it were a logical unit (the migration destination logical unit 72B) in the migration destination storage apparatus 4B.
- the logical device 71B is an intermediate storage hierarchy that connects the virtual device 70B, which is the lower storage hierarchy, and the migration destination logical unit 72B, which is the upper storage hierarchy, and is a storage area formed from all or part of the storage areas of one or more virtual devices 70B, or by extracting a part of the storage area of a virtual device 70B.
- FIG. 7 shows a data configuration in the memory 41B of the migration destination storage apparatus 4B.
- the memory 41B of the migration destination storage apparatus 4B stores a storage hierarchy management program 105, a logical device copy program 106, a logical unit management table 100, a logical device management table 101, a virtual device management table 102, and a logical device copy management table 103.
- the memory 41B of the migration destination storage apparatus 4B also stores a cache directory 104 for managing the data temporarily stored in the cache memory 42B (FIG. 1) in the migration destination storage apparatus 4B.
- the storage hierarchy management program 105 is a program for managing the connection between the lower storage apparatus and the upper storage apparatus in the migration destination storage apparatus 4B, and is similar to the storage hierarchy management program 84 of the migration source storage apparatus 4A described above with reference to FIG. It has the function of.
- the logical device copy program 106 is a program for controlling data migration from the migration source storage apparatus 4A to the migration destination storage apparatus 4B. Based on the logical device copy program 106, the migration destination storage apparatus 4B copies the data stored in the logical device 71A of the migration source storage apparatus 4A to the corresponding logical device 71B in the migration destination storage apparatus 4B.
- the logical unit management table 100 is a table used by the storage hierarchy management program 105 to manage the migration destination logical unit 72B set in the migration destination storage apparatus 4B. Since this logical unit management table 100 has the same configuration as the logical unit management table 80 of the migration source storage apparatus 4A described above with reference to FIG. 5, the description thereof is omitted here.
- the logical device management table 101 is a table for managing the logical devices 71B set in the migration destination storage apparatus 4B, and consists of one or more logical device management entries 111 provided corresponding to the individual logical devices 71B. Registered in each logical device management entry 111 are the logical device number 112 of the corresponding logical device 71B and the virtual device number 113 of the virtual device 70B constituting the logical device 71B. A read cache mode flag 114 and a write cache mode flag 115 for the corresponding logical device 71B are also registered in the logical device management entry 111.
- the read cache mode flag 114 is a flag indicating whether or not the read cache mode is set for the corresponding logical device 71B, and the write cache mode flag 115 is a flag indicating whether or not the write cache mode is set for that logical device. Each of the read cache mode flag 114 and the write cache mode flag 115 takes a value of "on" or "off".
- when the read cache mode flag 114 is "on", the read cache mode is set to "on"; in this case, read data is temporarily stored in the cache memory 42B when a read request from the host computer 2 is processed. When the read cache mode flag 114 is "off", the read cache mode is set to "off", and read data is not temporarily stored in the cache memory 42B.
- similarly, when the write cache mode flag 115 is "on", the write cache mode is set to "on"; in this case, write data is temporarily stored in the cache memory 42B when a write request from the host computer 2 is processed. When the write cache mode flag 115 is "off", the write cache mode is set to "off", and write data is not temporarily stored in the cache memory 42B.
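The effect of the two flags can be sketched as follows. This is a simplification (not the patent's implementation): writes here always reach the lower storage hierarchy, and the flags only gate whether data is staged into the cache:

```python
# Sketch of the read cache mode flag (114) and write cache mode flag (115):
# each gates whether read data / write data is temporarily stored in the
# cache when a request is processed. Simplified to a write-through model.

class LogicalDevice:
    def __init__(self, read_cache_on, write_cache_on):
        self.read_cache_on = read_cache_on   # read cache mode flag
        self.write_cache_on = write_cache_on # write cache mode flag
        self.cache = {}                      # stands in for cache memory 42B
        self.backing = {}                    # lower storage hierarchy

    def read(self, lba):
        if self.read_cache_on and lba in self.cache:
            return self.cache[lba]           # cache hit
        data = self.backing.get(lba, b"\x00")
        if self.read_cache_on:
            self.cache[lba] = data           # stage read data only when "on"
        return data

    def write(self, lba, data):
        if self.write_cache_on:
            self.cache[lba] = data           # stage write data only when "on"
        self.backing[lba] = data             # simplified: always write through
```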
- the virtual device management table 102 is a table for managing the virtual devices 70B in the migration destination storage apparatus 4B, and consists of one or more virtual device management entries 116 provided corresponding to the individual virtual devices 70B. Registered in each virtual device management entry 116 are the virtual device number 117 of the corresponding virtual device 70B and lower storage hierarchy identification information 118, which is identification information of the lower storage hierarchy associated with the virtual device 70B.
- when the lower storage hierarchy is a storage device 30B, the identification information of the storage device 30B is registered as the lower storage hierarchy identification information 118; when the lower storage hierarchy is the migration source logical unit 72A, the network address (Fibre Channel address) and LUN of the migration source logical unit 72A are registered as the lower storage hierarchy identification information 118.
- the logical device copy management table 103 is a table used by the logical device copy program 106 to manage the progress of data copying (data migration) between the migration source storage apparatus 4A and the migration destination storage apparatus 4B, and consists of one or more logical device copy management entries 119 provided corresponding to the data to be copied.
- the cache directory 104 is information for managing data temporarily stored in the cache memory 42B. Since the cache directory 104 has the same configuration as the cache directory 83 of the migration source storage apparatus 4A described above with reference to FIG. 5, the description thereof is omitted here.
- (1-2) Data Migration Processing in the Computer System
- (1-2-1) Outline of Data Migration Processing in the Computer System
- next, an outline will be given of the data migration processing executed when the migration source storage apparatus 4A is replaced with the migration destination storage apparatus 4B and the data stored in the migration source storage apparatus 4A is migrated to the migration destination storage apparatus 4B.
- the data migration processing consists of two steps: access destination migration processing, which migrates the access destination of the host computer 2 from the migration source logical unit 72A in the migration source storage apparatus 4A to the migration destination logical unit 72B in the migration destination storage apparatus 4B, and data copy processing, which copies the data stored in the migration source logical unit 72A of the migration source storage apparatus 4A to the corresponding migration destination logical unit 72B of the migration destination storage apparatus 4B.
- FIG. 8 conceptually shows the flow of such access destination migration processing.
- This access destination migration process is performed by the migration destination storage apparatus 4B and the host computer 2 executing necessary processes in accordance with instructions given from the management computer 3 to the migration destination storage apparatus 4B and the host computer 2, respectively.
- in this access destination migration processing, the migration destination storage apparatus 4B first prepares to switch the correspondence destination of the logical volume VOL of the host computer 2 from the migration source logical unit 72A to the migration destination logical unit 72B in accordance with the instruction given from the management computer 3. Specifically, the migration destination storage apparatus 4B maps the migration source logical unit 72A to the migration destination logical unit 72B as an external volume (SP1). By this processing, the migration source logical unit 72A is virtualized as the migration destination logical unit 72B, and the host computer 2 can read and write data from and to the migration source logical unit 72A via the migration destination storage apparatus 4B.
- Next, the migration destination storage apparatus 4B issues an Inquiry request to the migration source storage apparatus 4A, thereby acquiring the Inquiry information of the migration source logical unit 72A, and sets the acquired Inquiry information as the Inquiry information of the migration destination logical unit 72B to which the migration source logical unit 72A is mapped (SP2).
- As a result of step SP2, when the host computer 2 later adds the path PT2 to the migration destination logical unit 72B as a path related to the logical volume VOL as described later, the host computer 2 can recognize the path PT1 to the migration source logical unit 72A and the path PT2 to the migration destination logical unit 72B as alternate paths of the same logical volume VOL.
- After that, the host computer 2 deletes the path PT1 from the logical volume VOL to the migration source logical unit 72A, as described later. As a result, all read requests and write requests for the logical volume VOL are transmitted to the migration destination storage apparatus 4B, and the read processing and write processing for those requests are executed by the migration destination storage apparatus 4B.
- Since the host computer 2 recognizes that the read requests and write requests are still being issued to the migration source storage apparatus 4A, the data input/output processing in the host computer 2 does not stop.
- Accordingly, the management computer 3 instructs the host computer 2 to add the path PT2 to the migration destination logical unit 72B as an alternate path of the logical volume VOL, and to delete the path PT1 to the migration source logical unit 72A from the alternate paths of the logical volume VOL. With this processing, the logical unit associated with the logical volume VOL can be migrated from the migration source logical unit 72A to the migration destination logical unit 72B without stopping the exchange of data.
- the access destination of the host computer 2 can be switched from the migration source logical unit 72A to the migration destination logical unit 72B.
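- The four operations described above (SP1, SP2, path addition, path deletion) can be summarized in a minimal Python sketch. All class names and identifiers here ("LU72A", "PT1", and so on) are illustrative assumptions, not part of the embodiment.

```python
# Toy model of the access destination migration flow of FIG. 8.
# Names (DestinationStorage, Host, "LU72A", "PT1", ...) are hypothetical.

class DestinationStorage:
    def __init__(self):
        self.external_map = {}   # destination LU -> mapped source LU (SP1)
        self.inquiry = {}        # destination LU -> Inquiry information (SP2)

    def map_external(self, src_lu, dst_lu):
        # SP1: virtualize the migration source LU as the destination LU
        self.external_map[dst_lu] = src_lu

    def set_inquiry(self, dst_lu, src_inquiry):
        # SP2: the destination LU reports the source LU's Inquiry data,
        # so the host treats PT1 and PT2 as alternate paths of one volume
        self.inquiry[dst_lu] = src_inquiry


class Host:
    def __init__(self):
        self.alternate_paths = {"VOL": ["PT1"]}  # path to the source LU

    def add_path(self, vol, path):
        self.alternate_paths[vol].append(path)

    def delete_path(self, vol, path):
        self.alternate_paths[vol].remove(path)


dst = DestinationStorage()
host = Host()
dst.map_external("LU72A", "LU72B")              # SP1
dst.set_inquiry("LU72B", {"lu_id": "LU72A"})    # SP2
host.add_path("VOL", "PT2")     # both paths active for a short period
host.delete_path("VOL", "PT1")  # all I/O now flows to the destination
```

After the last step only the path PT2 remains, so every read request and write request for the logical volume VOL reaches the migration destination storage apparatus.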
- Note that if, during the period after the path PT2 from the logical volume VOL to the migration destination logical unit 72B is added in the host computer 2 and before the path PT1 from the logical volume VOL to the migration source logical unit 72A is deleted, a read request or a write request from the host computer 2 is given to the migration destination storage apparatus 4B, data consistency between the migration source storage apparatus 4A and the migration destination storage apparatus 4B cannot be maintained.
- the migration destination storage apparatus 4B may respond with old data stored in the cache memory 42B.
- That is, when the host computer 2 adds the path PT2 to the migration destination logical unit 72B as a path from the logical volume VOL to the logical unit, the host computer 2 issues read requests and write requests for the migration source logical unit 72A associated with the logical volume VOL using either the path PT1 or the path PT2.
- For example, suppose the host computer 2 updates the data stored in the migration source logical unit 72A via the path PT1 to the migration source logical unit 72A, and then reads the same data via the path PT2 to the migration destination logical unit 72B. In this case, if the read cache mode of the corresponding logical device 71B in the migration destination storage apparatus 4B is set to "on" and the pre-update data exists in the cache memory 42B of the migration destination storage apparatus 4B, the pre-update data is read from the cache memory 42B of the migration destination storage apparatus 4B and transmitted to the host computer 2.
- Similarly, suppose the write cache mode of the corresponding logical device 71B in the migration destination storage apparatus 4B is set to "on" and the host computer 2 writes data to the migration destination storage apparatus 4B via the path PT2 to the migration destination logical unit 72B. In this case, the write data is stored in the cache memory 42B of the migration destination storage apparatus 4B and only later transferred to the migration source storage apparatus 4A. Therefore, if, before the write data is transferred from the migration destination storage apparatus 4B to the migration source storage apparatus 4A, the host computer 2 reads the same data from the migration source storage apparatus 4A via the path PT1 to the migration source logical unit 72A, the pre-update data is read from the migration source storage apparatus 4A and transmitted to the host computer 2.
- Therefore, in this embodiment, before giving an instruction to the host computer 2 to add the path PT2 to the migration destination logical unit 72B as an alternate path of the logical volume VOL, the management computer 3 gives an instruction to the migration destination storage apparatus 4B to set both the read cache mode and the write cache mode of the corresponding logical device 71B to "off". As a result, data consistency between the migration source storage apparatus 4A and the migration destination storage apparatus 4B can be ensured.
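- The consistency hazard can be made concrete with a toy cache model (all names hypothetical; the real apparatus manages the cache modes per logical device): with the read cache on, a leftover cache entry masks an update made through the other path, while with the cache off every access passes through to the mapped source logical unit.

```python
# Toy illustration of why both cache modes are set "off" while the
# paths PT1 and PT2 are simultaneously active. All names hypothetical.

class SourceLU:
    def __init__(self):
        self.data = {}

class DestinationLU:
    """Destination LU with the source LU mapped as an external volume."""
    def __init__(self, source):
        self.source = source
        self.cache = {}          # cache memory 42B (simplified)
        self.read_cache_on = True
        self.write_cache_on = True

    def write(self, block, value):
        if self.write_cache_on:
            self.cache[block] = value        # destaged to the source later
        else:
            self.source.data[block] = value  # write-through to the source

    def read(self, block):
        if self.read_cache_on and block in self.cache:
            return self.cache[block]         # may be stale
        return self.source.data[block]       # always the current data

src = SourceLU()
dst = DestinationLU(src)
dst.cache[7] = "old"       # pre-update data lingering in the cache
src.data[7] = "new"        # host updates block 7 via path PT1
stale = dst.read(7)        # cache on: host receives "old" via PT2
dst.read_cache_on = False
dst.write_cache_on = False
fresh = dst.read(7)        # cache off: host receives "new"
```

This is exactly the ordering the management computer enforces: the cache modes are switched off before the path PT2 is added, and switched back on only after the path PT1 has been deleted.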
- FIG. 9 conceptually shows the flow of data copy processing in the data migration processing.
- This data copy processing is performed by the migration destination storage apparatus 4B executing necessary processing in accordance with an instruction given from the management computer 3 to the migration destination storage apparatus 4B.
- In this data copy processing, the migration destination storage apparatus 4B creates a new virtual device 70BX associated with the storage apparatus 30B according to an instruction from the management computer 3, and also creates a new logical device 71BX associated with the new virtual device 70BX.
- Then, the migration destination storage apparatus 4B copies the data from the logical device 71B to the new logical device 71BX, and then replaces the virtual device 70B with the new virtual device 70BX, thereby associating the migration destination logical unit 72B with the storage device 30B.
- FIG. 10 shows the processing procedure of the data migration control process executed by the logical unit migration instruction program 60 (FIG. 3) stored in the memory 21 of the management computer 3 in relation to the data migration processing according to this embodiment described above.
- When the logical unit migration instruction program 60 is instructed by the system administrator via the input device 23 of the management computer 3 to perform data migration from the migration source storage apparatus 4A to the migration destination storage apparatus 4B, it starts the data migration control process shown in FIG. 10.
- First, the logical unit migration instruction program 60 instructs the migration destination storage apparatus 4B to map each of the one or more migration source logical units 72A designated by the system administrator, as an external volume, to the respective migration destination logical unit 72B designated by the system administrator (hereinafter referred to as an external volume setting instruction) (SP10).
- In accordance with this external volume setting instruction, the migration destination storage apparatus 4B executes external volume setting processing for mapping each migration source logical unit 72A designated by the system administrator as an external volume to the corresponding migration destination logical unit 72B designated by the system administrator. When the external volume setting processing is completed, the migration destination storage apparatus 4B transmits an external volume setting process completion notification to the management computer 3.
- When the logical unit migration instruction program 60 receives the external volume setting process completion notification, it instructs the migration destination storage apparatus 4B to set the Inquiry information of each migration source logical unit 72A as the Inquiry information of the corresponding migration destination logical unit 72B (hereinafter referred to as an inquiry information setting instruction) (SP11).
- the migration destination storage apparatus 4B executes the inquiry information setting process for setting the inquiry information of each migration source logical unit 72A as the inquiry information of the corresponding migration destination logical unit 72B in accordance with this inquiry information setting instruction. Further, when the inquiry information setting process is completed, the migration destination storage apparatus 4B transmits an inquiry information setting process completion notification to the management computer 3.
- When the logical unit migration instruction program 60 of the management computer 3 receives the inquiry information setting process completion notification, it gives the migration destination storage apparatus 4B an instruction (hereinafter referred to as a cache mode off instruction) to set both the read cache mode and the write cache mode of each migration destination logical unit 72B to "off" (SP12).
- the migration destination storage apparatus 4B executes a cache mode off process for setting both the read cache mode and the write cache mode of each migration destination logical unit 72B to off. Further, when the cache mode off process is completed, the migration destination storage apparatus 4B transmits a cache mode off process completion notification to the management computer 3.
- When the logical unit migration instruction program 60 of the management computer 3 receives the cache mode off process completion notification, it gives the host computer 2 an instruction to add the path PT2 (FIG. 8) to each migration destination logical unit 72B as an alternate path of the corresponding logical volume VOL (hereinafter referred to as an alternate path addition instruction) (SP13).
- In accordance with this alternate path addition instruction, the host computer 2 executes an alternate path addition process for adding the path PT2 to each migration destination logical unit 72B as an alternate path of the corresponding logical volume VOL. When the alternate path addition process is completed, the host computer 2 transmits an alternate path addition process completion notification to the management computer 3.
- As a result, the alternate path program 51 of the host computer 2 can issue read requests and write requests for the logical volume VOL not only to the migration source logical unit 72A but also to the corresponding migration destination logical unit 72B. Specifically, the alternate path program 51 randomly selects one path number from the plurality of path numbers included in the path management entry 53 (FIG. 2) associated with the logical volume VOL, and issues the read request or write request using the selected path PT1 or path PT2.
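- The random selection from the path management entry 53 can be sketched as follows; the table layout is a simplification, and `issue_request` is a hypothetical helper, not a function of the embodiment.

```python
# Sketch of the path selection in the alternate path program 51: one
# path number is picked at random from the path management entry for
# the logical volume, and the request is issued over that path.
import random

path_management_entry = {"VOL": ["PT1", "PT2"]}  # simplified table

def issue_request(vol, request):
    path = random.choice(path_management_entry[vol])  # random selection
    return path, request

path, req = issue_request("VOL", ("read", 0))
```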
- the host computer 2 executes an alternate path deletion process for deleting the path PT1 to the migration source logical unit 72A from the alternate path of the logical volume VOL in accordance with this alternate path deletion instruction.
- the host computer 2 transmits an alternate path deletion process completion notification to the management computer 3.
- When the logical unit migration instruction program 60 of the management computer 3 receives the alternate path deletion process completion notification, it gives the migration destination storage apparatus 4B an instruction to set both the read cache mode and the write cache mode of the logical device 71B associated with each migration destination logical unit 72B to "on" (hereinafter referred to as a cache mode on instruction) (SP15).
- the migration destination storage apparatus 4B performs cache mode on processing for setting both the read cache mode and the write cache mode of each logical device 71B associated with the migration destination logical unit 72B to “on” in accordance with the cache mode on instruction. Execute. Further, when the cache mode on process is completed, the migration destination storage apparatus 4B transmits a cache mode on process completion notification to the management computer 3.
- When the logical unit migration instruction program 60 of the management computer 3 receives the cache mode on process completion notification, it gives the migration destination storage apparatus 4B an instruction to create a new logical device 71BX (FIG. 9) corresponding to each logical device 71B whose read cache mode and write cache mode were both set to "on" as described above (hereinafter referred to as a logical device creation instruction) (SP16).
- In accordance with this logical device creation instruction, the migration destination storage apparatus 4B executes logical device creation processing for creating the required number of new logical devices 71BX. When the logical device creation processing is completed, the migration destination storage apparatus 4B transmits a logical device creation process completion notification to the management computer 3.
- When the logical unit migration instruction program 60 of the management computer 3 receives the logical device creation process completion notification, it gives the migration destination storage apparatus 4B an instruction (hereinafter referred to as a logical device copy instruction) to copy the data stored in each logical device 71B associated with a migration destination logical unit 72B to the corresponding new logical device 71BX created in the migration destination storage apparatus 4B by the logical device creation instruction in step SP16 (SP17).
- In accordance with this logical device copy instruction, the migration destination storage apparatus 4B executes logical device copy processing for copying the data stored in each logical device 71B (that is, the data held in the migration source storage apparatus 4A) to the corresponding new logical device 71BX. When the logical device copy processing is completed, the migration destination storage apparatus 4B transmits a logical device copy process completion notification to the management computer 3.
- When the logical unit migration instruction program 60 of the management computer 3 receives the logical device copy process completion notification, it gives the migration destination storage apparatus 4B an instruction to replace each new virtual device 70BX associated with a new logical device 71BX with the corresponding virtual device 70B associated with the corresponding migration destination logical unit 72B (hereinafter referred to as a virtual device replacement instruction) (SP18).
- In accordance with this virtual device replacement instruction, the migration destination storage apparatus 4B executes virtual device replacement processing for replacing each virtual device 70B associated with a migration destination logical unit 72B with the corresponding new virtual device 70BX. When the virtual device replacement processing is completed, the migration destination storage apparatus 4B transmits a virtual device replacement process completion notification to the management computer 3.
- FIG. 11 shows the processing procedure of the above-described external volume setting process executed by the storage tier management program 105 of the migration destination storage apparatus 4B that has received the external volume setting instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP10 of the data migration control process (FIG. 10).
- When the storage tier management program 105 receives the external volume setting instruction, it starts the external volume setting process shown in FIG. 11, and first creates the required number of new virtual devices 70B by adding the required number of virtual device management entries 116 to the virtual device management table 102 (SP20). At this time, the storage tier management program 105 registers different unused virtual device numbers in these virtual device management entries 116 as the virtual device numbers 117 of the virtual devices 70B, and registers the Fibre Channel address and LUN of the corresponding migration source logical unit 72A in these virtual device management entries 116 as the lower storage tier identification information 118 of the virtual devices 70B.
- Next, the storage tier management program 105 creates the required number of new logical devices 71B by adding the required number of logical device management entries 111 to the logical device management table 101 (SP21). At this time, the storage tier management program 105 registers unused logical device numbers in these logical device management entries 111 as the logical device numbers 112 of the logical devices 71B, and registers, as the virtual device number 113, the virtual device number 117 registered in the corresponding virtual device management entry 116 added to the virtual device management table 102 in step SP20.
- Further, the storage tier management program 105 creates the required number of new migration destination logical units 72B by adding the required number of logical unit management entries 107 to the logical unit management table 100 (SP22). At this time, the storage tier management program 105 registers an unused LUN in each logical unit management entry 107 as the LUN 108 of the newly created migration destination logical unit 72B, and registers, as the logical device number 109, the logical device number 112 registered in the corresponding logical device management entry 111 added to the logical device management table 101 in step SP21.
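- A minimal sketch of the three table insertions of steps SP20 to SP22; the dictionary field names follow the reference numerals in the text, while the function name, the address format, and the table representation are assumptions.

```python
# SP20-SP22: chain a new virtual device entry (pointing at the source LU),
# a new logical device entry, and a new logical unit entry together.

virtual_device_table = []   # virtual device management table 102
logical_device_table = []   # logical device management table 101
logical_unit_table = []     # logical unit management table 100

def external_volume_setting(src_fc_address, src_lun, new_lun):
    vdev_no = len(virtual_device_table)          # unused number
    virtual_device_table.append({                # SP20
        "virtual_device_number_117": vdev_no,
        "lower_tier_identification_118": (src_fc_address, src_lun)})
    ldev_no = len(logical_device_table)          # unused number
    logical_device_table.append({                # SP21
        "logical_device_number_112": ldev_no,
        "virtual_device_number_113": vdev_no})
    logical_unit_table.append({                  # SP22
        "lun_108": new_lun,
        "logical_device_number_109": ldev_no})

external_volume_setting("fc:0A", 0, 10)
```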
- FIG. 12 shows the processing procedure of the inquiry information setting process executed by the storage tier management program 105 of the migration destination storage apparatus 4B that has received the inquiry information setting instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP11 of the data migration control process (FIG. 10).
- When the storage tier management program 105 receives the inquiry information setting instruction, it starts the inquiry information setting process shown in FIG. 12, and first transmits to the migration source storage apparatus 4A an Inquiry request, that is, a transfer request for the Inquiry information of each migration source logical unit 72A (SP30).
- When the storage tier management program 105 receives the Inquiry information of each migration source logical unit 72A transferred from the migration source storage apparatus 4A in response to the Inquiry request (SP31), it sets the received Inquiry information as the Inquiry information of the corresponding migration destination logical unit 72B (SP32). Specifically, the storage tier management program 105 registers the received Inquiry information of the migration source logical unit 72A in the logical unit management entry 107 of the corresponding migration destination logical unit 72B in the logical unit management table 100 (FIG. 7) as the Inquiry information 110 of that logical unit 72B.
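- Steps SP30 to SP32 amount to one request/response round trip followed by a table update; a sketch with hypothetical names (the Inquiry response fields shown are illustrative stand-ins):

```python
# SP30/SP31: obtain the Inquiry information from the source apparatus;
# SP32: register it in the destination LU's logical unit management entry.

class MigrationSource:
    def inquiry(self):
        # stand-in for the source apparatus's Inquiry response
        return {"vendor": "V", "product": "P", "lu_name": "LU72A"}

def inquiry_setting(source, logical_unit_entry):
    response = source.inquiry()                               # SP30, SP31
    logical_unit_entry["inquiry_information_110"] = response  # SP32
    return logical_unit_entry

entry = inquiry_setting(MigrationSource(), {"lun_108": 10})
```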
- FIG. 13 shows the processing procedure of the cache mode off process executed by the storage tier management program 105 of the migration destination storage apparatus 4B that has received the cache mode off instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP12 of the data migration control process (FIG. 10).
- When the storage tier management program 105 receives the cache mode off instruction, it starts the cache mode off process shown in FIG. 13, and first sets the read cache mode of each logical device 71B corresponding to each migration destination logical unit 72B designated in the cache mode off instruction to "off" (SP40).
- Specifically, the storage tier management program 105 sets to "off" the read cache mode flag 114 of each logical device management entry 111 that, among the logical device management entries 111 constituting the logical device management table 101 (FIG. 7), corresponds to a migration destination logical unit 72B designated in the cache mode off instruction.
- Next, the storage tier management program 105 sets the write cache mode of each logical device 71B corresponding to each migration destination logical unit 72B to "off" (SP41). Specifically, the storage tier management program 105 sets the write cache mode flag 115 of each of these logical device management entries 111 to "off".
- FIG. 14 shows the processing procedure of the alternate path addition process executed by the alternate path program 51 (FIG. 2) of the host computer 2 that has received the alternate path addition instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP13 of the data migration control process (FIG. 10).
- When the alternate path program 51 receives the alternate path addition instruction, it starts the alternate path addition process shown in FIG. 14, and first transmits to the migration destination storage apparatus 4B a discovery request for requesting a list of the migration destination logical units 72B that the migration destination storage apparatus 4B provides to the host computer 2 (hereinafter referred to as a migration destination logical unit list) (SP50).
- Then, based on the migration destination logical unit list, the alternate path program 51 adds the path PT2 to each migration destination logical unit 72B as an alternate path of the corresponding logical volume VOL (SP52). Specifically, the alternate path program 51 additionally registers the path number 55 (FIG. 2) of the path PT2 to the corresponding migration destination logical unit 72B in the path management entry 53 (FIG. 2) corresponding to each logical volume VOL in the path management table 50 (FIG. 2).
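- The matching step can be sketched as follows. The assumption that paths belonging to the same logical volume are recognized by identical Inquiry information follows from the description of step SP2; the table shape and function name are illustrative.

```python
# SP50/SP52: discover the destination LUs, then register a new path
# number in the path management entry of each matching logical volume.

path_management_table = {"VOL": {"inquiry": "LU72A", "path_numbers": [1]}}

def add_alternate_paths(migration_destination_lu_list):
    for lu in migration_destination_lu_list:
        for entry in path_management_table.values():
            if entry["inquiry"] == lu["inquiry"]:   # same logical volume
                entry["path_numbers"].append(lu["path_number"])

# list obtained via the discovery request (SP50), simplified
add_alternate_paths([{"inquiry": "LU72A", "path_number": 2}])
```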
- FIG. 15 shows the processing procedure of the alternate path deletion process executed by the alternate path program 51 of the host computer 2 that has received the alternate path deletion instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP14 of the data migration control process (FIG. 10).
- When the alternate path program 51 receives the alternate path deletion instruction, it deletes each path PT1 (FIG. 8) connecting the logical volume VOL and the migration source logical unit 72A in the migration source storage apparatus 4A from the alternate paths of the logical volume VOL (SP60). Specifically, the alternate path program 51 deletes the path number 55 (FIG. 2) of the path PT1 from the path management entry 53 (FIG. 2) corresponding to the logical volume VOL in the path management table 50 (FIG. 2).
- the alternate path program 51 transmits an alternate path deletion process completion notification to the management computer 3 (SP61), and thereafter ends this alternate path deletion process.
- (Cache Mode On Process) FIG. 16 shows the processing procedure of the cache mode on process executed by the storage tier management program 105 of the migration destination storage apparatus 4B that has received the cache mode on instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP15 of the data migration control process (FIG. 10).
- When the storage tier management program 105 receives the cache mode on instruction, it starts the cache mode on process shown in FIG. 16, and first sets the read cache mode of each migration destination logical unit 72B designated in the cache mode on instruction to "on" (SP70). Specifically, the storage tier management program 105 sets to "on" the read cache mode flag 114 (FIG. 7) of each logical device management entry 111 that, among the logical device management entries 111 constituting the logical device management table 101 (FIG. 7), corresponds to a migration destination logical unit 72B designated in the cache mode on instruction.
- Next, the storage tier management program 105 sets the write cache mode of each migration destination logical unit 72B to "on" (SP71). Specifically, the storage tier management program 105 sets the write cache mode flag 115 (FIG. 7) of each logical device management entry 111 described above to "on".
- FIG. 17 shows the processing procedure of the logical device creation process executed by the storage tier management program 105 of the migration destination storage apparatus 4B that has received the logical device creation instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP16 of the data migration control process (FIG. 10).
- When the storage tier management program 105 receives the logical device creation instruction, it starts the logical device creation process shown in FIG. 17, and first creates the required number of new virtual devices 70BX (FIG. 9) by adding the required number of virtual device management entries 116 to the virtual device management table 102 (FIG. 7) (SP80). At this time, the storage tier management program 105 registers an unused virtual device number in each corresponding virtual device management entry 116 as the virtual device number 117 (FIG. 7) of the newly created virtual device 70BX, and registers the identification information of the corresponding storage device 30B as the lower storage tier identification information 118.
- Next, the storage tier management program 105 creates the required number of new logical devices 71BX (FIG. 9) by adding the required number of logical device management entries 111 to the logical device management table 101 (SP81). At this time, the storage tier management program 105 registers an unused logical device number in each corresponding logical device management entry 111 as the logical device number 112 (FIG. 7) of the newly created logical device 71BX, and registers, as the virtual device number 113 (FIG. 7) of the virtual device 70BX corresponding to that logical device 71BX, the virtual device number 117 registered in the corresponding virtual device management entry 116 added to the virtual device management table 102 in step SP80.
- FIG. 18 shows the processing procedure of the logical device copy process executed by the logical device copy program 106 (FIG. 7) of the migration destination storage apparatus 4B that has received the logical device copy instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP17 of the data migration control process (FIG. 10).
- When the logical device copy program 106 receives the logical device copy instruction, it starts the logical device copy process shown in FIG. 18, and first selects one logical device 71B from among the copy-target logical devices 71B designated in the logical device copy instruction (SP90).
- the logical device copy program 106 selects one unit area in the logical device 71B selected in step SP90 (SP91).
- This unit area is a storage area having the same size as the data write unit for the logical device 71B.
- Next, the logical device copy program 106 determines whether the unit area selected in step SP91 has not been updated (that is, whether no data has yet been stored in it) (SP92).
- If the logical device copy program 106 obtains a positive result in this determination, it proceeds to step SP94; if it obtains a negative result, it proceeds to step SP93 and copies the data stored in that unit area of the logical device 71B to the corresponding new logical device 71BX created in step SP81 of the logical device creation process described above with reference to FIG. 17 (SP93). Specifically, the logical device copy program 106 uses the external connection function to read the data stored in the unit area of the logical device 71B selected in step SP90 from the migration source logical unit 72A mapped to that logical device 71B in the migration source storage apparatus 4A, and stores the read data in the new logical device 71BX.
- Next, the logical device copy program 106 determines whether the same processing has been executed for all the unit areas in the logical device 71B selected in step SP90 (SP94). If the logical device copy program 106 obtains a negative result in this determination, it returns to step SP91, and thereafter repeats the processing of steps SP91 to SP94 while sequentially switching the unit area selected in step SP91 to an unprocessed unit area.
- When the logical device copy program 106 eventually obtains an affirmative result in step SP94 by completing steps SP91 to SP94 for all the unit areas in the logical device 71B selected in step SP90, it determines whether the processing of steps SP91 to SP94 has been executed for all the copy-target logical devices 71B designated in the logical device copy instruction (SP95).
- If the logical device copy program 106 obtains a negative result in this determination, it returns to step SP90, and thereafter repeats the same processing while sequentially switching the logical device 71B selected in step SP90.
- When the logical device copy program 106 obtains a positive result in step SP95 by completing the processing of steps SP91 to SP94 for all the copy-target logical devices 71B designated in the logical device copy instruction, it transmits a logical device copy process completion notification to the management computer 3 (SP96), and then ends this logical device copy process.
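- The loop structure of steps SP90 to SP96 can be sketched directly; unit areas are modeled as list slots, with `None` standing for "not yet updated", and all names are illustrative.

```python
# SP90/SP95: outer loop over copy-target logical devices.
# SP91/SP94: inner loop over unit areas.
# SP92: skip areas that were never updated.
# SP93: copy updated areas (read from the mapped source LU in reality).

def logical_device_copy(copy_targets):
    for old_dev, new_dev in copy_targets:
        for area, data in enumerate(old_dev["areas"]):
            if data is None:              # SP92: not updated, nothing to copy
                continue
            new_dev["areas"][area] = data  # SP93
    return "completion notification"       # SP96

ldev_71B = {"areas": ["a", None, "c"]}     # virtualized source data
ldev_71BX = {"areas": [None, None, None]}  # newly created device
status = logical_device_copy([(ldev_71B, ldev_71BX)])
```

Skipping never-updated unit areas (SP92) keeps the copy from reading areas of the source logical unit that hold no data.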
- Note that if, during this logical device copy process, the migration destination storage apparatus 4B writes data to the corresponding logical device 71B in response to a write request, this write data is written to the corresponding migration source logical unit 72A of the migration source storage apparatus 4A, so the data already copied to the logical device 71BX would become old data.
- FIG. 19 shows the processing procedure of the virtual device replacement process executed by the storage tier management program 105 of the migration destination storage apparatus 4B that has received the virtual device replacement instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP18 of the data migration control process (FIG. 10).
- Upon receiving the virtual device replacement instruction, the storage tier management program 105 replaces each virtual device 70B associated with a migration destination logical unit 72B with the corresponding new virtual device 70BX created in step SP80 of the logical device creation process described above with reference to FIG. 17 (SP100).
- Specifically, the storage tier management program 105 exchanges the virtual device number 113 (FIG. 7) registered in the logical device management entry 111 (FIG. 7) of the corresponding logical device 71B in the logical device management table 101 (FIG. 7) with the virtual device number 113 registered in the logical device management entry 111 of the corresponding logical device 71BX newly created in step SP81 of the logical device creation process (FIG. 17). By this processing, the migration destination logical unit 72B, which is the upper storage tier of the logical device 71B, becomes associated with the storage device 30B, which is the lower storage tier of the virtual device 70BX, and the data migration to the migration destination storage apparatus 4B is complete.
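The number exchange described above can be sketched as follows: the replacement is nothing more than swapping the virtual device numbers registered in the two logical device management entries. This is a simplified, hypothetical model (the table is modeled as a plain dict, not the patent's actual data structure).

```python
def replace_virtual_devices(logical_device_table, pairs):
    """Sketch of the virtual device replacement process (step SP100).

    logical_device_table maps a logical device number to its management entry,
    here modeled as a dict holding the virtual device number 113.
    pairs lists (logical device 71B, new logical device 71BX) number pairs.
    """
    for dev_b, dev_bx in pairs:
        entry_b = logical_device_table[dev_b]
        entry_bx = logical_device_table[dev_bx]
        # exchange the virtual device numbers registered in the two entries
        entry_b["virtual_device"], entry_bx["virtual_device"] = (
            entry_bx["virtual_device"], entry_b["virtual_device"])
```

After the swap, the entry for device 71B points at the virtual device that maps to the local storage device 30B, which is exactly how the upper tier is redirected without touching the logical unit itself.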
- FIG. 20 shows a read process procedure executed when the migration destination storage apparatus 4B receives a read request from the host computer 2.
- When the migration destination storage apparatus 4B receives a read request from the host computer 2, it starts this read processing. First, it extracts the LUN of the read-destination migration destination logical unit 72B from the read request and, referring to the logical unit management table 100 (FIG. 7), identifies the logical unit management entry 107 (FIG. 7) corresponding to the extracted LUN (SP110).
- Next, the migration destination storage apparatus 4B refers to the logical device management table 101 (FIG. 7) and identifies the logical device management entry 111 (FIG. 7) corresponding to the logical device number 109 (FIG. 7) registered in the logical unit management entry 107 identified in step SP110 (SP111).
- Next, the migration destination storage apparatus 4B refers to the logical device management table 101 and determines whether the read cache mode flag 114 (FIG. 7) registered in the logical device management entry 111 identified in step SP111 is set to “on” (SP112).
- If the migration destination storage apparatus 4B obtains a negative result in this determination, it proceeds to step SP115; if it obtains a positive result, it proceeds to step SP113. Thus, for example, in the data migration processing described above with reference to FIGS. 10 to 19, if a read request for the migration destination logical unit 72B is received after the read cache mode of the logical device 71B associated with the migration destination logical unit 72B has been set to “off” and before the read cache mode is set to “on”, the process proceeds to step SP115; if a read request for the migration destination logical unit 72B is received at any other timing, the process proceeds to step SP113.
- When the migration destination storage apparatus 4B proceeds to step SP113 as a result of the determination at step SP112, it refers to the cache directory 104 to determine whether a directory entry 122 corresponding to the read data exists in the cache directory 104 (SP113).
- Obtaining a positive result in this determination means that the read data is stored in the cache memory 42B. In this case, the migration destination storage apparatus 4B reads the data from the cache memory 42B and transmits it to the host computer 2 that issued the read request (SP118), and then ends this read process.
- In contrast, obtaining a negative result in the determination at step SP113 means that the read data is not stored in the cache memory 42B.
- In this case, the migration destination storage apparatus 4B adds a directory entry 122 (FIG. 7) corresponding to the data to the cache directory 104 (FIG. 7) (SP114). At this time, it registers the address of an unused area in the cache memory 42B as the cache address 123 (FIG. 7) of the directory entry 122 to be added, and registers the data identification information included in the read request as the data identification information 124 of that entry. The migration destination storage apparatus 4B then proceeds to step SP115.
- When the migration destination storage apparatus 4B proceeds to step SP115, it identifies the corresponding virtual device 70B from the virtual device number 113 registered in the logical device management entry 111 identified in step SP111, and identifies the lower storage tier (the storage device 30B or the migration source logical unit 72A) associated with the virtual device 70B from the lower storage tier identification information 118 (FIG. 7) registered in the virtual device management entry 116 corresponding to that virtual device 70B. The migration destination storage apparatus 4B then transfers the read request received at that time to this lower storage tier (SP115).
- When the migration destination storage apparatus 4B then receives the response (read data) transmitted from the lower storage tier in response to the read request (SP116), it stores the received read data in the cache memory 42B (SP117). Note that when step SP115 was reached from a negative result at step SP112, in step SP117 the cache memory 42B is used only as a temporary storage location for the data, and no directory entry 122 related to this read data is added to the cache directory 104 (FIG. 7).
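The branching of the read processing of FIG. 20 can be summarized in the following simplified sketch. It is an illustration under stated assumptions, not the patent's implementation: the tables are modeled as plain dicts, the cache directory as a set of block keys, and `lower_tier_read` is a hypothetical callback standing in for forwarding to the storage device 30B or the migration source logical unit 72A.

```python
def read_process(request, logical_devices, cache_directory, cache_memory, lower_tier_read):
    """Simplified sketch of the read processing of FIG. 20 (steps SP110-SP118).

    logical_devices: LUN -> {"read_cache_mode": bool, "lower_tier": ...}
    cache_directory: set of cached block keys (models the directory entries 122)
    cache_memory:   dict key -> data (models the cache memory 42B)
    """
    key = (request["lun"], request["block"])
    dev = logical_devices[request["lun"]]               # SP110/SP111: look up the device
    if dev["read_cache_mode"]:                          # SP112: read cache mode on?
        if key in cache_directory:                      # SP113: directory entry exists (hit)
            return cache_memory[key]                    # SP118: serve from cache
        cache_directory.add(key)                        # SP114: register a directory entry
    data = lower_tier_read(dev["lower_tier"], request)  # SP115/SP116: forward to lower tier
    cache_memory[key] = data                            # SP117: stage data in cache memory
    return data                                         # SP118: return to the host
```

With the read cache mode off, data still passes through the cache memory but no directory entry is registered, so later reads go to the lower tier again, which is exactly the behavior needed while the migration source still holds the authoritative copy.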
- FIG. 21 shows a processing procedure of write processing that is executed when the migration destination storage apparatus 4B receives a write request and write data from the host computer 2.
- When the migration destination storage apparatus 4B receives a write request and write data from the host computer 2, it starts the write processing shown in FIG. 21. First, it extracts the LUN of the write destination logical unit 72B from the write request and, referring to the logical unit management table 100 (FIG. 7), identifies the logical unit management entry 107 (FIG. 7) corresponding to the extracted LUN (SP120).
- Next, the migration destination storage apparatus 4B refers to the logical device management table 101 (FIG. 7) and identifies the logical device management entry 111 (FIG. 7) corresponding to the logical device number 109 (FIG. 7) registered in the logical unit management entry 107 identified in step SP120 (SP121).
- Next, the migration destination storage apparatus 4B refers to the logical device management table 101 and determines whether the write cache mode flag 115 (FIG. 7) registered in the logical device management entry 111 identified in step SP121 is set to “on” (SP122).
- If the migration destination storage apparatus 4B obtains a negative result in this determination, it proceeds to step SP125; if it obtains a positive result, it proceeds to step SP123. Thus, for example, in the data migration processing described above with reference to FIGS. 10 to 19, if a write request for the migration destination logical unit 72B is received after the write cache mode of the logical device 71B associated with the migration destination logical unit 72B has been set to “off” (see step SP12 in FIG. 10) and before the write cache mode is set to “on” (see step SP15 in FIG. 10 and FIG. 18), the process proceeds to step SP125; if a write request for the migration destination logical unit 72B is received at any other timing, the process proceeds to step SP123.
- When the migration destination storage apparatus 4B proceeds to step SP123 as a result of the determination at step SP122, it refers to the cache directory 104 (FIG. 7) and determines whether a directory entry 122 (FIG. 7) corresponding to the write data exists (SP123).
- If a negative result is obtained in this determination, the migration destination storage apparatus 4B adds a directory entry 122 corresponding to the write data to the cache directory 104 (SP124). At this time, the migration destination storage apparatus 4B registers the address of an unused area in the cache memory 42B as the cache address 123 of the directory entry 122 to be added, and registers the data identification information included in the write request as the data identification information 124 of that entry. The migration destination storage apparatus 4B then proceeds to step SP125.
- In contrast, obtaining a positive result in the determination at step SP123 means that the pre-update write data is stored in the cache memory 42B. The migration destination storage apparatus 4B then stores the write data in the cache memory 42B, overwriting any pre-update write data held there (SP125). Note that when step SP125 is reached from a negative result at step SP122, only the cache memory 42B is used as a temporary data storage location, and no directory entry 122 related to the write data is added to the cache directory 104.
- Next, the migration destination storage apparatus 4B refers to the logical device management table 101 (FIG. 7) and determines again whether the write cache mode flag 115 (FIG. 7) registered in the logical device management entry 111 (FIG. 7) of the logical device 71B identified in step SP121 is set to “on” (SP126).
- When the migration destination storage apparatus 4B obtains a positive result in the determination at step SP126, it transmits a write response indicating that the write process is complete to the host computer 2 that issued the write request (SP129), and then ends this write process.
- If the migration destination storage apparatus 4B obtains a negative result in the determination at step SP126, it identifies the corresponding virtual device 70B from the virtual device number 113 registered in the logical device management entry 111 of the logical device 71B identified in step SP121, and identifies the lower storage tier (the storage device 30B or the migration source logical unit 72A) associated with the virtual device 70B from the lower storage tier identification information 118 (FIG. 7) registered in the virtual device management entry 116 (FIG. 7) corresponding to that virtual device 70B in the virtual device management table 102 (FIG. 7). The migration destination storage apparatus 4B then transfers the write request and write data received at that time to this lower storage tier (SP127).
- When the migration destination storage apparatus 4B receives the response (write completion notification) transmitted from the lower storage tier in response to the write request (SP128), it transmits a write response indicating that the write process is complete to the host computer 2 that issued the write request (SP129), and then ends this write process.
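The write processing of FIG. 21 mirrors the read path, but the write cache mode flag decides between write-through (forward to the lower tier before responding) and write-back (respond once the data is in cache). The sketch below is a simplified, hypothetical model under the same assumptions as the read sketch, not the patent's implementation.

```python
def write_process(request, data, logical_devices, cache_directory, cache_memory, lower_tier_write):
    """Simplified sketch of the write processing of FIG. 21 (steps SP120-SP129)."""
    key = (request["lun"], request["block"])
    dev = logical_devices[request["lun"]]               # SP120/SP121: look up the device
    if dev["write_cache_mode"]:                         # SP122: write cache mode on?
        if key not in cache_directory:                  # SP123: directory entry exists?
            cache_directory.add(key)                    # SP124: register a directory entry
    cache_memory[key] = data                            # SP125: store/overwrite write data
    if dev["write_cache_mode"]:                         # SP126: write-back
        return "write complete"                         # SP129: respond immediately
    lower_tier_write(dev["lower_tier"], request, data)  # SP127/SP128: write-through to lower tier
    return "write complete"                             # SP129
```

While the write cache mode is off, every write reaches the lower tier (and hence the migration source logical unit 72A), which is what keeps both sides consistent during the migration window.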
- As described above, in the computer system 1 according to this embodiment, the migration source logical unit 72A of the migration source storage apparatus 4A is mapped to the migration destination logical unit 72B of the migration destination storage apparatus 4B, and the Inquiry information of the migration source logical unit 72A is set in the migration destination logical unit 72B; meanwhile, in the host computer 2, the path PT2 (FIG. 8) from the logical volume VOL (FIG. 8) to the migration destination logical unit 72B is added and the path PT1 (FIG. 8) from that logical volume VOL to the migration source logical unit 72A is deleted. Data copy is then executed between the logical device 71A associated with the migration source logical unit 72A and the logical device 71B associated with the migration destination logical unit 72B, thereby performing the data migration between the migration source storage apparatus 4A and the migration destination storage apparatus 4B.
- When executing this data migration, the migration source storage apparatus 4A requires no special function, and the migration can be performed without stopping the exchange of data between the host computer 2 and the storage apparatus (the migration source storage apparatus 4A or the migration destination storage apparatus 4B).
- reference numeral 130 denotes a computer system according to the second embodiment.
- the computer system 130 is configured in the same manner as the computer system 1 according to the first embodiment except that a part of the data migration processing is different.
- FIG. 22 shows the configuration of the path management entry 140 according to this embodiment.
- In the path management entry 53 of the first embodiment, the logical volume number 54 of the corresponding logical volume VOL (FIG. 2) set in the host computer 2 and the path numbers 55 assigned to the paths PT1 and PT2 (FIG. 8) connected to that logical volume VOL are registered. In the path management entry 140 according to the present embodiment, by contrast, the logical volume number 141 of the corresponding logical volume VOL set in the host computer 2 and the path information 142 of each path connected to that logical volume VOL are registered.
- This path information 142 consists of the path number 143 of the path PT1, PT2 connected to the corresponding logical volume VOL and the path status 144 of that path PT1, PT2.
- The path status 144 is information indicating the status of the corresponding path PT1, PT2 and takes one of the values “Active”, “Standby”, and “Unavailable”. “Active” indicates that the storage device 30B, the physical device to which the corresponding path PT1, PT2 is connected, is operating normally and is assigned to the logical device 71B. “Standby” indicates that the storage device 30B is operating normally but is not assigned to the logical device 71B. “Unavailable” indicates that a failure has occurred in the storage device 30B and that the storage device 30B is not assigned to the logical device 71B.
- When processing a read request or a write request for a logical volume VOL, the alternate path program 131 (FIG. 2) of the host computer 2 refers to the corresponding path management entry 140 in the path management table 132 (FIG. 2) and selects, from the plurality of path information 142 associated with the logical volume number 141 of that logical volume VOL, one or more entries whose path status 144 is “Active”. The alternate path program 131 then transmits the read request or write request to the migration source storage apparatus 4A or the migration destination storage apparatus 4B using the one or more paths PT1, PT2 identified by the path numbers 143 of the selected path information 142.
- When the alternate path program 131 detects a failure on a path PT1, PT2 for which “Active” is registered as the path status 144, it changes the path status 144 of that path to “Unavailable”.
- Further, when none of the path information 142 included in a path management entry 140 has a path status 144 of “Active”, the alternate path program 131 identifies one or more entries of the plurality of path information 142 in that path management entry 140 whose path status 144 is “Standby”, and changes the path status 144 of the identified path information 142 to “Active”.
- This failover processing is transparent to the application program 52 (FIG. 2); viewed from the application program 52, the issuance of read requests and write requests to the storage apparatuses (the migration source storage apparatus 4A and the migration destination storage apparatus 4B) never stops.
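The path selection and failover rules above can be sketched as follows. This is an illustrative model only; the entry is represented as a list of dicts, and the function names are hypothetical, not taken from the patent.

```python
def select_active_paths(path_entry):
    """Sketch of the alternate path selection and failover (path statuses of FIG. 22).

    path_entry models a path management entry 140: a list of
    {"path": n, "status": "Active" | "Standby" | "Unavailable"} dicts.
    """
    active = [p for p in path_entry if p["status"] == "Active"]
    if active:
        return active
    # Failover: no Active path remains, so promote the Standby paths to Active
    standby = [p for p in path_entry if p["status"] == "Standby"]
    for p in standby:
        p["status"] = "Active"
    return standby

def mark_failed(path_entry, path_number):
    """On detecting a failure, change the failed path's status to Unavailable."""
    for p in path_entry:
        if p["path"] == path_number:
            p["status"] = "Unavailable"
```

Because promotion happens inside the selection routine, a request issued right after the last Active path fails is simply routed over the newly promoted Standby path, which is why the failover is invisible to the application.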
- FIG. 23 shows the processing procedure of the data migration control process executed by the logical unit migration instruction program 133 (FIG. 3) of the management computer 3 according to the second embodiment.
- Here, the data migration control process according to the present embodiment will be described in comparison with the first embodiment.
- In the data migration control process according to the present embodiment, after issuing the alternate path addition instruction, the logical unit migration instruction program 133 issues a Standby instruction instead of the alternate path deletion instruction.
- In this case, until the migration of the data of the migration source logical unit to the migration destination storage apparatus is completed, the write cache mode associated with the corresponding logical device in the migration destination storage apparatus is not yet set to “on”, so new data is also stored in the migration source logical unit. Therefore, if a failure occurs in the migration destination storage apparatus before the data migration from the migration source storage apparatus to the migration destination storage apparatus is completed, there is the merit that data can still be read and written without any impact by switching the path back to the migration source storage apparatus.
- the data migration control process will be described in more detail.
- When the logical unit migration instruction program 133 is instructed, via the input device 23 of the management computer 3, to execute data migration from the migration source storage apparatus 4A to the migration destination storage apparatus 4B, it starts the data migration control process shown in FIG. 23 and processes steps SP130 to SP133 in the same manner as steps SP10 to SP13 of the data migration control process according to the first embodiment described above with reference to FIG. 10.
- Next, the logical unit migration instruction program 133 instructs the host computer 2 to change the state of the path PT1 corresponding to the migration source logical unit 72A to the “Standby” state (hereinafter, this instruction is referred to as the Standby instruction) (SP134).
- In accordance with this Standby instruction, the host computer 2 changes the path status 144 corresponding to the path PT1 in the corresponding path management entry 140 (FIG. 22) of the path management table 132 (FIG. 2) stored in the memory 11 (FIG. 1) to “Standby”.
- The host computer 2 also notifies the migration source storage apparatus 4A that the path PT1 connected to the migration source logical unit 72A has been placed in the Standby state, by sending the Set Target Port Groups command defined in the SCSI standard to the migration source storage apparatus 4A.
- the state of the path PT1 corresponding to the migration source logical unit 72A transitions to the Standby state.
- When the host computer 2 receives a response to this notification from the migration source storage apparatus 4A, it transmits a path state update process completion notification to the management computer 3.
- the alternate path program 131 uses only the path PT2 corresponding to the migration destination logical unit 72B when processing the read request or write request for the logical volume VOL.
- However, since the path PT1 corresponding to the migration source logical unit 72A remains as an alternate path of the logical volume VOL, the alternate path program 131 can return to the state of using the path PT1 corresponding to the migration source logical unit 72A without stopping the data input/output of the application program 52 (FIG. 2).
- The alternate path program 131 can return to the state of using the path PT1 until, in accordance with the alternate path deletion instruction issued from the management computer 3 in step SP138 described later, the path PT1 connecting the logical volume VOL and the migration source logical unit 72A is deleted in the host computer 2. Cases in which the alternate path program 131 returns to the state of using the path PT1 include, for example, a failure occurring on the path PT2 connected to the migration destination logical unit 72B, or the system administrator inputting to the management computer 3 an instruction to stop the data migration between the migration source storage apparatus 4A and the migration destination storage apparatus 4B.
- When the logical unit migration instruction program 133 receives the path state update process completion notification, it instructs the migration destination storage apparatus 4B to set the read cache mode of the logical device 71B associated with the migration destination logical unit 72B to “on” (hereinafter, this instruction is referred to as the read cache mode on instruction) (SP135).
- In accordance with this read cache mode on instruction, the migration destination storage apparatus 4B executes read cache mode on processing to change the read cache mode flag of the logical device management entry 111 (FIG. 7) corresponding to the logical device 71B in the logical device management table 101 (FIG. 7) to “on”. When the read cache mode on processing is completed, the migration destination storage apparatus 4B transmits a read cache mode on process completion notification to the management computer 3.
- Note that the write cache mode of the corresponding logical device 71B is not set to “on” in step SP135. The migration source logical unit 72A therefore continues to hold the same latest data as the migration destination logical unit 72B, so the alternate path program 131 can return to the state of using the path PT1 connected to the migration source logical unit 72A while data consistency is ensured.
- When the logical unit migration instruction program 133 receives the read cache mode on process completion notification, it sequentially issues a logical device creation instruction and a data copy instruction to the migration destination storage apparatus 4B, as in the first embodiment (see steps SP16 and SP17 in FIG. 10), thereby creating the logical device 71B in the migration destination storage apparatus 4B and copying (migrating) the data stored in the corresponding logical device 71A in the migration source storage apparatus 4A to that logical device 71B.
- Next, the logical unit migration instruction program 133 instructs the host computer 2 to delete the path PT1 (FIG. 8) to the migration source logical unit 72A from the alternate paths of the logical volume VOL (this instruction is the alternate path deletion instruction) (SP138).
- the host computer 2 executes an alternate path deletion process for deleting the path PT1 to the migration source logical unit 72A from the alternate path of the logical volume VOL in accordance with this alternate path deletion instruction.
- the host computer 2 transmits an alternate path deletion process completion notification to the management computer 3.
- When the logical unit migration instruction program 133 receives the alternate path deletion process completion notification, it gives the migration destination storage apparatus 4B a write cache mode on instruction, instructing it to set the write cache mode of the logical device 71B associated with the migration destination logical unit 72B to “on” (SP139).
- Upon receiving this write cache mode on instruction, the migration destination storage apparatus 4B executes write cache mode on setting processing to set the write cache mode associated with the logical device 71B to “on”. When the migration destination storage apparatus 4B completes the write cache mode on setting processing, it transmits a write cache mode on setting process completion notification to the management computer 3.
- When the logical unit migration instruction program 133 receives the write cache mode on setting process completion notification, it gives the migration destination storage apparatus 4B an instruction (the virtual device replacement instruction) to replace each new virtual device 70BX, associated with a temporary new logical device 71BX, with the corresponding virtual device 70B associated with the corresponding migration destination logical unit 72B (SP140). The processing performed by the migration destination storage apparatus 4B upon receiving this virtual device replacement instruction is the same as that described above for step SP18 in FIG. 10, and its description is therefore omitted here.
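The ordering of the second-embodiment control sequence of FIG. 23 can be summarized as follows. The step names are paraphrases for illustration, and `can_fail_back` is a hypothetical helper capturing the property described above: the host can fall back to path PT1 until the alternate path deletion of step SP138 has been performed.

```python
# Order of operations in the second-embodiment data migration control (FIG. 23),
# paraphrased; the strings are illustrative labels, not identifiers from the patent.
MIGRATION_STEPS = [
    "add alternate path PT2",      # SP130-SP133 (same as SP10-SP13 in FIG. 10)
    "set path PT1 to Standby",     # SP134: Standby instruction
    "set read cache mode on",      # SP135 (the write cache mode stays off here)
    "create logical device",       # logical device creation instruction
    "copy data from 71A",          # data copy instruction
    "delete alternate path PT1",   # SP138: issued only after the copy completes
    "set write cache mode on",     # SP139
    "replace virtual devices",     # SP140
]

def can_fail_back(completed_steps):
    """The host can fall back to path PT1 until PT1 has been deleted (SP138)."""
    return "delete alternate path PT1" not in completed_steps
```

Note that in this ordering the data copy precedes the path deletion, which is precisely what distinguishes the second embodiment from the first and gives it its fail-back property.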
- As described above, in this computer system 130, as in the computer system 1 according to the first embodiment, the data migration from the migration source storage apparatus 4A to the migration destination storage apparatus 4B can be performed without stopping the exchange of data between the host computer 2 and the storage apparatus (the migration source storage apparatus 4A or the migration destination storage apparatus 4B).
- In addition, in this computer system 130, while the data stored in the logical device 71A in the migration source storage apparatus 4A is being copied to the logical device 71B, which virtualizes the logical device 71A in the migration destination storage apparatus 4B, new data is also stored in the migration source logical unit 72A, and the host computer 2 deletes the path PT1 connected to the migration source logical unit 72A only after the data copy to the logical device 71B in the migration destination storage apparatus 4B is completed. Therefore, when a failure occurs in the migration destination logical unit 72B during the copy processing, the path can be switched back immediately to the path PT1 connected to the migration source logical unit 72A without stopping the data input/output of the host computer 2.
- However, the present invention is not limited to this: these virtual devices 70A, 70B and logical devices 71A, 71B are not necessarily required, and one or both of the virtual device 70A and the logical device 71A may be omitted.
- Alternatively, another intermediate storage tier may be provided.
- Furthermore, in the above embodiments, when the data stored in the migration source logical unit 72A is migrated to the migration destination logical unit 72B, a second intermediate storage tier associated with the storage device 30B is created in the second storage apparatus 4B so as to correspond to the one or more first intermediate storage tiers that associate the migration source logical unit 72A with the storage device 30A. The data may then be copied from the migration source logical unit 72A to the storage device 30B of the second storage apparatus 4B via the first and second intermediate storage tiers, after which part or all of the first intermediate storage tier and the second intermediate storage tier may be exchanged.
Abstract
Description
(1) First Embodiment
(1-1) Configuration of the Computer System
In FIG. 1, reference numeral 1 denotes the computer system according to this embodiment as a whole. This computer system 1 comprises a host computer 2, a management computer 3, two storage apparatuses 4A and 4B, a SAN (Storage Area Network) 5, and a LAN (Local Area Network) 6. The host computer 2 is connected to each of the storage apparatuses 4A and 4B via the SAN 5, and the management computer 3 is connected to the host computer 2 and to each of the storage apparatuses 4A and 4B via the LAN 6.
(1-2) Data Migration Processing in This Computer System
(1-2-1) Overview of the Data Migration Processing in This Computer System
Next, an overview will be given of the data migration processing executed in the computer system 1 when the migration source storage apparatus 4A is replaced with the migration destination storage apparatus 4B, in which the data stored in the migration source storage apparatus 4A is migrated to the migration destination storage apparatus 4B.
(1-2-2) Specific Processing of Each Program
Next, with reference to FIGS. 10 to 21, the contents of the various processes related to the data migration processing according to this embodiment will be described in more detail. In the following, the subject of each process is described as a “program”, but it goes without saying that in practice the CPU 10, 20, 40A, or 40B of the host computer 2, the management computer 3, the migration source storage apparatus 4A, or the migration destination storage apparatus 4B executes the process based on that program.
(1-2-2-1) Data Migration Control Process
FIG. 10 shows the processing procedure of the data migration control process executed, in relation to the data migration processing according to this embodiment described above, by the logical unit migration instruction program 60 (FIG. 3) stored in the memory 21 of the management computer 3.
(1-2-2-2) External Volume Setting Process
FIG. 11 shows the processing procedure of the above-described external volume setting process executed by the storage tier management program 105 of the migration destination storage apparatus 4B upon receiving the external volume setting instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP10 of the data migration control process (FIG. 10).
(1-2-2-3) Inquiry Information Setting Process
Meanwhile, FIG. 12 shows the processing procedure of the Inquiry information setting process executed by the storage tier management program 105 of the migration destination storage apparatus 4B upon receiving the Inquiry information setting instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP11 of the data migration control process (FIG. 10).
(1-2-2-4) Cache Mode Off Process
FIG. 13 shows the processing procedure of the cache mode off process executed by the storage tier management program 105 of the migration destination storage apparatus 4B upon receiving the cache mode off instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP12 of the data migration control process (FIG. 10).
(1-2-2-5) Alternate Path Addition Process
FIG. 14 shows the processing procedure of the alternate path addition process executed by the alternate path program 51 (FIG. 2) of the host computer 2 upon receiving the alternate path addition instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP13 of the data migration control process (FIG. 10).
(1-2-2-6) Alternate Path Deletion Process
FIG. 15 shows the processing procedure of the alternate path deletion process executed by the alternate path program 51 of the host computer 2 upon receiving the alternate path deletion instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP14 of the data migration control process (FIG. 10).
(1-2-2-7) Cache Mode On Process
FIG. 16 shows the processing procedure of the cache mode on process executed by the storage tier management program 105 of the migration destination storage apparatus 4B upon receiving the cache mode on instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP15 of the data migration control process (FIG. 10).
(1-2-2-8) Logical Device Creation Process
FIG. 17 shows the processing procedure of the logical device creation process executed by the storage tier management program 105 of the migration destination storage apparatus 4B upon receiving the logical device creation instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP16 of the data migration control process (FIG. 10).
(1-2-2-9) Logical Device Copy Process
FIG. 18 shows the processing procedure of the logical device copy process executed by the logical device copy program 106 (FIG. 7) of the migration destination storage apparatus 4B upon receiving the logical device copy instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP17 of the data migration control process (FIG. 10).
(1-2-2-10) Virtual Device Replacement Process
FIG. 19 shows the processing procedure of the virtual device replacement process executed by the storage tier management program 105 of the migration destination storage apparatus 4B upon receiving the virtual device replacement instruction transmitted from the logical unit migration instruction program 60 of the management computer 3 in step SP18 of the data migration control process (FIG. 10).
(1-3) Input/Output Processing by the Migration Destination Storage Apparatus
Next, the contents of the read processing and the write processing in the migration destination storage apparatus 4B will be described.
(1-3-1) Read Processing
FIG. 20 shows the processing procedure of the read processing executed when the migration destination storage apparatus 4B receives a read request from the host computer 2.
(1-3-2) Write Processing
FIG. 21 shows the processing procedure of the write processing executed when the migration destination storage apparatus 4B receives a write request and write data from the host computer 2.
(1-4) Effects of This Embodiment
As described above, in the computer system 1 according to this embodiment, the migration source logical unit 72A of the migration source storage apparatus 4A is mapped to the migration destination logical unit 72B of the migration destination storage apparatus 4B, and the Inquiry information of the migration source logical unit 72A is set in the migration destination logical unit 72B; meanwhile, in the host computer 2, the path PT2 (FIG. 8) from the logical volume VOL (FIG. 8) to the migration destination logical unit 72B is added and the path PT1 (FIG. 8) from that logical volume VOL to the migration source logical unit 72A is deleted. Data copy is then executed between the logical device 71A associated with the migration source logical unit 72A and the logical device 71B associated with the migration destination logical unit 72B, thereby performing the data migration between the migration source storage apparatus 4A and the migration destination storage apparatus 4B.
When executing this data migration, the migration source storage apparatus 4A requires no special function, and the migration can be performed without stopping the exchange of data between the host computer 2 and the storage apparatus (the migration source storage apparatus 4A or the migration destination storage apparatus 4B). A computer system that facilitates the work of replacing storage apparatuses can thus be realized.
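The first-embodiment sequence summarized above (FIG. 10, steps SP10-SP18) can be laid out as an ordered list to make its non-disruptive property explicit. The step labels are paraphrases for illustration, and `host_io_never_stops` is a hypothetical helper, not anything defined in the patent.

```python
# Order of operations in the first-embodiment data migration control (FIG. 10),
# paraphrased; the strings are illustrative labels, not identifiers from the patent.
FIRST_EMBODIMENT_STEPS = [
    "map migration source LU 72A as external volume",  # SP10: external volume setting
    "set Inquiry information of 72A on 72B",           # SP11
    "set cache mode off",                              # SP12
    "add alternate path PT2",                          # SP13
    "delete alternate path PT1",                       # SP14: before the data copy
    "set cache mode on",                               # SP15
    "create logical device 71BX",                      # SP16
    "copy data from 71A to 71BX",                      # SP17
    "replace virtual devices",                         # SP18
]

def host_io_never_stops(steps):
    """At every point in the sequence at least one path (PT1 or PT2) is attached,
    because PT2 is added (SP13) before PT1 is deleted (SP14)."""
    return steps.index("add alternate path PT2") < steps.index("delete alternate path PT1")
```

Comparing this ordering with the second embodiment highlights the difference: here PT1 is deleted before the data copy, whereas the second embodiment keeps PT1 (in Standby) until the copy completes.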
(2) Second Embodiment
In FIG. 1, reference numeral 130 denotes the computer system according to the second embodiment. This computer system 130 is configured in the same manner as the computer system 1 according to the first embodiment, except that part of the data migration processing differs.
(3) Other Embodiments
Note that the above embodiments describe a case in which the virtual devices 70A, 70B and the logical devices 71A, 71B are provided as intermediate storage tiers associating the migration source logical unit 72A with the storage device 30A and the migration destination logical unit 72B with the storage device 30B, respectively. However, the present invention is not limited to this: these virtual devices 70A, 70B and logical devices 71A, 71B are not necessarily required, and one or both of the virtual device 70A and the logical device 71A may be omitted.
Claims (14)
- A computer system comprising:
a computer;
a first storage apparatus in which a first storage device is mounted, the first storage apparatus providing a storage area of the first storage device to the computer as a first logical unit; and
a second storage apparatus in which a second storage device is mounted,
wherein the second storage apparatus virtualizes each first logical unit in the first storage apparatus and provides it to the computer as a second logical unit, collects configuration information of each first logical unit from the first storage apparatus, and sets the collected configuration information of each first logical unit in the corresponding second logical unit,
the computer adds a path to the second logical unit to the alternate path targets and deletes the path to the first logical unit from the alternate path targets, and
the second storage apparatus copies the data stored in the first logical unit of the first storage apparatus to a storage area provided by the second storage device and associates that storage area with the second logical unit.
- The computer system according to claim 1, wherein the second storage apparatus comprises a cache memory that temporarily stores data read from and written to the second logical unit, and has, as an operation mode, a cache mode in which, in response to a read request or a write request from the computer, the read-target or write-target data is read from or written to the cache memory,
sets the cache mode for read requests and write requests off after setting the configuration information of each first logical unit collected from the first storage apparatus in the corresponding second logical unit, and
sets the cache mode for read requests and write requests on after the computer has added the path to the second logical unit and deleted the path to the first logical unit.
- The computer system according to claim 1, wherein the second storage apparatus is provided with a first intermediate storage tier that associates the first logical unit with the second logical unit, and
the second storage apparatus, when copying the data stored in the first logical unit to the storage area provided by the second storage device,
creates a second intermediate storage tier associated with the second storage device so as to correspond to the first intermediate storage tier,
copies the data from the first logical unit to the second storage device via the first intermediate storage tier and the second intermediate storage tier, and
exchanges part or all of the first intermediate storage tier with the second intermediate storage tier.
- The computer system according to claim 1, further comprising a management computer that manages data migration between the first and second storage apparatuses, wherein
the second storage apparatus, in accordance with instructions from the management computer, virtualizes each first logical unit in the first storage apparatus and provides it to the computer as a second logical unit, collects configuration information of each first logical unit from the first storage apparatus, and sets the collected configuration information of each first logical unit in the corresponding second logical unit,
the computer, in accordance with instructions from the management computer, adds the path to the second logical unit to the alternate path targets and deletes the path to the first logical unit from the alternate path targets, and
the second storage apparatus, in accordance with instructions from the management computer, copies the data stored in the first logical unit of the first storage apparatus to the storage area provided by the second storage device and associates that storage area with the second logical unit.
- The computer system according to claim 1, wherein, in the second storage apparatus,
when the cache mode of the cache memory is off, a read request from the computer is converted into a read request to the first storage apparatus and the first storage apparatus processes that read request, while for a write request from the computer the data is stored in both the second storage apparatus and the first storage apparatus, and
when the cache mode of the cache memory is on, a read request from the computer is converted into a read request to the first storage apparatus and processed by the first storage apparatus if it targets data not yet migrated from the first storage apparatus to the second storage apparatus, whereas the second storage apparatus processes the read request if it targets data already stored in the second storage apparatus, and for a write request from the computer the data is stored only in the second storage apparatus.
- The computer system according to claim 1, wherein the computer,
after adding the path to the second logical unit, transitions the state of the path connected to the first logical unit to a standby state, and
deletes the path connected to the first logical unit after the copying of data between the first and second storage apparatuses is completed.
- The computer system according to claim 1, wherein the computer,
when a failure occurs on the path to the second logical unit, returns the path connected to the first logical unit in the standby state to an active state and switches to a state in which the path to the first logical unit is used.
- A data migration method for migrating data from a first storage apparatus to a second storage apparatus in a computer system having a computer, a first storage apparatus in which a first storage device is mounted and which provides a storage area of the first storage device to the computer as a first logical unit, and a second storage apparatus in which a second storage device is mounted, the method comprising:
a first step in which the second storage apparatus virtualizes each first logical unit in the first storage apparatus and provides it to the computer as a second logical unit, collects configuration information of each first logical unit from the first storage apparatus, and sets the collected configuration information of each first logical unit in the corresponding second logical unit; and
a second step in which the computer adds a path to the second logical unit to the alternate path targets and deletes the path to the first logical unit from the alternate path targets, and the second storage apparatus copies the data stored in the first logical unit of the first storage apparatus to a storage area provided by the second storage device and associates that storage area with the second logical unit.
- The data migration method according to claim 8, wherein the second storage apparatus comprises a cache memory that temporarily stores data read from and written to the second logical unit, and has, as an operation mode, a cache mode in which, in response to a read request or a write request from the computer, the read-target or write-target data is read from or written to the cache memory,
in the first step, the second storage apparatus sets the cache mode for read requests and write requests off after setting the configuration information of each first logical unit collected from the first storage apparatus in the corresponding second logical unit, and
in the second step, the second storage apparatus sets the cache mode for read requests and write requests on after the computer has added the path to the second logical unit and deleted the path to the first logical unit.
- The data migration method according to claim 8, wherein the second storage apparatus is provided with a first intermediate storage tier that associates the first logical unit with the second logical unit, and
in the second step, the second storage apparatus, when copying the data stored in the first logical unit to the storage area provided by the second storage device,
creates a second intermediate storage tier associated with the second storage device so as to correspond to the first intermediate storage tier,
copies the data from the first logical unit to the second storage device via the first intermediate storage tier and the second intermediate storage tier, and
exchanges part or all of the first intermediate storage tier with the second intermediate storage tier.
- The data migration method according to claim 8, wherein the computer system further comprises a management computer that manages data migration between the first and second storage apparatuses,
in the first step, the second storage apparatus, in accordance with instructions from the management computer, virtualizes each first logical unit in the first storage apparatus and provides it to the computer as a second logical unit, collects configuration information of each first logical unit from the first storage apparatus, and sets the collected configuration information of each first logical unit in the corresponding second logical unit, and
in the second step, the computer, in accordance with instructions from the management computer, adds the path to the second logical unit to the alternate path targets and deletes the path to the first logical unit from the alternate path targets, and the second storage apparatus, in accordance with instructions from the management computer, copies the data stored in the first logical unit of the first storage apparatus to the storage area provided by the second storage device and associates that storage area with the second logical unit.
- The data migration method according to claim 8, wherein, in the second storage apparatus,
in the first step, when the cache mode of the cache memory is off, a read request from the computer is converted into a read request to the first storage apparatus and the first storage apparatus processes that read request, while for a write request from the computer the data is stored in both the second storage apparatus and the first storage apparatus, and
in the second step, when the cache mode of the cache memory is on, a read request from the computer is converted into a read request to the first storage apparatus and processed by the first storage apparatus if it targets data not yet migrated from the first storage apparatus to the second storage apparatus, whereas the second storage apparatus processes the read request if it targets data already stored in the second storage apparatus, and for a write request from the computer the data is stored only in the second storage apparatus.
- The data migration method according to claim 8, wherein, in the second step, the computer,
after adding the path to the second logical unit, transitions the state of the path connected to the first logical unit to a standby state, and
deletes the path connected to the first logical unit after the copying of data between the first and second storage apparatuses is completed.
- The data migration method according to claim 8, wherein, in the second step, the computer,
when a failure occurs on the path to the second logical unit, returns the path connected to the first logical unit in the standby state to an active state and switches to a state in which the path to the first logical unit is used.
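The overall sequence recited in the claims (virtualize the source logical unit, inherit its configuration, switch the alternate path, toggle the cache mode, copy the data, and fall back on path failure) can be sketched as a small state simulation. This is an illustrative model only, not the claimed apparatus; every class, attribute, and method name here is hypothetical.

```python
# Hypothetical simulation of the two-step migration flow in the claims:
# first step = virtualize and inherit configuration with the cache off;
# second step = path switch, cache on, data copy, old-path deletion.

class MigrationDemo:
    def __init__(self):
        self.paths = {"LU1": "active"}   # alternate-path table on the computer
        self.cache_mode = "on"
        self.lu2_config = None
        self.copied = False

    def first_step(self, lu1_config):
        # Virtualize LU1 as LU2 and set LU1's collected configuration
        # on LU2, then turn the cache mode off so reads and writes pass
        # through to the source storage apparatus.
        self.lu2_config = dict(lu1_config)
        self.cache_mode = "off"

    def second_step(self):
        self.paths["LU2"] = "active"     # add path to LU2 (alternate path target)
        self.paths["LU1"] = "standby"    # old path goes to standby (claim 6)
        self.cache_mode = "on"           # cache back on after the path switch
        self.copied = True               # copy LU1 data to LU2's storage area
        del self.paths["LU1"]            # delete old path once the copy completes

    def fail_back(self):
        # Claim 7: if the LU2 path fails while the standby path still
        # exists, return it to the active state and use it again.
        if "LU1" in self.paths:
            self.paths["LU1"] = "active"

demo = MigrationDemo()
demo.first_step({"serial": "S1", "lun": 0})
demo.second_step()
print(demo.paths, demo.cache_mode, demo.copied)
```

In this compressed model the copy and the path deletion happen in one call; in the claimed method the old path stays in standby for the whole copy window, which is exactly what makes the fail-back of claims 7 and 14 possible.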
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012527471A JP5603941B2 (ja) | 2010-08-06 | 2010-08-06 | Computer system and data migration method |
PCT/JP2010/004982 WO2012017493A1 (ja) | 2010-08-06 | 2010-08-06 | Computer system and data migration method |
US12/988,523 US8443160B2 (en) | 2010-08-06 | 2010-08-06 | Computer system and data migration method |
US13/892,349 US8892840B2 (en) | 2010-08-06 | 2013-05-13 | Computer system and data migration method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2010/004982 WO2012017493A1 (ja) | 2010-08-06 | 2010-08-06 | Computer system and data migration method |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/988,523 A-371-Of-International US8443160B2 (en) | 2010-08-06 | 2010-08-06 | Computer system and data migration method |
US13/892,349 Continuation US8892840B2 (en) | 2010-08-06 | 2013-05-13 | Computer system and data migration method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012017493A1 true WO2012017493A1 (ja) | 2012-02-09 |
Family
ID=45556960
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/004982 WO2012017493A1 (ja) | 2010-08-06 | 2010-08-06 | Computer system and data migration method |
Country Status (3)
Country | Link |
---|---|
US (2) | US8443160B2 (ja) |
JP (1) | JP5603941B2 (ja) |
WO (1) | WO2012017493A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013210924A (ja) * | 2012-03-30 | 2013-10-10 | Nec Corp | Virtualization system, storage apparatus, storage data migration method, and storage data migration program |
JP2014071534A (ja) * | 2012-09-27 | 2014-04-21 | Fujitsu Ltd | Storage apparatus, setting method, and setting program |
WO2015025358A1 (ja) * | 2013-08-20 | 2015-02-26 | Hitachi, Ltd. | Storage system and control method for storage system |
US10503440B2 (en) | 2015-01-21 | 2019-12-10 | Hitachi, Ltd. | Computer system, and data migration method in computer system |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8112607B2 (en) * | 2009-05-07 | 2012-02-07 | Sap Ag | Method and system for managing large write-once tables in shadow page databases |
CN102387175A (zh) * | 2010-08-31 | 2012-03-21 | 国际商业机器公司 | 一种存储系统迁移的方法和系统 |
US11061597B2 (en) * | 2010-11-09 | 2021-07-13 | Pure Storage, Inc. | Supporting live migrations and re-balancing with a virtual storage unit |
WO2013118188A1 (en) * | 2012-02-10 | 2013-08-15 | Hitachi, Ltd. | Storage system and method thereof for migrating data with cache bypass |
JP6005446B2 (ja) * | 2012-08-31 | 2016-10-12 | Fujitsu Ltd | Storage system, virtualization control apparatus, information processing apparatus, and storage system control method |
JP6135128B2 (ja) * | 2012-12-28 | 2017-05-31 | Fujitsu Ltd | Information processing system, storage device, information processing apparatus, data replication method, and data replication program |
US10031703B1 (en) * | 2013-12-31 | 2018-07-24 | Emc Corporation | Extent-based tiering for virtual storage using full LUNs |
US10860220B2 (en) * | 2015-01-07 | 2020-12-08 | Microsoft Technology Licensing, Llc | Method and system for transferring data between storage systems |
WO2016109893A1 (en) * | 2015-01-07 | 2016-07-14 | Mover Inc. | Method and system for transferring data between storage systems |
CN114528022A (zh) * | 2015-04-24 | 2022-05-24 | 优创半导体科技有限公司 | 实现虚拟地址的预转换的计算机处理器 |
JP6343716B2 (ja) | 2015-06-24 | 2018-06-13 | Hitachi, Ltd. | Computer system and storage control method |
US10306005B1 (en) * | 2015-09-30 | 2019-05-28 | EMC IP Holding Company LLC | Data retrieval system and method |
US10101940B1 (en) * | 2015-09-30 | 2018-10-16 | EMC IP Holding Company LLC | Data retrieval system and method |
US10402100B2 (en) * | 2016-03-23 | 2019-09-03 | Netapp Inc. | Techniques for path optimization in storage networks |
CN107506016B (zh) | 2016-06-14 | 2020-04-21 | 伊姆西Ip控股有限责任公司 | 存储设备和对存储设备供电的方法 |
JP6955159B2 (ja) * | 2017-11-21 | 2021-10-27 | Fujitsu Ltd | Storage system, storage control apparatus, and program |
US10768844B2 (en) * | 2018-05-15 | 2020-09-08 | International Business Machines Corporation | Internal striping inside a single device |
JP7193732B2 (ja) * | 2019-04-08 | 2022-12-21 | Fujitsu Ltd | Management apparatus, information processing system, and management program |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007018455A (ja) * | 2005-07-11 | 2007-01-25 | Hitachi Ltd | Data migration method or data migration system |
JP2007310495A (ja) * | 2006-05-16 | 2007-11-29 | Hitachi Ltd | Computer system |
JP2008176627A (ja) * | 2007-01-19 | 2008-07-31 | Hitachi Ltd | Storage system or storage migration method |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3918394B2 (ja) * | 2000-03-03 | 2007-05-23 | Hitachi, Ltd. | Data migration method |
US6766430B2 (en) | 2000-07-06 | 2004-07-20 | Hitachi, Ltd. | Data reallocation among storage systems |
JP4115093B2 (ja) | 2000-07-06 | 2008-07-09 | Hitachi, Ltd. | Computer system |
JP4183443B2 (ja) | 2002-05-27 | 2008-11-19 | Hitachi, Ltd. | Data relocation method and apparatus |
JP2004013215A (ja) | 2002-06-03 | 2004-01-15 | Hitachi Ltd | Storage system, storage subsystem, and information processing system including them |
JP2004102374A (ja) | 2002-09-05 | 2004-04-02 | Hitachi Ltd | Information processing system having a data migration device |
US7263593B2 (en) * | 2002-11-25 | 2007-08-28 | Hitachi, Ltd. | Virtualization controller and data transfer control method |
JP2004220450A (ja) | 2003-01-16 | 2004-08-05 | Hitachi Ltd | Storage apparatus, method for installing it, and installation program |
US7093088B1 (en) * | 2003-04-23 | 2006-08-15 | Emc Corporation | Method and apparatus for undoing a data migration in a computer system |
JP2005018193A (ja) * | 2003-06-24 | 2005-01-20 | Hitachi Ltd | Interface command control method for disk device, and computer system |
US7149859B2 (en) * | 2004-03-01 | 2006-12-12 | Hitachi, Ltd. | Method and apparatus for data migration with the efficient use of old assets |
US7343467B2 (en) * | 2004-12-20 | 2008-03-11 | Emc Corporation | Method to perform parallel data migration in a clustered storage environment |
JP4852298B2 (ja) | 2005-10-28 | 2012-01-11 | Hitachi, Ltd. | Method of taking over information identifying a virtual volume, and storage system using the method |
JP4643456B2 (ja) | 2006-01-13 | 2011-03-02 | Hitachi, Ltd. | Access setting method |
JP2008047142A (ja) | 2007-09-18 | 2008-02-28 | Hitachi Ltd | Information processing system and storage control method for the information processing system |
US20090089498A1 (en) * | 2007-10-02 | 2009-04-02 | Michael Cameron Hay | Transparently migrating ongoing I/O to virtualized storage |
US20100070722A1 (en) | 2008-09-16 | 2010-03-18 | Toshio Otani | Method and apparatus for storage migration |
-
2010
- 2010-08-06 JP JP2012527471A patent/JP5603941B2/ja active Active
- 2010-08-06 WO PCT/JP2010/004982 patent/WO2012017493A1/ja active Application Filing
- 2010-08-06 US US12/988,523 patent/US8443160B2/en active Active
-
2013
- 2013-05-13 US US13/892,349 patent/US8892840B2/en active Active
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013210924A (ja) * | 2012-03-30 | 2013-10-10 | Nec Corp | Virtualization system, storage apparatus, storage data migration method, and storage data migration program |
JP2014071534A (ja) * | 2012-09-27 | 2014-04-21 | Fujitsu Ltd | Storage apparatus, setting method, and setting program |
WO2015025358A1 (ja) * | 2013-08-20 | 2015-02-26 | Hitachi, Ltd. | Storage system and control method for storage system |
JP5948504B2 (ja) * | 2013-08-20 | 2016-07-06 | Hitachi, Ltd. | Storage system and control method for storage system |
GB2549242B (en) * | 2013-08-20 | 2020-10-28 | Hitachi Ltd | Storage system and control method for storage system |
US10503440B2 (en) | 2015-01-21 | 2019-12-10 | Hitachi, Ltd. | Computer system, and data migration method in computer system |
Also Published As
Publication number | Publication date |
---|---|
US8443160B2 (en) | 2013-05-14 |
US20120036330A1 (en) | 2012-02-09 |
JPWO2012017493A1 (ja) | 2013-09-19 |
JP5603941B2 (ja) | 2014-10-08 |
US20130254504A1 (en) | 2013-09-26 |
US8892840B2 (en) | 2014-11-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5603941B2 (ja) | Computer system and data migration method | |
US7558916B2 (en) | Storage system, data processing method and storage apparatus | |
JP4662548B2 (ja) | Snapshot management apparatus and method, and storage system | |
US9292211B2 (en) | Computer system and data migration method | |
US8645653B2 (en) | Data migration system and data migration method | |
JP5512833B2 (ja) | Storage system including a plurality of storage apparatuses having both a storage virtualization function and a capacity virtualization function | |
US8086808B2 (en) | Method and system for migration between physical and virtual systems | |
EP1837767B1 (en) | Storage system and data management method | |
JP5461216B2 (ja) | Method and apparatus for logical volume management | |
JP4990940B2 (ja) | Computer apparatus and path management method | |
WO2013046254A1 (en) | Management server and data migration method | |
JP2009116809A (ja) | Storage control apparatus, storage system, and virtual volume control method | |
JP2009048497A (ja) | Storage system having a function to change the data storage scheme using a pair of logical volumes | |
JP2009282800A (ja) | Storage apparatus and control method thereof | |
JP2006331158A (ja) | Storage system | |
JP2006031694A (ja) | Storage system having a primary mirror shadow | |
US10884622B2 (en) | Storage area network having fabric-attached storage drives, SAN agent-executing client devices, and SAN manager that manages logical volume without handling data transfer between client computing device and storage drive that provides drive volume of the logical volume | |
WO2010106694A1 (en) | Data backup system and data backup method | |
JP5715297B2 (ja) | Storage apparatus and control method thereof | |
US11740823B2 (en) | Storage system and storage control method | |
US10503440B2 (en) | Computer system, and data migration method in computer system | |
JP5355603B2 (ja) | Disk array apparatus and logical volume access method | |
WO2014087465A1 (ja) | Storage apparatus and storage apparatus migration method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 12988523 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10855585 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012527471 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 10855585 Country of ref document: EP Kind code of ref document: A1 |