US20130275690A1 - Storage system and operation method of storage system
- Publication number
- US20130275690A1 (application US 13/912,297)
- Authority
- US
- United States
- Prior art keywords
- storage device
- logical volume
- virtualization
- volume
- host
- Prior art date
- Legal status (the legal status is an assumption and is not a legal conclusion)
- Abandoned
Classifications
- All of the following classifications fall under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID; G06F3/0601—Interfaces specially adapted for storage systems:
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
- G06F3/0605—Improving or facilitating administration, e.g. storage management, by facilitating the interaction with a user or administrator
- G06F3/061—Improving I/O performance
- G06F3/0635—Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
- G06F3/0637—Permissions
- G06F3/0647—Migration mechanisms
- G06F3/0665—Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
Definitions
- the present invention relates to a storage system and an operation method of a storage system.
- This storage system is configured by including a storage device such as a disk array device.
- a storage device is configured by disposing a plurality of memory apparatuses in an array to provide a memory area based on RAID (Redundant Array of Inexpensive Disks).
- At least one or more logical volumes are formed on a physical memory area provided by the memory apparatus group, and this logical volume is provided to a host computer (hereinafter abbreviated as “host”). By transmitting a write command or read command, the host is able to write and read data into and from the logical volume.
- Data to be managed by companies and others is increasing daily.
- companies and others for example, equip the storage system with a new storage device to expand the storage system.
- Two methods can be considered for introducing a new storage device to the storage system.
- One method is to replace the old storage device with a new storage device.
- Another method is to make the old storage device and new storage device coexist.
- the present applicant has proposed technology of connecting a host and a first storage device and connecting the first storage device and a second storage device so that the first storage device will take over and process the access request from the host (Japanese Patent Laid-Open Publication No. 2004-005370).
- the first storage device will also receive and process commands targeting the second storage device. If necessary, the first storage device issues a command to the second storage device, receives the processing result thereof, and transmits this to the host.
- the performance of the storage system is improved by making the first storage device and second storage device coexist without wasting any memory resource. Nevertheless, even with this kind of reinforced storage system, the processing performance may deteriorate during the prolonged operation thereof.
- the first storage device may be replaced with a different high-performance storage device, or a separate first storage device may be added to the existing first storage device.
- the addition or replacement of the first storage device cannot be conducted as simply as the addition of a storage device described in the foregoing document. This is because the first storage device is serially connected to the second storage device and uses the memory resource of the second storage device, so the configuration of the storage system is already complicated. The first storage device cannot simply be added or replaced by focusing attention on the first storage device alone.
- the present invention was devised in view of the foregoing problems, and an object of the present invention is to provide a storage system and an operation method of a storage system configured by hierarchizing a plurality of storage devices for improving the processing performance thereof relatively easily. Another object of the present invention is to provide a storage system and an operation method of a storage system for improving the processing performance by enabling the shared use of one or a plurality of connection destination storage devices by a plurality of connection source storage devices. Other objects of the present invention will become clear from the detailed description of the preferred embodiments described later.
- the storage system has a plurality of connection source storage devices capable of respectively providing a logical volume to a host device; a connection destination storage device respectively connected to each of the connection source storage devices and having a separate logical volume; and a direction unit for directing the connection destination of the separate logical volume.
- each of the connection source storage devices is configured by respectively having: a management information memory unit for storing management information for managing the separate logical volume; and a control unit for connecting the logical volume and the separate logical volume via an intermediate volume based on the management information stored in the management information memory unit; wherein the connection destination of the separate logical volume can be switched among each of the connection source storage devices based on the designation from the direction unit.
- the logical volume of the connection source storage device can be connected to a separate logical volume of the connection destination storage device via an intermediate volume. This connection may be made based on the management information stored in the management information memory unit.
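- As a minimal sketch of this connection (all class and method names below are illustrative inventions, not part of the patent), host I/O against the logical volume can be pictured as being forwarded through the intermediate volume to the external volume that actually stores the data:

```python
# Sketch only: the patent describes this mapping conceptually, not as code.

class ExternalVolume:
    """Separate logical volume inside the connection destination storage device."""
    def __init__(self, volume_id):
        self.volume_id = volume_id   # unique within the storage system
        self.blocks = {}             # stands in for the physical memory area

class IntermediateVolume:
    """Virtual receiver that maps onto an external volume."""
    def __init__(self, external_volume):
        self.external_volume = external_volume

class LogicalVolume:
    """Access destination volume presented to the host device."""
    def __init__(self, intermediate_volume):
        self.intermediate_volume = intermediate_volume

    def write(self, block, data):
        # the host writes to the logical volume, but the data actually
        # lands in the external volume behind the intermediate volume
        self.intermediate_volume.external_volume.blocks[block] = data

    def read(self, block):
        return self.intermediate_volume.external_volume.blocks.get(block)

ext = ExternalVolume(volume_id=1)
lv = LogicalVolume(IntermediateVolume(ext))
lv.write(0, b"host data")
assert ext.blocks[0] == b"host data"   # stored externally, transparently
```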
- when focusing on the connection source storage device, the connection destination storage device is an external storage device positioned outside the connection source storage device, and the separate logical volume of the connection destination storage device is an external volume positioned outside the connection source storage device. Therefore, in the following explanation, for ease of understanding the present invention, the connection destination storage device may be referred to as an external storage device, and the separate logical volume may be referred to as an external volume, respectively.
- the host device issues a read command, write command and so on with the logical volume of the connection source storage device as the access target.
- when the connection source storage device receives an access request from the host device, it issues a prescribed command to the external volume connected to the logical volume of the access target, and reads and writes data from and into the external volume.
- the logical volume of the connection source storage device is an access destination volume to become the access target from the host device
- the external volume (separate logical volume) of the external storage device is the data storage destination volume for actually storing the data.
- the host device is not able to directly recognize the external volume, and the external volume is transparent to the host device.
- the direction unit designates to which logical volume of the connection source storage device the external volume should be connected. Based on this designation, the connection destination of the external volume is switched among the respective connection source storage devices. In other words, while an external volume is connected to a logical volume of one connection source storage device via an intermediate volume, if the direction unit designates the switch to the other connection source storage device, the external volume is reconnected to a logical volume of the other connection source storage device via an intermediate volume.
- each of the connection source storage devices may exclusively use one or a plurality of external volumes. Accordingly, for example, when there are numerous access requests to a specific external volume, such a high-load external volume is transferred to a separate connection source storage device in order to disperse the load, and the processing performance of the overall storage system can be improved thereby.
- connection destination of the separate logical volume is switchable among each of the connection source storage devices without stopping the access from the host device to the logical volume.
- the access destination of the host device is switched among each of the connection source storage devices according to the switching of the connection destination of the separate logical volume.
- the access destination of the host device will also be switched from one connection source storage device to the other connection source storage device.
- the management information is constituted by including first management information for specifying the separate logical volume, and second management information for managing the attribute of the separate logical volume; the first management information is retained by each of the connection source storage devices; and the second management information is retained by the connection source storage device of the switching destination selected as the connection destination of the separate logical volume.
- the management information for managing the separate logical volume has first management information and second management information, and the first management information is stored in each of the connection source storage devices, and the second management information is stored in the connection source storage device requiring such second management information.
- the first management information contains volume identifying information for specifying the separate logical volume in the storage system, usage authorization information for specifying the connection source storage device having usage authorization of the separate logical volume, and switching status information for showing whether the connection destination of the separate logical volume is being switched among each of the connection source storage devices; and the second management information contains a plurality of pieces of other attribute information relating to the separate logical volume.
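- Under the assumption that the field names simply mirror the description above (the patent specifies the contents of the management information, not its layout), the split between the two kinds of management information might be modeled as follows:

```python
from dataclasses import dataclass, field

@dataclass
class FirstManagementInfo:
    """Retained by every connection source storage device."""
    external_volume_id: int     # volume identifying information (system-wide)
    owner: int                  # usage authorization: which device may use it
    switching: bool = False     # switching status: is the connection moving?

@dataclass
class SecondManagementInfo:
    """Retained only by the device selected as the connection destination."""
    copy_status: str = ""                   # e.g., replication pair status
    difference_bitmap: bytes = b""          # difference management information
    other_attributes: dict = field(default_factory=dict)
```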
- the usage authorization information is set with the connection source storage device that becomes the switching source among each of the connection source storage devices, notified from the connection source storage device that becomes the switching source to the connection source storage device that becomes the switching destination, and the change of the usage authorization information is determined by the connection source storage device that becomes the switching source receiving the setting completion report from the connection source storage device that becomes the switching destination.
- a switching status flag is set while the connection destination of the separate logical volume is being switched from the connection source storage device that becomes the switching source to the connection source storage device that becomes the switching destination, and the switching status flag is reset when the connection destination of the separate logical volume is switched; while the switching status flag is being set, the connection source storage device that becomes the switching source destages unwritten data relating to the separate logical volume, and the connection source storage device that becomes the switching destination processes write data from the host device with an asynchronous system; and when the switching status flag is reset, the switching destination storage device destages the write data.
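- The hand-off described above can be sketched as the following sequence (device and method names are hypothetical; the point is the ordering of the flag, the destages, and the asynchronous buffering):

```python
class Volume:
    def __init__(self, vid):
        self.vid, self.blocks, self.switching = vid, {}, False

class Device:
    """Hypothetical connection source storage device."""
    def __init__(self, name):
        self.name, self.cache = name, {}    # cache holds unwritten (dirty) data

    def destage(self, vol):
        # write cached data into the volume's physical storage
        vol.blocks.update(self.cache.pop(vol.vid, {}))

def switch_connection(vol, source, dest):
    vol.switching = True                    # switching status flag is set
    source.destage(vol)                     # source flushes unwritten data
    # meanwhile dest accepts host writes, but only into cache (asynchronous)
    dest.cache.setdefault(vol.vid, {})[1] = b"write received during switch"
    # ... source reports destage completion to dest ...
    vol.switching = False                   # flag reset: switching finished
    dest.destage(vol)                       # dest may now destage its writes

v = Volume(1)
a, b = Device("storage 1"), Device("storage 2")
a.cache[1] = {0: b"old dirty data"}
switch_connection(v, a, b)
print(v.blocks)   # both the old dirty data and the in-flight write arrive
```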
- an asynchronous transfer mode is a mode for, in the case of writing data in a logical volume, reporting the completion of writing to the host device before writing such data in a physical memory apparatus.
- a synchronous transfer mode is a mode for, in the case of writing data in a logical volume, reporting the completion of writing to the host device after confirming that such data has been written in a physical memory apparatus.
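- The difference between the two modes is only in when completion is reported relative to the physical write; a sketch (the function names are invented for illustration):

```python
import threading

def write_sync(volume, block, data, report_completion):
    """Synchronous transfer mode: confirm the physical write, then report."""
    volume[block] = data        # written to the memory apparatus first
    report_completion()         # completion reported to the host afterwards

def write_async(volume, block, data, report_completion):
    """Asynchronous transfer mode: report before the physical write."""
    report_completion()         # host sees 'write complete' immediately
    t = threading.Thread(target=volume.__setitem__, args=(block, data))
    t.start(); t.join()         # the physical write happens later (joined
                                # here only so the demo ends deterministically)

vol = {}
write_sync(vol, 0, b"a", lambda: print("sync: reported after write"))
write_async(vol, 1, b"b", lambda: print("async: reported before write"))
```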
- connection source storage device that becomes the switching source among each of the connection source storage devices rejects the processing of access from the host device to the separate logical volume, and destages unwritten data relating to the separate logical volume.
- among the access requests from the host device, the connection source storage device that becomes the switching source rejects the access requests relating to the external volume to be transferred to the connection source storage device that becomes the switching destination.
- such a rejection may be made either actively (for instance, by returning an error response) or passively (for instance, by simply not responding).
- the connection source storage device that becomes the switching source destages unwritten data relating to such external volume to be transferred. As a result, the consistency of the data stored in such external volume can be maintained.
- connection source storage device that becomes the switching source issues a destage completion report to the connection source storage device that becomes the switching destination; and upon receiving the destage completion report, the connection source storage device that becomes the switching destination performs the processing of access from the host device to the separate logical volume.
- the dirty data before transfer (before switching) is written in a physical memory apparatus configuring the external volume of the transfer target to maintain the consistency of data.
- a monitoring unit is further provided for monitoring the load status relating to at least the connection source storage device that becomes the switching source among each of the connection source storage devices.
- connection source storage device that becomes the switching source and the connection source storage device that becomes the switching destination among each of the connection source storage devices are respectively selected based on the monitoring result of the monitoring unit.
- as the load status, for instance, input/output per second (IOPS), CPU usage rate, cache memory usage rate, data traffic and so on may be considered.
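- A toy selection policy built on such metrics (the policy and the metric choice are illustrative assumptions; the patent leaves the decision to the user or to an automatic disposition):

```python
def select_switch_pair(load_by_device):
    """Pick the switching source (busiest) and destination (least busy).

    load_by_device maps a device name to a load metric such as IOPS.
    """
    source = max(load_by_device, key=load_by_device.get)
    destination = min(load_by_device, key=load_by_device.get)
    return (source, destination) if source != destination else (None, None)

# e.g., IOPS observed by the monitoring unit
print(select_switch_pair({"storage 1": 12000, "storage 2": 3000}))
# -> ('storage 1', 'storage 2')
```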
- a management terminal to be connected to each of the connection source storage devices is further provided, wherein the direction unit and the monitoring unit are respectively provided to the management terminal.
- the storage system has a plurality of connection source storage devices to be used by at least one or more host devices, and at least one or more connection destination storage devices to be connected to each of the connection source storage devices, wherein the host device and each of the connection source storage devices are respectively connected via a first communication network, and each of the connection source storage devices and the connection destination storage device are connected via a second communication network separated from the first communication network
- connection destination storage device has a separate logical volume to be logically connected to a logical volume of each of the connection source storage devices.
- each of the connection source storage devices has a control unit for creating the logical volume and connecting the logical volume and the separate logical volume via an intermediate volume based on management information; and a memory used by the control unit and for storing the management information.
- the management terminal to be connected to each of the connection source storage devices has a monitoring unit for respectively monitoring the load status of each of the connection source storage devices, and a direction unit for respectively selecting the connection source storage device that becomes the switching source and the connection source storage device that becomes the switching destination among each of the connection source storage devices based on the monitoring result of the monitoring unit.
- the management terminal switches the connection destination of the separate logical volume from the connection source storage device selected as the switching source to the connection source storage device selected as the switching destination based on the designation from the direction unit;
- the management information is constituted by including first management information for specifying the separate logical volume, and second management information for managing the attribute of the separate logical volume, and the first management information is respectively stored in the connection source storage device selected as the switching source and the connection source storage device selected as the switching destination.
- the entirety of the second management information is stored in the connection source storage device selected as the switching source, and only the second management information relating to the separate logical volume in which the connection destination is switched is transferred from the connection source storage device selected as the switching source to the connection source storage device selected as the switching destination.
- the operation method of a storage system is a method of operating a storage system having a first connection source storage device and a second connection source storage device capable of respectively providing a logical volume to a host device via a first communication network, and a connection destination storage device connected to each of the first and second connection source storage devices via a second communication network, and comprises the following steps.
- the plurality of separate logical volumes are respectively connected to one or a plurality of logical volumes of the first connection source storage device via an intermediate volume of the first connection source storage device based on the management information for respectively connecting to a plurality of separate logical volumes of the connection destination storage device, and the first connection source storage device is made to process the access request from the host device.
- the second connection source storage device is connected to the host device via the first communication network, to the connection destination storage device via the second communication network, and to the first connection source storage device via a third communication network.
- in the first management information transfer step, information for respectively specifying the plurality of separate logical volumes among the management information of the first connection source storage device is transferred from the first connection source storage device to the second connection source storage device via the third communication network.
- a separate logical volume is selected to be transferred to the second connection source storage device among the plurality of separate logical volumes used by the first connection source storage device.
- in the second management information transfer step, attribute information relating to the separate logical volume selected as the transfer target among the management information of the first connection source storage device is transferred from the first connection source storage device to the second connection source storage device via the third communication network.
- the separate logical volume selected as the transfer target is connected to the logical volume of the second connection source storage device via an intermediate volume of the second connection source storage device based on the information acquired at the first management information transfer step and the second management information transfer step, the path information for the host device to access the logical volume of the second connection source storage device is set in the host device, and the second connection source storage device is made to process the access request from the host device.
- the third communication network may also be used in combination with either the first communication network or second communication network.
- the whole or a part of the means, functions and steps of the present invention may sometimes be configured as a computer program to be executed with a computer system.
- such a computer program, for instance, may be fixed in various storage mediums and distributed, or transmitted via a communication network.
- FIG. 1 is an explanatory diagram showing the overall concept of an embodiment of the present invention
- FIG. 2 is an explanatory diagram showing the schematic configuration of the storage system
- FIG. 3 is a block diagram showing the hardware configuration of the storage system
- FIG. 4 is an explanatory diagram showing the frame format of the memory configuration of the storage system
- FIG. 5 is an explanatory diagram showing the respective configurations of the management table and attribute table to be used by a first virtualization storage device
- FIG. 6 is an explanatory diagram showing the respective configurations of the management table and attribute table to be used by a second virtualization storage device
- FIG. 7 is an explanatory diagram showing the configuration of the path definition information and the situation of the host path being switched based on this path definition information
- FIG. 8 is a flowchart showing the processing of the virtualization storage devices acquiring information on the external storage device and creating a management table and the like
- FIG. 9 is an explanatory diagram showing the processing in the case of operating in the asynchronous transfer mode
- FIG. 10 is an explanatory diagram showing the processing in the case of operating in the synchronous transfer mode
- FIG. 11 is a flowchart showing the transfer designation processing to be performed using the management terminal
- FIG. 12 is an explanatory diagram showing a screen display example of the load status being monitored with the management terminal
- FIG. 13 is a flowchart showing the outline of the processing for newly adding the second virtualization storage device to the storage system and transferring the volume from the first virtualization storage device
- FIG. 14 is a flowchart showing the access processing to be executed with the first virtualization storage device, which is the transfer source
- FIG. 15 is a flowchart showing the access processing to be executed with the second virtualization storage device, which is the transfer destination
- FIG. 16 is a flowchart showing the outline of the processing for transferring the volume between a plurality of virtualization storage devices
- FIG. 17 is a flowchart showing the processing for the second virtualization storage device, which is the transfer destination, to connect with the external volume, which is the transfer target
- FIG. 18 is an explanatory diagram showing the frame format of the situation of operating the storage system with a plurality of virtualization storage devices
- FIG. 19 is a flowchart showing the transfer designation processing to be executed with the storage system according to the second embodiment
- FIG. 1 is an explanatory diagram of the configuration showing the overall schematic of an embodiment of the present invention.
- this storage system may be configured by having a plurality of virtualization storage devices 1 , 2 , a plurality of external storage devices 3 , a plurality of host devices (hereinafter referred to as a “host”) 4 , an upper level SAN (Storage Area Network) 5 , a lower level SAN 6 , a management terminal 7 , and a device-to-device LAN (Local Area Network) 8 .
- the virtualization storage device 1 , 2 corresponds to a “connection source storage device”
- the external storage device 3 corresponds to a “connection destination storage device”.
- the host 4 corresponds to a “host device”
- the upper level SAN 5 corresponds to a “first communication network”
- the lower level SAN 6 corresponds to a “second communication network”
- the management terminal 7 corresponds to a “management terminal”
- the device-to-device LAN 8 corresponds to a “third communication network”.
- the upper level SAN 5 and lower level SAN 6 may be configured as a FC_SAN (Fibre Channel_Storage Area Network) or IP_SAN (Internet Protocol_SAN), but it is not limited thereto, and, for instance, may also be configured as a LAN or WAN (Wide Area Network).
- the upper level SAN 5 is used for respectively connecting the respective hosts 4 and the respective virtualization storage devices 1 , 2 .
- the lower level SAN 6 is used for respectively connecting the respective virtualization storage devices 1 , 2 and the respective external storage device 3 .
- the upper level SAN 5 and lower level SAN 6 are separated, and the traffic or failure of one communication network will not directly influence the other communication network.
- the first virtualization storage device 1 is used for virtualizing a volume 3 A of the external storage device 3 and providing this to the host 4 .
- This first virtualization storage device 1 , for instance, has a control unit 1 A, a first management table 1 B, a second management table 1 C, a logical volume 1 D, and an intermediate volume 1 E.
- control unit 1 A corresponds to a “control unit”
- first management table 1 B corresponds to “first management information”
- second management table 1 C corresponds to “second management information”
- the logical volume 1 D corresponds to a “logical volume”
- the intermediate volume 1 E corresponds to an “intermediate volume”.
- the control unit 1 A controls the overall operation of the first virtualization storage device 1 .
- the control unit 1 A, for instance, creates a logical volume 1 D and provides this to the host 4 . Further, the control unit 1 A connects the logical volume 1 D and external volume 3 A via the intermediate volume 1 E by using the first management table 1 B and second management table 1 C. Moreover, the control unit 1 A transfers the whole or a part of the external volume 3 A under its own control to the second virtualization storage device 2 based on the designation from the management terminal 7 .
- the first management table 1 B is used for identifying the respective external volumes 3 A in the storage system and connecting a desired external volume 3 A to the logical volume 1 D.
- the second management table 1 C is used for managing other attribute information such as the copy status or difference management information (difference bitmap) of the respective external volumes 3 A.
- the second virtualization storage device 2 may be configured the same as the first virtualization storage device 1 .
- the second virtualization storage device 2 as with the first virtualization storage device 1 , is able to connect the whole or a part of the respective external volumes 3 A to the logical volume 2 D via the intermediate volume 2 E.
- the second virtualization storage device 2 as with the first virtualization storage device 1 , is able to provide the external volume 3 A to the host 4 as though it is one's own internal volume.
- the second virtualization storage device 2 may be configured by having a control unit 2 A, a first management table 2 B, a second management table 2 C, a logical volume 2 D and an intermediate volume 2 E.
- Each of these components 2 A to 2 E have the same configuration as each of the components 1 A to 1 E described with reference to the first virtualization storage device 1 , and the detailed description thereof is omitted.
- the size of the second management table 2 C is smaller than the size of the second management table 1 C of the first virtualization storage device 1 .
- When the first virtualization storage device 1 is already being used prior to the second virtualization storage device 2 being added to the storage system; that is, when the first virtualization storage device 1 is virtualizing and using all external volumes 3 A, the first virtualization storage device 1 has already obtained the attribute information of all external volumes 3 A. Under these circumstances, when the second virtualization storage device 2 is added to the storage system, and a part of the external volume 3 A is transferred from the first virtualization storage device 1 to the second virtualization storage device 2 , only the attribute information relating to such transferred external volume 3 A is copied from the second management table 1 C of the first virtualization storage device 1 to the second management table 2 C of the second virtualization storage device 2 .
- Each external storage device 3 has at least one or more external volumes 3 A.
- An external volume is a volume existing outside the respective virtualization storage devices 1 , 2 .
- Each external volume 3 A, for example, is provided on a physical memory area of one or a plurality of memory apparatuses.
- as the memory apparatus, for instance, a hard disk drive, optical disk drive, semiconductor memory drive, tape drive and so on may be considered.
- as the hard disk drive, for example, various disks such as a FC (Fibre Channel) disk, SAS (Serial Attached SCSI) disk and SATA (Serial AT Attachment) disk may be used.
- Each external volume 3 A is connected to one of the logical volumes 1 D, 2 D via the intermediate volume 1 E, 2 E, and provides a memory area to the virtualization storage devices 1 , 2 .
- the management terminal 7 is connected to both of the virtualization storage devices 1 , 2 via the device-to-device LAN 8 .
- the management terminal 7 , for example, is configured as a personal computer, portable information terminal (including portable phones) or the like, and has a monitoring unit 7 A.
- the monitoring unit 7 A respectively monitors the load status of the respective virtualization storage devices 1 , 2 , and is able to display the monitoring result on a terminal screen.
- as the load status, for instance, IOPS (input/output per second), CPU usage rate, cache memory usage rate and so on may be considered.
- a user such as a system administrator is able to comprehend the load status of the respective virtualization storage devices 1 , 2 based on the monitoring result of the monitoring unit 7 A, and thereby determine the disposition of the volumes.
- the volume disposition may also be automatically conducted based on the load status of the respective virtualization storage devices 1 , 2 .
- the user's decision to transfer the volume is notified to the respective virtualization storage devices 1 , 2 via the management terminal 7 .
- the operation method of the storage system according to the present embodiment is explained.
- the user introduces the first virtualization storage device 1 to the storage system, virtualizes the external volume 3 A of the respective external storage devices 3 with the first virtualization storage device 1 , and provides this to the respective hosts 4 .
- the user decides the introduction of the second virtualization storage device 2 .
- the user is able to decide the introduction of the second virtualization storage device 2 based on the monitoring result of the monitoring unit 7 A (S 0 ).
- the second virtualization storage device 2 is added to the storage system (S 1 ).
- the user or a corporate engineer selling the second virtualization storage device 2 respectively connects the second virtualization storage device 2 to the upper level SAN 5 and lower level SAN 6 (S 2 A, S 2 B). Further, the second virtualization storage device 2 is connected to the first virtualization storage device 1 via the device-to-device LAN 8 (S 3 ).
- contents of the first management table 1 B of the first virtualization storage device 1 are copied to the second virtualization storage device 2 (S 4 ). Thereby, the first management table 2 B is created in the second virtualization storage device 2 .
- the user selects the external volume 3 A to be transferred from the first virtualization storage device 1 to the second virtualization storage device 2 based on the monitoring result of the monitoring unit 7 A, and designates the transfer of the volume (S 5 ).
- the second virtualization storage device 2 connects the external volume 3 A designated by the management terminal 7 and the logical volume 2 D by using the first management table 2 B and second management table 2 C (S 7 ). And, the second virtualization storage device 2 sets information for making the host 4 recognize the logical volume 2 D, and the host 4 sets a path for accessing this logical volume 2 D (S 8 ).
- the data used by the host 4 is, in reality, stored in a prescribed external volume 3 A. Before the transfer of the volume, the host 4 is accessing a prescribed external volume 3 A from the logical volume 1 D of the first virtualization storage device 1 via the intermediate volume 1 E. The host 4 is totally unaware that such data is stored in a prescribed external volume 3 A.
- When transferring such prescribed external volume 3 A from the first virtualization storage device 1 to the second virtualization storage device 2 , the second virtualization storage device 2 connects such prescribed external volume 3 A to the logical volume 2 D via the intermediate volume 2 E.
- the host 4 is able to access this logical volume 2 D by correcting the path information, and is thereby able to read and write desired data.
- a plurality of virtualization storage devices 1 , 2 may be used to virtualize and utilize the external volume 3 A. And, the external volume 3 A may be transferred between the respective virtualization storage devices 1 , 2 . Accordingly, the first virtualization storage device 1 and second virtualization storage device 2 can be used to disperse the processing load, and the processing performance of the storage system can be improved thereby. Thus, even when the demand of storage services increases, by appropriately adding virtualization storage devices, it will be possible to deal with such increased demand, and the usability can be improved.
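- Gathering the steps above (S 0 to S 8 ) into one annotated sketch; every operation name below is a hypothetical stand-in, printed rather than performed:

```python
class Stub:
    """Hypothetical stand-in so the flow sketch runs; not a real device."""
    def __init__(self, name): self.name = name
    def __getattr__(self, op):
        return lambda *args: print(self.name, op, *args)

term, dev1, dev2, host = (Stub("terminal 7"), Stub("device 1"),
                          Stub("device 2"), Stub("host 4"))

term.monitor_load()                       # S 0 : decide introduction from load
dev2.install()                            # S 1 : add the second device
dev2.connect("upper level SAN 5")         # S 2 A
dev2.connect("lower level SAN 6")         # S 2 B
dev2.connect("device-to-device LAN 8")    # S 3 : link to the first device
dev1.copy_first_management_table("to device 2")   # S 4 : table 1 B -> 2 B
term.designate_transfer("external volume 3 A")    # S 5 : user selects volume
dev2.connect_external_volume("3 A")       # S 7 : attach via intermediate volume
host.set_path("logical volume 2 D")       # S 8 : host path to the new volume
```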
- the first virtualization storage device 1 may be removed from the storage system.
- FIG. 2 is an explanatory diagram showing the overall schematic of the storage system according to the present embodiment.
- the first virtualization storage device 100 A illustrated in FIG. 2 corresponds to the first virtualization storage device 1 of FIG. 1
- the second virtualization storage device 100 B corresponds to the second virtualization storage device 2 of FIG. 1
- the external storage device 200 illustrated in FIG. 2 corresponds to the external storage device 3 of FIG. 1
- the host 10 of FIG. 2 corresponds to the host 4 of FIG. 1
- the management terminal 20 of FIG. 2 corresponds to the management terminal 7 of FIG. 1
- the communication network CN 1 of FIG. 2 corresponds to the upper level SAN 5 of FIG. 1
- the communication network CN 2 of FIG. 2 corresponds to the lower level SAN 6 of FIG. 1
- the communication network CN 3 of FIG. 2 corresponds to the device-to-device LAN 8 of FIG. 1 .
- the respective hosts 10 are respectively connected to the respective virtualization storage devices 100 A, 100 B via the upper level network CN 1 .
- the respective virtualization storage devices 100 A, 100 B are respectively connected to the respective external storage device 200 via the lower level network CN 2 .
- the respective virtualization storage devices 100 A, 100 B and the management terminal 20 are connected via the management network CN 3 .
- the communication network CN 1 , CN 2 may be configured as an IP_SAN or FC_SAN.
- the communication network CN 3 may be configured as a LAN.
- the management communication network CN 3 may be abolished, and either or both the upper level network CN 1 and lower level network CN 2 may be used to transfer information for managing the storage system.
- the host 10 may be configured by having an HBA (Host Bus Adapter) 11 , a volume management unit 12 , and an application program 13 (abbreviated as “application” in the diagrams).
- when the upper level network CN 1 is configured as an IP_SAN, for instance, a LAN card equipped with a TCP/IP offload engine may be used as the HBA 11 .
- the volume management unit 12 manages the path information and the like to the volume to be accessed.
- the first virtualization storage device 100 A may be configured by having a host connection interface (abbreviated as “I/F” in the drawings) 111 T, a controller 101 A, and an external storage connection interface 111 E.
- While the first virtualization storage device 100 A has a logical volume 164 as described later, the hierarchical memory configuration will be described later together with FIG. 4 .
- the host connection interface 111 T is used for connecting to the respective hosts 10 via the upper level communication network CN 1 .
- the external storage connection interface 111 E is used for connecting to the respective external storage devices 200 via the lower level communication network CN 2 .
- the controller 101 A is used for controlling the operation of the first virtualization storage device 100 A. Although the controller 101 A will be described in detail later, it may, for instance, be configured by having one or a plurality of microprocessors, memory, a data processing circuit and the like.
- a management table T 1 A and attribute table T 2 A are respectively stored in the control memory 140 used by the controller 101 A.
- the management table T 1 A corresponds to the first management table 1 B of FIG. 1
- the attribute table T 2 A corresponds to the second management table 1 C of FIG. 1 .
- These management tables T 1 A, T 2 A will be described in detail later.
- Write data and the like written from the host 10 is stored in the cache memory 130 used by the controller 101 A.
- the second virtualization storage device 100 B may be configured by having a host connection interface 111 T, a controller 101 B, and an external storage connection interface 111 E. And, a management table T 1 B and attribute table T 2 B are stored in the control memory 140 used by the controller 101 B.
- the respective external storage devices 200 may be configured by respectively having a controller 210 , a communication port 211 , and a logical volume 240 . Since the logical volume 240 is a volume existing outside the respective virtualization storage devices 100 A, 100 B, this is sometimes referred to as an external volume in the present specification.
- the management terminal 20 , for instance, is configured as a personal computer, workstation, portable information terminal or the like, and has a monitoring unit 21 .
- the monitoring unit 21 respectively acquires the load status of the respective virtualization storage devices 100 A, 100 B, and displays the acquired load status on a terminal screen.
- reference numeral 30 in FIG. 2 represents a switch.
- Although the switch 30 is only shown in the upper level network CN 1 , one or a plurality of such switches may also be provided to the lower level network CN 2 .
- FIG. 3 is an explanatory diagram showing the detailed hardware configuration of the respective virtualization storage devices 100 A, 100 B.
- the first virtualization storage device 100 A may be configured by having a plurality of channel adapters (hereinafter referred to as a “CHA”) 110 , a plurality of disk adapters (hereinafter referred to as a “DKA”) 120 , a cache memory 130 , a control memory 140 , a connection control unit 150 , a memory unit 160 , and a service processor (hereinafter abbreviated as “SVP”) 170 .
- Each CHA 110 performs data communication with the host 10 .
- Each CHA 110 may have at least one or more communication interfaces 111 T for communicating with the host 10 .
- Each CHA 110 may be configured as a microcomputer system equipped with a CPU, memory and so on.
- Each CHA 110 interprets and executes the various commands such as a read command or write command received from the host 10 .
- Each CHA 110 is assigned a network address (e.g., IP address or WWN) for identifying the respective CHAs 110 , and each CHA 110 may also individually function as a NAS (Network Attached Storage).
- each CHA 110 receives and processes the request from each host 10 individually.
- a prescribed CHA 110 is provided with an interface (target port) 111 T for communicating with the host 10
- the other CHAs 110 are provided with an interface (externally connected port) 111 E for communicating with the external storage device 200 .
- Each DKA 120 is used for transferring data to and from the disk drive 161 of the memory unit 160 .
- Each DKA 120 as with the CHA 110 , is configured as a microcomputer system equipped with a CPU, memory and so on.
- Each DKA 120 , for example, is able to write data that the CHA 110 received from the host 10 or data read from the external storage device 200 into a prescribed disk drive 161 . Further, each DKA 120 is also able to read data from a prescribed disk drive 161 and transmit this to the host 10 or external storage device 200 .
- each DKA 120 converts a logical address into a physical address.
- When the disk drive 161 is managed according to RAID, each DKA 120 performs data access according to such RAID configuration. For example, each DKA 120 respectively writes the same data in separate disk drive groups (RAID groups) (RAID 1, etc.), or executes a parity calculation and writes the data and parity in the disk drive group (RAID 5, etc.).
- the respective virtualization storage devices 100 A, 100 B virtualize and incorporate the external volume 240 of the external storage device 200 , and provides this to the host 10 as though it is one's own internal volume.
- the respective virtualization storage devices 100 A, 100 B do not necessarily have to have a memory unit 160 .
- when the respective virtualization storage devices 100 A, 100 B are used only to virtualize and utilize the external volume 240 , the DKA 120 will not be required.
- the configuration may also be such that one virtualization storage device has a memory unit 160 , and the other virtualization storage device does not have a memory unit 160 .
- the cache memory 130 stores the data received from the host 10 or external storage device 200 . Further, the cache memory 130 stores data read from the disk drive 161 . As described later, the memory space of the cache memory 130 is used to create a virtual, intermediate memory apparatus (V-VOL).
- the control memory 140 stores various types of control information to be used in the operation of the virtualization storage device 100 A. Further, a work area is set in the control memory 140 , and various tables described later are also stored therein.
- one or a plurality of disk drives 161 may be used as the cache disk.
- the cache memory 130 and control memory 140 may be configured to be separate memories, or a part of the memory area of the same memory may be used as the cache area, and the other memory area may be used as the control area.
- connection control unit 150 mutually connects the respective CHAs 110 , respective DKAs 120 , cache memory 130 and control memory 140 .
- the connection control unit 150 can be configured as a crossbar switch or the like.
- the memory unit 160 has a plurality of disk drives 161 .
- as the disk drive 161 , for example, various memory apparatuses such as a hard disk drive, flexible disk drive, magnetic tape drive, semiconductor memory drive and optical disk drive as well as the equivalents thereof may be used. Further, for instance, different types of disks such as a FC (Fibre Channel) disk and a SATA (Serial AT Attachment) disk may coexist in the memory unit 160 .
- the service processor (SVP) 170 is respectively connected to each CHA 110 via an internal network such as a LAN.
- the SVP 170 is able to send and receive data to and from the control memory 140 or DKA 120 via the CHA 110 .
- the SVP 170 extracts various types of information in the first virtualization storage device 100 A and provides this to the management terminal 20 .
- Since the second virtualization storage device 100 B can also be configured the same as the first virtualization storage device 100 A, the explanation thereof is omitted. Nevertheless, the respective virtualization storage devices 100 A, 100 B do not have to be configured the same.
- the external storage device 200 may be configured approximately the same as the virtualization storage devices 100 A, 100 B, or may be configured more simply than the respective virtualization storage devices 100 A, 100 B.
- the upper level network CN 1 connecting the host 10 and respective virtualization storage devices 100 A, 100 B and the lower level network CN 2 mutually connecting the respective storage devices 100 A, 100 B, 200 are respectively configured as a separate communication network. Therefore, large quantities of data can be transferred with the lower level network CN 2 without directly influencing the upper level network CN 1 .
- FIG. 4 is an explanatory diagram showing the memory configuration of the storage system. Foremost, the configuration of the virtualization storage devices 100 A, 100 B is explained taking the first virtualization storage device 100 A as an example.
- the memory configuration of the first virtualization storage device 100 A can be broadly classified into a physical memory hierarchy and a logical memory hierarchy.
- the physical memory hierarchy is configured from a PDEV (Physical Device) 161 , which is a physical disk.
- PDEV corresponds to the foregoing disk drive 161 .
- the logical memory hierarchy may be configured from a plurality of (e.g., two types of) hierarchies.
- One logical hierarchy may be configured from a VDEV (Virtual Device) 162 , and a virtual VDEV (hereinafter sometimes referred to as “V-VOL”) 163 which is treated like the VDEV 162 .
- the other logical hierarchy may be configured from a LDEV (Logical Device) 164 .
- the VDEV 162 is configured by grouping a prescribed number of PDEVs 161 such as in a set of fours (3D+1P), or a set of eights (7D+1P).
- the memory areas provided respectively from each PDEV 161 belonging to the group are assembled to form a single RAID storage area. This RAID memory area becomes the VDEV 162 .
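- As a small worked example of such grouping (assuming equal-size drives; parity drives contribute redundancy, not usable space):

```python
def vdev_usable_gb(drive_gb, data_drives, parity_drives=1):
    """Drives consumed and usable space of a RAID group such as 3D+1P."""
    total_drives = data_drives + parity_drives
    usable_gb = drive_gb * data_drives
    return total_drives, usable_gb

print(vdev_usable_gb(300, 3))  # 3D+1P: (4, 900)  - four drives, 900 GB usable
print(vdev_usable_gb(300, 7))  # 7D+1P: (8, 2100) - eight drives, 2100 GB usable
```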
- the V-VOL 163 is a virtual intermediate memory apparatus that does not require a physical memory area.
- the V-VOL 163 is not directly associated with a physical memory area, and is a virtual existence to become the receiver for mapping an LU (Logical Unit) of the external storage device 200 .
- This V-VOL 163 corresponds to an intermediate volume.
- At least one or more LDEVs 164 may be provided on the VDEV 162 or V-VOL 163 .
- the LDEV 164 may be configured by dividing the VDEV 162 in a fixed length.
- When the host 10 is an open host, by the LDEV 164 being mapped to the LU 165 , the host 10 will recognize the LDEV 164 as a single physical disk.
- An open host can access a desired LDEV 164 by designating the LUN (Logical Unit Number) or logical block address.
- a mainframe host will directly recognize the LDEV 164 .
- the LU 165 is a device that can be recognized as a logical unit of SCSI.
- Each LU 165 is connected to the host 10 via the target port 111 T.
- At least one or more LDEVs 164 may be respectively associated with each LU 165 . By associating a plurality of LDEVs 164 with a single LU 165 , the LU size can be virtually expanded.
- a CMD (Command Device) 166 is a dedicated LU to be used for transferring commands and statuses between the I/O control program operating on the host 10 and the storage device 100 .
- a command from the host 10 is written in the CMD 166 .
- the first virtualization storage device 100 executes the processing according to the command written in the CMD 166 , and writes the execution result thereof as the status in the CMD 166 .
- the host device 10 reads and confirms the status written in the CMD 166 , and writes the processing contents to be executed subsequently in the CMD 166 .
- the host device 10 is able to give various designations to the first virtualization storage device 100 A via the CMD 166 .
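- The exchange through the CMD 166 is essentially a shared mailbox; a sketch with an in-memory stand-in (the class names and the single-slot design are assumptions for illustration):

```python
class CommandDevice:
    """Stand-in for the CMD 166 : a slot the host and storage both touch."""
    def __init__(self):
        self.slot = None

    def issue(self, command, storage):      # host side
        self.slot = ("command", command)    # host writes the command in CMD
        storage.poll(self)                  # storage processes what was written
        _, status = self.slot               # host reads and confirms the status
        return status

class Storage:
    def poll(self, cmd):                    # storage side
        _, command = cmd.slot
        result = f"executed:{command}"      # process per the written command
        cmd.slot = ("status", result)       # write the execution result back

print(CommandDevice().issue("pair-create", Storage()))  # -> executed:pair-create
```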
- the command received from the host device 10 may also be processed directly by the first virtualization storage device 100 A without being stored in the CMD 166 .
- the CMD may be created as a virtual device without defining the actual device (LU) and configured to receive and process the command from the host device 10 .
- the CHA 110 writes the command received from the host device 10 in the control memory 140
- the CHA 110 or DKA 120 processes this command stored in the control memory 140 .
- the processing results are written in the control memory 140 , and transmitted from the CHA 110 to the host device 10 .
- An external storage device 200 is connected to an initiator port (External Port) 111 E for external connection of the first virtualization storage device 100 A via the lower level network CN 2 .
- the external storage device 200 has a plurality of PDEVs 220 , a VDEV 230 set on a memory area provided by the PDEV 220 , and one or more LDEVs 240 that can be set on the VDEV 230 . And, each LDEV 240 is respectively associated with the LU 250 .
- the PDEV 220 corresponds to the disk drive 220 of FIG. 3 .
- the LDEV 240 corresponds to a “separate logical volume”, and corresponds to the external volume 3 A of FIG. 1 .
- the LU 250 (i.e., LDEV 240 ) of the external storage device 200 is mapped to the V-VOL 163 .
- the “LDEV 1 ”, “LDEV 2 ” of the external storage device 200 are respectively mapped to the “V-VOL 1 ”, “V-VOL 2 ” of the first virtualization storage device 100 A via the “LU 1 ”, “LU 2 ” of the external storage device 200 .
- “V-VOL 1 ”, “V-VOL 2 ” are respectively mapped to the “LDEV 3 ”, “LDEV 4 ”, and the host device 10 is thereby able to use these volumes via the “LU 3 ”, “LU 4 ”.
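- Written out as data, the mapping chain of this example looks as follows (illustrative only; an access to “LU 3 ” is ultimately served by “LDEV 1 ” of the external storage device):

```python
# host-visible LU -> LDEV -> V-VOL -> external LU -> external LDEV
mapping = {
    "LU 3": {"ldev": "LDEV 3", "v_vol": "V-VOL 1",
             "external_lu": "LU 1", "external_ldev": "LDEV 1"},
    "LU 4": {"ldev": "LDEV 4", "v_vol": "V-VOL 2",
             "external_lu": "LU 2", "external_ldev": "LDEV 2"},
}
print(mapping["LU 3"]["external_ldev"])   # -> LDEV 1
```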
- the VDEV 162 , V-VOL 163 may adopt the RAID configuration.
- a single disk drive 161 may be assigned to a plurality of VDEVs 162 , V-VOLs 163 (slicing), and a single VDEV 162 , V-VOL 163 may be formed from a plurality of disk drives 161 (striping).
- the second virtualization storage device 100 B may have the same hierarchical memory configuration as the first virtualization storage device 100 A, the explanation thereof is omitted.
- FIG. 5 is an explanatory diagram showing the schematic configuration of the management table T 1 A and attribute table T 2 A used by the first virtualization storage device 100 A.
- Each of these tables T 1 A, T 2 A may be stored in the control memory 140 .
- the management table T 1 A is used for uniformly managing the respective external volumes 240 dispersed in the storage system.
- the management table T 1 A may be configured by respectively associating a network address (WWN: World Wide Name) for connected to the respective external volumes 240 , a number (LUN: Logical Unit Number) of the respective external volumes 240 , volume size of the respective external volumes 240 , an external volume number, owner right information and transfer status flag.
- an external volume number is identifying information for uniquely specifying the respective external volumes 240 in the storage system.
- Owner right information is information for specifying the virtualization storage device having the authority to use such external volume. When “0” is set in the owner right information, it shows that such external volume 240 is unused. When “1” is set in the owner right information, it shows that one's own device has the usage authorization to use such external volume 240 . Further, when “-1” is set in the owner right information, it shows that another virtualization storage device has the usage authorization to use such external volume 240 .
- the first virtualization storage device 100 A has the usage authorization thereof.
- the second virtualization storage device 100 B has the usage authorization thereof.
- when the owner right information is set as “1” in one management table regarding a certain external volume 240 , the owner right information of such external volume is set to “-1” in the other management table. Thereby, the affiliation of such external volume 240 can be specified.
- the case number assigned to the respective virtualization storage devices may also be set.
- identifying information capable of uniquely specifying the respective virtualization storage devices in the storage system may be used as the owner right information.
- the transfer status flag is information showing that the external volume 240 is being transferred from one virtualization storage device to the other virtualization storage device.
- when “1” is set in the transfer status flag, this shows that the owner right of such external volume 240 is being changed.
- when “0” is set in the transfer status flag, this shows that such external volume 240 is in a normal state, and the owner right is not being changed.
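- To make the encoding above concrete, the following is a minimal sketch (not from the patent) of how one row of a management table T 1 might be modeled; the field names, the WWN value and the helper constants are assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical encoding of the owner right information described above.
UNUSED = 0         # the external volume is unused
OWN_DEVICE = 1     # one's own device has the usage authorization
OTHER_DEVICE = -1  # another virtualization storage device has it

@dataclass
class ManagementTableRow:
    """One illustrative entry of a management table T1 (T1A or T1B)."""
    wwn: str            # network address for connecting to the external volume
    lun: int            # LU number of the external volume
    size_gb: int        # volume size
    ext_volume_no: int  # number uniquely specifying the external volume system-wide
    owner_right: int    # UNUSED, OWN_DEVICE or OTHER_DEVICE
    transferring: bool  # transfer status flag: True while the owner right changes

# Example: a volume this device owns and is not currently transferring.
row = ManagementTableRow("50:06:0e:80:00:00:00:01", lun=1, size_gb=100,
                         ext_volume_no=3, owner_right=OWN_DEVICE, transferring=False)
assert row.owner_right == OWN_DEVICE and not row.transferring
```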
- the attribute table T 2 A is a table for managing various types of attribute information of the respective external volumes 240 .
- the attribute table T 2 A may be configured by associating the LU number of the respective external volumes 240 , path definition information, replication configuration information, replication status information, and replication bitmap information.
- Path definition information is information showing via which port of which CHA 110 the host 10 is to access the logical volume 164 connected to such external volume 240 .
- a plurality of paths may be set in the path definition information. One path is the normally used primary path, and the other path is an alternate path to be used when there is failure in the primary path.
- the replication configuration information is information showing the correspondence of the volumes configuring a copy-pair.
- a volume in which “P” is set in the replication configuration information is a primary volume (copy source volume), and a volume in which “S” is set in the replication configuration information is a secondary volume (copy destination volume).
- the numbers appended to “P” and “S” are serial numbers for identifying the respective copy-pairs.
- the replication status information is information showing the status of the respective volumes configuring the copy-pair.
- when “Pair” is set in the replication status information, the volume thereof is in synchronization with the volume of the other party, and the respective volumes forming the copy-pair maintain the same memory contents.
- when “Resync” is set in the replication status information, this shows that the volume thereof and the volume of the other party are in resynchronization.
- when “Simplex” is set in the replication status information, this shows that the volume thereof is not a target of replication.
- when “Suspend” is set in the replication status information, this shows that the volume thereof has not been updated in synchronization with the volume of the other party.
- the replication bitmap information is information showing the updated position of the data in the volume thereof. For example, a flag showing whether the data has been updated is prepared for each segment, and this means that, in a segment with “1” set to the flag, the data thereof has been updated.
- for example, the size of the replication bitmap information for a single volume will be 128 KB.
- when n volumes are managed, the total size of the replication bitmap information will be n × 128 KB.
- when n is 16384, this alone amounts to 16384 × 128 KB = 2 GB, and the table size of the attribute table T 2 A will be enormous. Accordingly, when the entirety of this attribute table T 2 A is to be transferred to the second virtualization storage device 100 B, the control memory 140 of the second virtualization storage device 100 B will be compressed. Thus, in the present embodiment, among the information stored in the attribute table T 2 A, only the information relating to the volume to be transferred to the second virtualization storage device 100 B is transferred to the second virtualization storage device 100 B. In other words, attribute information is transferred only to the necessary extent. Thereby, the data volume to be transferred can be reduced, the time required for creating the attribute table can be shortened, and the compression of the memory resource (control memory 140 ) of the second virtualization storage device 100 B, which is the transfer destination, can be prevented.
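- The arithmetic behind this concern is short enough to check directly; only the 128 KB per-volume figure and the n = 16384 example are taken from the text.

```python
BITMAP_PER_VOLUME_KB = 128  # per-volume replication bitmap size from the text
n = 16384                   # example number of managed volumes from the text

total_kb = n * BITMAP_PER_VOLUME_KB
print(total_kb // (1024 * 1024), "GB of bitmap data alone")  # -> 2 GB
```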
- information such as the device type (disk device or tape device, etc.), vendor name, identification number of the respective storage devices and so on may also be managed. Such information may be managed with either the management table T 1 A or the attribute table T 2 A.
- FIG. 6 is an explanatory diagram showing the schematic configuration of the management table T 1 B and attribute table T 2 B used by the second virtualization storage device 100 B.
- the management table T 1 B, as with the management table T 1 A described above, is configured, for instance, by associating a network address such as a WWN, an LU number, volume size, an external volume number, owner right information and a transfer status flag.
- the management table T 1 A and management table T 1 B are configured the same excluding the owner right information.
- the attribute table T 2 B is also configured by associating an LU number, path definition information, replication configuration information, replication status information and replication bitmap information. Nevertheless, as described above, in order to effectively use the memory resource of the second virtualization storage device 100 B, it should be noted that only the attribute information of the volumes under the control of the second virtualization storage device 100 B is registered in the attribute table T 2 B.
- FIG. 7 is an explanatory diagram showing the schematic configuration of the path setting information T 3 to be used by the volume management unit 12 of the host 10 .
- This path setting information T 3 may be stored in the memory of the host 10 or a local disk.
- the path setting information T 3 includes information relating to the primary path to be used in normal times, and information relating to the alternate path to be used in abnormal times.
- each path, for instance, is configured by including information for specifying the HBA 11 to be used, the port number of the access destination, and the LU number for identifying the volume of the access target.
- of the two alternate paths, the alternate path described first is a normal alternate path, and the subsequently described second alternate path is a path unique to the present embodiment; this second alternate path is set upon transferring the volume from the first virtualization storage device 100 A to the second virtualization storage device 100 B.
- FIG. 7 schematically shows the situation of switching from the primary path to the alternate path.
- the volume 420 of “# 0 ” is transferred from the first virtualization storage device 100 A to the second virtualization storage device 100 B.
- before the transfer, by accessing Port # 0 from HBA # 0 as shown with the thick line in FIG. 7 , the host 10 is able to read and write data from and into the logical volume of the first virtualization storage device 100 A.
- the external volume 240 is accessed from the Port # 1 based on the access from the host 10 .
- the second alternate path is a path to the second virtualization storage device 100 B, which is the volume transfer destination.
- the second virtualization storage device 100 B processes this access request, and returns the processing result to the host 10 .
- the processible state of the access request means that even when the access request from the host 10 is processed, inconsistency in the data stored in the volume will not occur. This will be described in detail later.
- when the host 10 is unsuccessful in accessing via the primary path, it switches to the first alternate path, and, when it is unsuccessful in accessing via the first alternate path, it switches to the second alternate path. Accordingly, until the access request of the host 10 is accepted, some time (path switching time) will be required. Nevertheless, this path switching time is not wasted time. This is because, as described later, destage processing of the transferred volume can be performed during such path switching time. In the present embodiment, merely by adding a new path to the path setting information T 3 stored in the host 10 , the access destination of the host 10 can be switched.
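- The failover order just described can be sketched as follows; the table layout and the `try_access` callback are hypothetical stand-ins for the volume management unit 12 , not an interface defined in the patent.

```python
# Hypothetical rendering of the path setting information T3: each path names
# the HBA to use, the target port, and the LU of the access target.
T3 = [
    {"role": "primary",          "hba": 0, "port": 0, "lun": 0},  # via device 100A
    {"role": "first alternate",  "hba": 1, "port": 1, "lun": 0},  # normal alternate
    {"role": "second alternate", "hba": 1, "port": 2, "lun": 0},  # via device 100B
]

def issue_io(try_access, paths=T3):
    """Walk the paths in order until one accepts the access request."""
    for path in paths:
        if try_access(path):
            return path["role"]
    raise IOError("no usable path")

# Mid-transfer, the primary and first alternate reject the request (the source
# device is destaging), so the host lands on the second alternate path.
print(issue_io(lambda p: p["role"] == "second alternate"))
```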
- FIG. 8 is a flowchart showing the outline of the processing for searching the external volume existing in the storage system and registering this in the management table T 1 A.
- FIG. 8 shows an example of a case where the first virtualization storage device 100 A executes the processing.
- the first virtualization storage device 100 A issues a command (“Test Unit Ready”) toward the respective external storage devices 200 for confirming the existence thereof (S 11 ).
- Each external storage device 200 operating normally will return a Ready reply having a Good status as the response to such command (S 12 ).
- the first virtualization storage device 100 A issues an “Inquiry” command to each external storage device 200 in which the existence thereof has been confirmed (S 13 ).
- each external storage device 200 that received this command, for instance, transmits information regarding its device type and so on to the first virtualization storage device 100 A (S 14 ).
- the first virtualization storage device 100 A issues a “Read Capacity” command to each external storage device 200 (S 15 ).
- Each external storage device 200 transmits the size of the external volume 240 to the first virtualization storage device 100 A (S 16 ).
- the first virtualization storage device 100 A transmits a “Report LUN” command to each external storage device 200 (S 17 ).
- Each external storage device 200 transmits the LUN quantity and LUN number to the first virtualization storage device 100 A (S 18 ).
- the first virtualization storage device 100 A registers the information acquired from each external storage device 200 in the management table T 1 A and attribute table T 2 A, respectively. As described above, the first virtualization storage device 100 A is able to respectively create the management table T 1 A and attribute table T 2 A by issuing a plurality of inquiry commands.
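- Steps S 11 to S 18 amount to four inquiries per device followed by table registration. The sketch below mocks the device side, since the actual SCSI transport is outside the text; all function and field names are illustrative.

```python
def discover_external_volumes(devices):
    """Build management-table rows by querying each external storage device,
    mirroring S11-S18: Test Unit Ready, Inquiry, Read Capacity, Report LUN."""
    table = []
    for dev in devices:
        if not dev.test_unit_ready():   # S11/S12: confirm the device exists
            continue
        info = dev.inquiry()            # S13/S14: device type and similar data
        size = dev.read_capacity()      # S15/S16: size of the external volume
        for lun in dev.report_luns():   # S17/S18: LUN quantity and numbers
            table.append({"wwn": dev.wwn, "lun": lun,
                          "size": size, "type": info["type"]})
    return table

class MockExternalDevice:               # stand-in for an external storage device 200
    wwn = "50:06:0e:80:00:00:00:02"
    def test_unit_ready(self): return True
    def inquiry(self): return {"type": "disk"}
    def read_capacity(self): return 100 * 2**30
    def report_luns(self): return [1, 2]

print(discover_external_volumes([MockExternalDevice()]))
```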
- the configuration of the storage system may change by one of the external storage devices 200 being removed, or a new external storage device 200 being added.
- the first virtualization storage device 100 A is able to detect such change in configuration based on commands and notifications such as RSCN (Registered State Change Notification), LIP (Loop Initialization Primitive), SCR (State Change Registration) or SCN (State Change Notification).
- next, the method by which the virtualization storage devices 100 A, 100 B use the external volume 240 to process the access request from the host 10 is explained.
- although a case where the first virtualization storage device 100 A processes the access request is explained below, the second virtualization storage device 100 B may also perform the same processing.
- the processing method of a write command is explained.
- as the method for processing the write command, two methods may be considered; namely, the synchronous transfer mode and the asynchronous transfer mode.
- in the synchronous transfer mode, when the first virtualization storage device 100 A receives a write command from the host 10 , it stores the write data received from the host 10 in the cache memory 130 , and thereafter transfers the write data to the external storage device 200 via the communication network CN 2 .
- when the external storage device 200 receives the write data and stores it in its cache memory, it transmits a reply signal to the first virtualization storage device 100 A.
- when the first virtualization storage device 100 A receives the reply signal from the external storage device 200 , it transmits a write completion report to the host 10 .
- as described above, in the synchronous transfer mode, the completion of the write command processing is notified to the host 10 after the write data is transferred to the external storage device 200 . Accordingly, in the synchronous transfer mode, a delay will arise while waiting for the reply from the external storage device 200 . Thus, the synchronous transfer mode is suitable in cases where the distance between the first virtualization storage device 100 A and the external storage device 200 is relatively short. Contrarily, if the first virtualization storage device 100 A and the external storage device 200 are far apart, generally speaking, the synchronous transfer mode is not suitable due to problems of delays in reply and delays in propagation.
- in the asynchronous transfer mode, when the first virtualization storage device 100 A receives a write command from the host 10 , it stores the write data in the cache memory 130 , and thereafter immediately issues a write completion report to the host 10 . After issuing the write completion report to the host 10 , the first virtualization storage device 100 A transfers the write data to the external storage device 200 .
- in the asynchronous transfer mode, the write completion report to the host 10 and the data transfer to the external storage device 200 are conducted asynchronously. Accordingly, the write completion report can be transmitted to the host 10 quickly, irrespective of the distance between the first virtualization storage device 100 A and the external storage device 200 .
- the asynchronous transfer mode is therefore suitable when the distance between the first virtualization storage device 100 A and the external storage device 200 is relatively long.
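- The difference between the two modes is purely one of ordering, which the following sketch makes explicit; `cache`, `external` and the completion callback are schematic stand-ins, not the device's actual interfaces.

```python
def write_synchronous(data, cache, external, report_completion):
    """Synchronous transfer mode: answer the host only after the external
    storage device has acknowledged the write (reply time grows with distance)."""
    cache.append(data)
    external.append(data)        # transfer over CN2 and wait for the reply signal
    report_completion()

def write_asynchronous(data, cache, external, report_completion, pending):
    """Asynchronous transfer mode: answer the host as soon as the data is in
    cache; the transfer to the external volume happens later (destaging)."""
    cache.append(data)
    report_completion()          # short reply time, independent of distance
    pending.append(data)         # to be transferred at a prescribed timing

cache, external, pending = [], [], []
write_asynchronous(b"blk0", cache, external,
                   lambda: print("write complete"), pending)
external.extend(pending)         # later: reflect the pending data in the volume
assert external == [b"blk0"]
```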
- FIG. 9 is an explanatory diagram showing the case of the asynchronous transfer mode.
- the virtualization storage devices 100 A, 100 B are not differentiated, and will be referred to as the “virtualization storage device 100 ”.
- the management tables T 1 A, T 1 B are not differentiated, and will be referred to as the “management table T 1 ”.
- the host 10 issues a write command to a prescribed LU 165 of the virtualization storage device 100 (S 31 ).
- the LU 165 is associated with the LU 250 of the external storage device 200 via the V-VOL 163 .
- the LU 165 of the virtualization storage device 100 is the access target of the host 10 , but the external LU 250 is actually storing the data. Therefore, for instance, the LU 165 may be referred to as an “access destination logical memory apparatus” and the LU 250 may be referred to as a “data storage destination logical memory apparatus”, respectively.
- when the virtualization storage device 100 receives a write command from the host 10 , it specifies the LU targeted by such write command, refers to the management table T 1 and determines whether this LU is associated with an external volume. When it is a write command to an LU associated with an external volume, the virtualization storage device 100 transmits a write command to the external storage device 200 having such external volume (S 32 ).
- the host 10 transmits the write data, with the LU 165 as the write target, to the virtualization storage device 100 (S 33 ).
- the virtualization storage device 100 temporarily stores the write data received from the host 10 in the cache memory 130 (S 34 ). After the virtualization storage device 100 stores the write data in the cache memory 130 , it reports the completion of writing to the host 10 (S 35 ).
- the virtualization storage device 100 transmits the write data stored in the cache memory 130 to the external storage device 200 (S 36 ).
- the external storage device 200 stores the write data received from the virtualization storage device 100 in the cache memory.
- the external storage device 200 reports the completion of writing to the virtualization storage device 100 (S 37 ).
- the external storage device 200 looks out for a period with few I/O, and writes the write data stored in the cache memory in the memory apparatus 220 (destage processing). In the asynchronous transfer mode, after write data is received from the host 10 , the write completion can be sent to the host 10 in a short reply time ⁇ 1 .
- FIG. 10 shows a case of the synchronous transfer mode.
- upon receiving the write command issued from the host 10 (S 41 ), the virtualization storage device 100 specifies the external volume (LU 250 ) associated with the access destination volume (LU 165 ) of the write command, and issues a write command to such external volume (S 42 ).
- when the virtualization storage device 100 receives the write data from the host 10 (S 43 ), it stores this write data in the cache memory 130 (S 44 ). The virtualization storage device 100 transfers the write data stored in the cache memory 130 to the external storage device 200 such that it is written in the external volume (S 45 ). After storing the write data in its cache memory, the external storage device 200 reports the completion of writing to the virtualization storage device 100 (S 46 ). When the virtualization storage device 100 confirms the completion of writing in the external storage device 200 , it reports the completion of writing to the host 10 (S 47 ). In the synchronous transfer mode, since the report of the write completion to the host 10 is made only after waiting for the processing in the external storage device 200 , the reply time δ 2 will become long. The reply time δ 2 of the synchronous transfer mode is longer than the reply time δ 1 of the asynchronous transfer mode (δ 2 > δ 1 ).
- the respective virtualization storage devices 100 A, 100 B are able to incorporate and use the external volume 240 of the external storage device 200 as though it is a virtual internal volume.
- the external volume 240 may also be transferred from the second virtualization storage device 100 B to the first virtualization storage device 100 A.
- FIG. 11 is a flowchart showing the processing for designating the transfer of the volume to the respective virtualization storage devices 100 A, 100 B.
- the monitoring unit 21 acquires performance information from the first virtualization storage device 100 A (S 51 ).
- the monitoring unit 21 displays the acquired performance information on a terminal screen of the management terminal 20 (S 52 ).
- This performance information corresponds to the information showing the “load status”, and, for instance, includes the input/output per second (IOPS), CPU usage rate, cache memory usage rate and so on.
- the user determines whether there is a high-load CPU based on the performance information displayed on the screen of the management terminal 20 (S 53 ).
- This CPU represents the CPU built in the CHA 110 .
- the user confirms that the load of every CPU of the other CHAs 110 is also greater than a prescribed value (S 54 ).
- the user determines the transfer of the external volume 240 under the control of such CHA 110 (S 55 ). Subsequently, the user sets a path of the transfer destination (S 56 ). In other words, the user defines the path information regarding which port the host 10 will use for the access in the second virtualization storage device 100 B, which is the transfer destination (S 56 ). The defined path information is added to the host 10 . Finally, the user designates the transfer of such external volume 240 to the respective virtualization storage devices 100 A, 100 B (S 57 ).
- the user specifies the external volume that is being the bottleneck in the first virtualization storage device 100 A, which is the transfer source (switching source) (S 53 to S 55 ) based on the monitoring result of the monitoring unit 21 (S 51 , S 52 ), and designates the start of transfer by defining the path of the transfer destination (S 56 , S 57 ).
- FIG. 12 is an explanatory diagram showing an example of a screen showing the monitoring result of the monitoring unit 21 .
- the monitoring unit 21 is able to respectively acquire performance information from the respective virtualization storage devices 100 A, 100 B, and display such performance information upon performing statistical processing or creating a graphical chart thereof.
- with the selection unit G 11 , it is possible to select the resource among the various resources in the storage system whose load status is to be displayed.
- as the resource, for instance, “network”, “storage”, “switch” and so on may be considered.
- the user may further select one of the virtualization storage devices 100 A, 100 B. Further, when the user selects one of the virtualization storage devices 100 A, 100 B, the user may make a more detailed selection. As such detailed selection, “port” and “LU” may be considered. As described above, the user is able to select in detail the desired target for confirming the load status.
- when one of the virtualization storage devices 100 A, 100 B is selected, the overall status of the selected virtualization storage device can be displayed as a list.
- when a more detailed monitoring target such as a “port” or “LU” is selected, the load status thereof can be displayed as a graph.
- the user is able to relatively easily determine which part of which virtualization storage device is a bottleneck based on the performance monitoring screen as shown in FIG. 12 .
- the user is able to determine the volume to be transferred based on such determination.
- FIG. 13 is a flowchart showing the situation of newly adding a second virtualization storage device 100 B to the storage system in a state where the first virtualization storage device 100 A is in operation, and transferring one or a plurality of volumes from the first virtualization storage device 100 A to the second virtualization storage device 100 B.
- the first virtualization storage device 100 A is abbreviated as the “first storage”
- the second virtualization storage device 100 B is abbreviated as the “second storage”, respectively.
- the user will be able to comprehend the load status of the first virtualization storage device 100 A with the methods described with reference to FIG. 11 and FIG. 12 . As a result, the user will be able to determine the additional injection of the second virtualization storage device 100 B.
- the user or engineer of the vendor performs physical connection procedures of the newly introduced second virtualization storage device 100 B (S 61 ).
- the host connection interface 111 T of the second virtualization storage device 100 B is connected to the upper level network CN 1
- the external storage connection interface 111 E of the second virtualization storage device 100 B is connected to the lower level network CN 2
- the SVP 170 of the second virtualization storage device 100 B is connected to the network CN 3 .
- the second virtualization storage device 100 B acquires the memory contents of the management table T 1 A from the first virtualization storage device 100 A (S 62 ). Based on such acquired contents, the second virtualization storage device 100 B creates a management table T 1 B. The second virtualization storage device 100 B respectively detects the external volumes 240 in the storage system based on the management table T 1 B (S 63 ).
- the second virtualization storage device 100 B connects the designated external volume 240 to the V-VOL 163 via the interface 111 E (S 65 ).
- the second virtualization storage device 100 B acquires attribute information relating to the transfer target volume from the storage device of the transfer source; that is, the first virtualization storage device 100 A (S 151 ).
- the second virtualization storage device 100 B registers the attribute information other than the path information among the acquired attribute information in the attribute table T 2 B (S 152 ).
- the second virtualization storage device 100 B newly sets path definition information regarding the transfer target volume (S 153 ).
- the user selects the logical volume 164 to be accessed from the host 10 as the transfer target.
- the external volume 240 connected to such logical volume 164 will be reconnected to a separate logical volume 164 of the transfer destination storage device ( 100 B).
- the virtualization storage devices 100 A, 100 B connect the external volume 240 to the logical volume 164 via the V-VOL 163 , and are able to use this as though it is one's own internal memory apparatus.
- the volume management unit 12 of the host 10 adds the path information for accessing the transferred volume to the path setting information T 3 (S 66 ).
- path information for accessing the logical volume 164 connected to the external volume 240 via a prescribed port of the second virtualization storage device 100 B is set.
- the first virtualization storage device 100 A sets an owner right regarding the external volume 240 designated as the transfer target (S 67 ). In other words, “-1” is set in the owner right information regarding the transfer target volume.
- the first virtualization storage device 100 A notifies the set owner right information to the second virtualization storage device 100 B (S 68 ).
- when the second virtualization storage device 100 B acquires the owner right information from the first virtualization storage device 100 A (S 69 ), it registers the acquired owner right information in the management table T 1 B (S 70 ).
- the owner right information is registered in the management table T 1 B upon the value thereof being changed to “1”. This is because the usage authorization of the transfer target volume has been transferred to the second virtualization storage device 100 B.
- the second virtualization storage device 100 B reports the completion of registration of the owner right information to the first virtualization storage device 100 A (S 71 ).
- the first virtualization storage device 100 A receives the setting completion report of the owner right information from the second virtualization storage device 100 B (S 72 ).
- the first virtualization storage device 100 A starts the destage processing without processing the access request (S 74 ). Access processing in the transfer source before the completion of transfer will be described later with reference to FIG. 14 .
- the second virtualization storage device 100 B receives a notice indicating the completion of destage processing from the first virtualization storage device 100 A (S 75 ).
- the host 10 refers to the path setting information T 3 , switches to a different path (S 76 ), and reissues the command (S 77 ).
- the switch shall be from the primary path passing through the first virtualization storage device 100 A to the second alternate path passing through the second virtualization storage device 100 B.
- the second virtualization storage device 100 B When the second virtualization storage device 100 B receives a command from the host 10 , it performs access processing (S 78 ). If at the point in time of receiving the command the destage processing of the transfer target volume is complete, normal access processing will be performed. If the destage processing is not complete, however, different access processing will be performed. Access processing in the transfer destination before the completion of the transfer will be described later with reference to FIG. 15 . Incidentally, the flow shown in FIG. 13 is merely an example, and, in reality, there are cases where the order of steps will be different.
- FIG. 14 is a flowchart showing the details of S 74 in FIG. 13 .
- when the first virtualization storage device 100 A, which is the transfer source storage device, receives a command from the host 10 (S 81 : YES), it analyzes the access target of such command.
- the first virtualization storage device 100 A determines whether the access target of the command is a logical volume 164 connected to an external volume 240 under its own usage authorization (S 82 ). In other words, the first virtualization storage device 100 A determines whether the command is an access request relating to an external volume 240 in which it has the owner right.
- when the first virtualization storage device 100 A does not have the owner right, the command processing from the host 10 is rejected (S 83 ).
- Rejection of the command processing may be made by not replying for a prescribed period of time (negative rejection), or by notifying the host 10 that processing is impossible (positive rejection).
- the first virtualization storage device 100 A starts the destage processing of dirty data regarding the external volume 240 in which the access was requested from the host 10 (S 84 ). And, when the destage processing is complete (S 85 : YES), the first virtualization storage device 100 A notifies the second virtualization storage device 100 B to such effect (S 86 ).
- here, the access target of the host 10 is the logical volume 164 of the first virtualization storage device 100 A.
- this logical volume 164 is selected as the transfer target, and is connected to the volume 240 of the external storage device 200 .
- the first virtualization storage device 100 A is processing the write command in the asynchronous transfer mode. Accordingly, the first virtualization storage device 100 A reports the completion of writing to the host 10 at the time the write data received from the host 10 is stored in the cache memory 130 . The write data stored in the cache memory 130 is transferred to the external storage device 200 in a prescribed timing, and reflected in the external volume 240 .
- in this case, the data stored in the cache memory 130 of the first virtualization storage device 100 A and the data stored in the external volume 240 may differ. Updated data regarding a certain segment or segment group is stored in the cache memory 130 , and old data before the update regarding the same segment or segment group is stored in the external volume 240 .
- data that is not yet reflected in the external volume 240 , for which the memory contents of the cache memory 130 and the memory contents of the external volume 240 do not coincide, is referred to as dirty data.
- conversely, data whose write data has been written in the external volume 240 , for which the memory contents of the cache memory 130 and the memory contents of the external volume 240 coincide, is referred to as clean data.
- the processing of writing and reflecting the dirty data stored in the cache memory 130 of the first virtualization storage device 100 A into the external volume 240 is referred to as destage processing.
- the first virtualization storage device 100 A, which is the transfer source, will perform destage processing without processing the access request from the host 10 .
- contrarily, when the first virtualization storage device 100 A has the owner right of the access target volume, it identifies the type of command (S 87 ) and performs normal access processing.
- the first virtualization storage device 100 A stores the write data received from the host 10 in the cache memory 130 (S 88 ), and notifies the host 10 of the completion of writing (S 89 ). Next, while looking out for a prescribed timing, the first virtualization storage device 100 A refers to the management table T 1 A, confirms the path to the external volume 240 (S 90 ), and transfers the write data to the external volume 240 (S 91 ).
- when it is a read command, the first virtualization storage device 100 A reads the data requested by the host 10 from the external volume 240 (S 92 ), and transfers this data to the host 10 (S 93 ). Incidentally, when reading data from the external volume 240 , the management table T 1 A is referred to. Further, when the data requested by the host 10 already exists in the cache memory 130 (i.e., on a cache hit), the first virtualization storage device 100 A transfers the data stored in the cache memory 130 to the host 10 without accessing the external volume 240 .
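- Collecting the branches of FIG. 14 in one place, a schematic sketch of the transfer-source behaviour follows; the `volume` dictionary and its keys are assumptions that reduce owner-right checks, rejection and destaging to simple data manipulation.

```python
def handle_command_at_source(cmd, volume, notify_destination):
    """Transfer-source processing per FIG. 14 (S81-S93), schematically."""
    if volume["owner_right"] != 1:             # S82: owner right already given away
        # S83/S84: reject the host's request and destage dirty data instead.
        volume["external"].update(volume["dirty"])
        volume["dirty"].clear()
        notify_destination()                   # S86: report destage completion
        return "rejected"
    if cmd["type"] == "write":                 # S87-S91: normal write processing
        volume["cache"][cmd["segment"]] = cmd["data"]
        volume["dirty"][cmd["segment"]] = cmd["data"]  # destaged later (async mode)
        return "write complete"
    # S92/S93: normal read processing, cache first, else the external volume.
    return volume["cache"].get(cmd["segment"],
                               volume["external"].get(cmd["segment"]))
```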
- FIG. 15 is a flowchart showing the details of S 78 in FIG. 13 .
- when the second virtualization storage device 100 B, which is the transfer destination, receives a command from the host 10 (S 101 : YES), it analyzes the access target of such command.
- the second virtualization storage device 100 B determines whether the access target of the host 10 is a logical volume 164 connected to an external volume 240 under the control of the second virtualization storage device 100 B (S 102 ). In other words, the second virtualization storage device 100 B determines whether the command is an access request relating to an external volume 240 in which it has the owner right.
- when the second virtualization storage device 100 B determines that this is an access request relating to a volume in which it has the owner right (S 102 : YES), it determines whether the destage processing performed by the first virtualization storage device 100 A regarding the external volume 240 connected to the logical volume 164 thereof is complete (S 103 ). In other words, the second virtualization storage device 100 B determines whether a destage completion notice has been acquired from the first virtualization storage device 100 A regarding such volume.
- when the destage processing is not complete, the second virtualization storage device 100 B rejects the command processing (S 104 ). This is in order to maintain the consistency of data regarding the transfer target volume.
- when the second virtualization storage device 100 B has the owner right regarding the access target volume of the host 10 (S 102 : YES), and the destage processing regarding the volume is complete (S 103 : YES), the second virtualization storage device 100 B is able to perform normal access processing.
- the normal access processing performed by the second virtualization storage device 100 B is the same as the normal access processing performed by the first virtualization storage device 100 A.
- the second virtualization storage device 100 B distinguishes the type of command received from the host 10 (S 105 ). When it is a write command, the second virtualization storage device 100 B stores the write data received from the host 10 in the cache memory 130 (S 106 ), and thereafter notifies the completion of writing to the host 10 (S 107 ). And, the second virtualization storage device 100 B refers to the management table T 1 B, confirms the path to the external volume 240 (S 108 ), and transfers the write data stored in the cache memory 130 to the external volume and writes it therein (S 109 ).
- when it is a read command, the second virtualization storage device 100 B reads the data requested by the host 10 from the external volume 240 (or the cache memory 130 ) (S 110 ), and transfers this data to the host 10 (S 111 ).
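- The mirror-image logic at the transfer destination (FIG. 15) hinges on two checks, owner right and destage completion; the sketch below uses the same assumed volume layout as the transfer-source sketch above.

```python
def handle_command_at_destination(cmd, volume, destage_complete):
    """Transfer-destination processing per FIG. 15 (S101-S111), schematically."""
    if volume["owner_right"] != 1:   # S102: not a volume this device owns
        return "rejected"
    if not destage_complete:         # S103/S104: the source still holds dirty data,
        return "rejected"            # so reject to keep the volume consistent
    if cmd["type"] == "write":       # S105-S109: normal write processing
        volume["cache"][cmd["segment"]] = cmd["data"]
        return "write complete"
    # S110/S111: normal read processing (external volume or cache memory).
    return volume["cache"].get(cmd["segment"],
                               volume["external"].get(cmd["segment"]))
```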
- the foregoing explanation is an example of newly introducing the second virtualization storage device 100 B to the storage system. Next, a case of introducing the second virtualization storage device 100 B and thereafter dispersing the load is explained.
- FIG. 16 is a flowchart showing a different example of transferring a volume between the respective virtualization storage devices 100 A, 100 B.
- the user is able to comprehend the operational status of the storage system based on the monitoring result of the monitoring unit 21 .
- the user may issue a designation so as to transfer the external volume 240 under the control of the first virtualization storage device 100 A to the second virtualization storage device 100 B via the management terminal 20 (S 121 ).
- a path for accessing via the second virtualization storage device 100 B is added to the path setting information T 3 of the host 10 based on the transfer designation from the management terminal 20 .
- when the first virtualization storage device 100 A receives a transfer designation from the management terminal 20 , it changes the owner right of the external volume designated as the transfer target from “1” to “-1”, and notifies this change to the second virtualization storage device 100 B (S 122 ).
- when the second virtualization storage device 100 B receives this notice from the first virtualization storage device 100 A (S 123 ), it sets “1” in the transfer status flag relating to the transfer target volume, updates the management table T 1 B (S 124 ), and notifies the completion of setting of the transfer status flag to the first virtualization storage device 100 A (S 125 ).
- when the first virtualization storage device 100 A receives this notice from the second virtualization storage device 100 B, it similarly sets “1” in the transfer status flag relating to the transfer target volume and updates the management table T 1 A (S 126 ). And, the first virtualization storage device 100 A starts the destage processing of dirty data relating to the transfer target volume (S 127 ).
- when an access request relating to the transfer target volume is received during this period, the first virtualization storage device 100 A will reject such processing (S 129 ).
- when the access processing is rejected by the first virtualization storage device 100 A, the host 10 refers to the path setting information T 3 and switches the path (S 130 ).
- here, the explanation is regarding a case of switching from the primary path passing through the first virtualization storage device 100 A to the alternate path passing through the second virtualization storage device 100 B.
- after switching the path, the host 10 reissues the command (S 131 ).
- This command may be a write command or a read command, and let it be assumed that a write command has been issued for the sake of convenience of explanation.
- when the second virtualization storage device 100 B receives a write command from the host 10 (S 132 ), it receives the write data transmitted from the host 10 after the write command, and stores this in the cache memory 130 (S 132 ). After storing the write data in the cache memory 130 , the second virtualization storage device 100 B reports the completion of writing to the host 10 (S 133 ). The host 10 receives a processing completion notice from the second virtualization storage device 100 B (S 134 ).
- the first virtualization storage device 100 A notifies the completion of the destage processing to the second virtualization storage device 100 B (S 136 ).
- when the second virtualization storage device 100 B receives this destage completion notice (S 137 ), it resets the transfer status flag relating to the transfer target volume (S 138 ). Thereby, the transfer of the volume is completed while maintaining the consistency of the volume.
- the second virtualization storage device 100 B performs the normal access processing (S 140 ).
- the second virtualization storage device 100 B may reject the processing of the read command until the destage processing by the first virtualization storage device 100 A is complete.
- FIG. 18 is an explanatory diagram schematically showing the situation of transferring a volume according to the present embodiment. Foremost, as shown in FIG. 18( a ), let it be assumed that only the first virtualization storage device 100 A is initially operating in the storage system. Under these circumstances, the first virtualization storage device 100 A is using all external volumes 240 .
- the user determines the introduction of the second virtualization storage device 100 B based on the load status of the first virtualization storage device 100 A, and adds the second virtualization storage device 100 B to the storage system.
- these volumes 240 are connected to the logical volume 164 of the second virtualization storage device 100 B. More precisely, when the user designates the transfer of a volume regarding the logical volume 164 of the first virtualization storage device 100 A, the external volumes 240 (#B, #C) connected to the transfer target logical volume 164 are re-connected to the logical volume 164 of the second virtualization storage device 100 B. Thereby, at least a part of the load of the first virtualization storage device 100 A will be transferred to the second virtualization storage device 100 B, and the bottleneck in the first virtualization storage device 100 A can be resolved. As a result, the response performance and efficiency of the overall storage system can be improved.
- a plurality of virtualization storage devices 100 A, 100 B may be used to manage each of the external volumes 240 . Accordingly, the load in the storage system can be dispersed and the processing performance of the overall storage system can be improved.
- the external volume 240 can be transferred between the respective virtualization storage devices 100 A, 100 B without stopping the access from the host 10 . Therefore, the volume can be transferred online without having to shut down the host 10 , and usability will improve.
- the user merely needs to make a designation via the management terminal 20 to transfer the external volume 240 between the respective virtualization storage devices 100 A, 100 B. Accordingly, in a storage system having a plurality of virtualization storage devices 100 A, 100 B capable of virtualizing and using the external volume 240 , the performance of the storage system can be improved with a relatively simple operation.
- the virtualization storage device 100 A, which is the transfer source, is configured such that it can reject the access request from the host 10 until the destage processing relating to the transfer target external volume 240 is complete. Therefore, the volume can be transferred while maintaining the consistency of data.
- the second embodiment of the present invention is now explained with reference to FIG. 19 .
- the present embodiment corresponds to a modified example of the foregoing first embodiment.
- the storage system autonomously disperses the load between the respective virtualization storage devices 100 A, 100 B.
- FIG. 19 is a flowchart of the transfer designation processing according to the present embodiment.
- This transfer designation processing, for example, can be executed by the management terminal 20 .
- the management terminal 20 acquires the performance information from the respective virtualization storage devices 100 A, 100 B (S 161 ).
- the management terminal 20 , based on each type of performance information, respectively calculates the loads LS 1 , LS 2 of the respective virtualization storage devices 100 A, 100 B (S 162 ). These loads, for example, may be calculated based on the input/output per second, CPU usage rate, cache memory usage rate and the like.
- the management terminal 20 compares the load LS 1 of the first virtualization storage device 100 A and the load LS 2 of the second virtualization storage device 100 B (S 163 ). When the first load LS 1 is greater than the second load LS 2 (LS 1 >LS 2 ), the management terminal 20 determines the logical volume (external volume) to be transferred from the first virtualization storage device 100 A to the second virtualization storage device 100 B (S 164 ). The management terminal 20 , for instance, may select the volume with the highest load in the device.
- the management terminal 20 judges whether the transfer timing has arrived (S 165 ), and, when the transfer timing has arrived (S 165 : YES), it defines the path information of the transfer destination (S 166 ), and respectively issues a transfer designation to the respective virtualization storage devices 100 A, 100 B (S 166 ). For example, a time frame with low access frequency from the host 10 may be pre-selected as the transfer timing.
- the management terminal 20 determines the volume to be transferred from the second virtualization storage device 100 B to the first virtualization storage device 100 A (S 168 ).
- the management terminal 20 looks out for a prescribed transfer timing (S 169 : YES), defines the path of the transfer destination (S 170 ), and respectively issues a transfer designation to the respective virtualization storage devices 100 A, 100 B (S 171 ).
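- The decision procedure of FIG. 19 reduces to: score each device, then move the busiest volume from the loaded device to the idle one at a quiet time. A sketch follows; the weighting of IOPS, CPU and cache usage into a single score is an assumption, since the text names the inputs but not the formula.

```python
def plan_transfer(perf_a, perf_b):
    """Pick a transfer direction and volume per FIG. 19 (S161-S171)."""
    def load(p):  # crude combined load score (assumed weighting)
        return p["iops"] / 1000 + p["cpu_pct"] + p["cache_pct"]

    ls1, ls2 = load(perf_a), load(perf_b)              # S162
    if ls1 == ls2:
        return None                                    # balanced: nothing to do
    src, dst = (perf_a, "100B") if ls1 > ls2 else (perf_b, "100A")  # S163/S167
    busiest = max(src["volumes"], key=lambda v: v["iops"])          # S164/S168
    return {"volume": busiest["name"], "to": dst}      # issue at a quiet time (S165/S169)

plan = plan_transfer(
    {"iops": 9000, "cpu_pct": 85, "cache_pct": 70,
     "volumes": [{"name": "LDEV3", "iops": 6000}, {"name": "LDEV4", "iops": 3000}]},
    {"iops": 1000, "cpu_pct": 20, "cache_pct": 30,
     "volumes": [{"name": "LDEV5", "iops": 100}]},
)
print(plan)  # -> {'volume': 'LDEV3', 'to': '100B'}
```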
- the present embodiment configured as described above also yields the same effects as the foregoing embodiments.
- the load dispersion between the plurality of virtualization storage devices 100 A, 100 B capable of respectively virtualizing the external volume 240 can be performed autonomously.
- the present invention is not limited to the embodiments described above. Those skilled in the art may make various additions and modifications within the scope of the present invention.
- the configuration may also be such that all external volumes are transferred to the second virtualization storage device, and the first virtualization storage device may be entirely replaced with the second virtualization storage device.
- the present invention is not limited thereto, and the configuration may be such that the function of the management terminal is built in one of the virtualization storage devices.
- the logical volume 164 of the transfer source and the logical volume 164 of the transfer destination will be set to be of the same size.
Abstract
The present invention is able to improve the processing performance of a storage system by respectively virtualizing external volumes and enabling the shared use of such external volumes by a plurality of available virtualization storage devices. By virtualizing and incorporating the external volume of an external storage device, a first virtualization storage device is able to provide the volume to a host as though it were an internal volume. When the load of the first virtualization storage device increases, a second virtualization storage device is newly introduced and connected to the storage system. When a transfer designation is issued from a management terminal, the external volume relating to the selected logical volume is transferred from the first virtualization storage device to the second virtualization storage device.
Description
- This application is a continuation application of U.S. Ser. No. 13/222,569, filed Aug. 31, 2011, which is a continuation application of U.S. Ser. No. 12/367,706, filed Feb. 9, 2009, which is a continuation application of U.S. Ser. No. 11/181,877, filed Jul. 15, 2005 (now abandoned), which relates to and claims priority from Japanese Patent Application No. 2005-150868, filed on May 24, 2005, the entire disclosures of all of which are incorporated herein by reference.
- The present invention relates to a storage system and an operation method of a storage system.
- For instance, government agencies, companies, educational institutions and others manage data with a relatively large storage system for handling various types of data in large quantities. This storage system, for example, is configured by including a storage device such as a disk array device. For instance, a storage device is configured by disposing a plurality of memory apparatuses in an array to provide a memory area based on RAID (Redundant Array of Inexpensive Disks). At least one or more logical volumes are formed on a physical memory area provided by the memory apparatus group, and this logical volume is provided to a host computer (hereinafter abbreviated as “host”). By transmitting a write command or read command, the host is able to write and read data into and from the logical volume.
- Data to be managed by companies and others is increasing daily. Thus, companies and others, for example, equip the storage system with a new storage device to expand the storage system. Two methods can be considered for introducing a new storage device to the storage system. One method is to replace the old storage device with a new storage device. Another method is to make the old storage device and new storage device coexist.
- Nevertheless, when making a full transition from the old storage device to a new storage device, the old storage device cannot be utilized. Meanwhile, when making the old storage device and new storage device coexist, the configuration of the storage system will become complex, and the management and operation thereof will become extremely troublesome.
- Thus, the present applicant has proposed technology of connecting a host and a first storage device and connecting the first storage device and a second storage device so that the first storage device will take over and process the access request from the host (Japanese Patent Laid-Open Publication No. 2004-005370). With this technology, the first storage device will also receive and process commands targeting the second storage device. If necessary, the first storage device issues a command to the second storage device, receives the processing result thereof, and transmits this to the host.
- With the conventional technology described in the foregoing document, the performance of the storage system is improved by making the first storage device and second storage device coexist without wasting any memory resource. Nevertheless, even with this kind of reinforced storage system, the processing performance may deteriorate during the prolonged operation thereof.
- For example, if the number of hosts connected to the first storage device increases, since numerous access requests will be issued from the respective hosts, the processing performance of the storage system will most likely deteriorate. Further, data to be managed will increase daily, and the method of use and frequency of use will differ diversely according to the nature of the respective data.
- Thus, further reinforcement of the storage system is desired. In such a case, the first storage device may be replaced with a different high-performance storage device, or a separate first storage device may be added to the existing first storage device. Nevertheless, the addition or replacement of the first storage device cannot be conducted as simply as the addition of the second storage device described in the foregoing document. This is because the first storage device is serially connected to the second storage device and uses the memory resource of the second storage device, and the configuration of the storage system is already complicated. The first storage device cannot be simply added or replaced by only focusing attention on the first storage device.
- The present invention was devised in view of the foregoing problems, and an object of the present invention is to provide a storage system and an operation method of a storage system configured by hierarchizing a plurality of storage devices for improving the processing performance thereof relatively easily. Another object of the present invention is to provide a storage system and an operation method of a storage system for improving the processing performance by enabling the shared use of one or a plurality of connection destination storage devices by a plurality of connection source storage devices. Other objects of the present invention will become clear from the detailed description of the preferred embodiments described later.
- In order to achieve the foregoing objects, the storage system according to the present invention has a plurality of connection source storage devices capable of respectively providing a logical volume to a host device; a connection destination storage device respectively connected to each of the connection source storage devices and having a separate logical volume; and a direction unit for directing the connection destination of the separate logical volume. And each of the connection source storage devices is configured by respectively having: a management information memory unit for storing management information for managing the separate logical volume; and a control unit for connecting the logical volume and the separate logical volume via an intermediate volume based on the management information stored in the management information memory unit; wherein the connection destination of the separate logical volume can be switched among each of the connection source storage devices based on the designation from the direction unit.
- The logical volume of the connection source storage device can be connected to a separate logical volume of the connection destination storage device via an intermediate volume. This connection may be made based on the management information stored in the management information memory unit.
- Here, when focusing on the connection source storage device, the connection destination storage device is an external storage device positioned outside the connection source storage device, and the separate logical volume of the connection destination storage device is an external volume positioned outside the connection source storage device. Therefore, in the following explanation, for ease of understanding the present invention, the connection destination storage device may be referred to as an external storage device, and the separate logical volume may be referred to as an external volume, respectively.
- The host device issues a read command, write command and so on with the logical volume of the connection source storage device as the access target. When the connection source storage device receives an access request from the host device, it issues a prescribed command to the external volume connected to the logical volume of the access target, and reads and writes data from and into the external volume. As described above, the logical volume of the connection source storage device is an access destination volume to become the access target from the host device, and the external volume (separate logical volume) of the external storage device is the data storage destination volume for actually storing the data. The host device is not able to directly recognize the external volume, and the external volume is transparent to the host device.
- The direction unit designates to which logical volume of the connection source storage device the external volume should be connected. Based on this designation, the connection destination of the external volume is switched among the respective connection source storage devices. In other words, while an external volume is connected to a logical volume of one connection source storage device via an intermediate volume, if the direction unit designates a switch to the other connection source storage device, the external volume is connected to a logical volume of the other connection source storage device via an intermediate volume.
- Thereby, a plurality of connection source storage devices may exclusively use one or a plurality of external volumes. Accordingly, for example, when there are numerous access requests to a specific external volume, such high-load external volume is transferred to a separate connection source storage device in order to disperse the load, and the processing performance of the overall storage system can be improved thereby.
- In the present embodiment, the connection destination of the separate logical volume is switchable among each of the connection source storage devices without stopping the access from the host device to the logical volume.
- In the present embodiment, the access destination of the host device is switched among each of the connection source storage devices according to the switching of the connection destination of the separate logical volume. In other words, when the connection destination of the external volume is switched from one connection source storage device to the other connection source storage device, the access destination of the host device will also be switched from one connection source storage device to the other connection source storage device.
- In the present embodiment, the management information is constituted by including first management information for specifying the separate logical volume, and second management information for managing the attribute of the separate logical volume; the first management information is retained by each of the connection source storage devices; and the second management information is retained by the connection source storage device of the switching destination selected as the connection destination of the separate logical volume.
- In other words, the management information for managing the separate logical volume has first management information and second management information, and the first management information is stored in each of the connection source storage devices, and the second management information is stored in the connection source storage device requiring such second management information.
- In the present embodiment, the first management information contains volume identifying information for specifying the separate logical volume in the storage system, usage authorization information for specifying the connection source storage device having usage authorization of the separate logical volume, and switching status information for showing whether the connection destination of the separate logical volume is being switched among each of the connection source storage devices; and the second management information contains a plurality of pieces of other attribute information relating to the separate logical volume.
- In the present embodiment, the usage authorization information is set with the connection source storage device that becomes the switching source among each of the connection source storage devices, notified from the connection source storage device that becomes the switching source to the connection source storage device that becomes the switching destination, and the change of the usage authorization information is determined by the connection source storage device that becomes the switching source receiving the setting completion report from the connection source storage device that becomes the switching destination.
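- As a hedged sketch of the handover just described (the helper names and message format are assumptions, not the actual device interface), the exchange may look as follows:

```python
# The switching source sets the new usage authorization, notifies the
# switching destination, and determines (commits) the change only after
# receiving the setting completion report from the destination.

def hand_over_usage_authorization(source, destination, ext_vol_no):
    source.set_owner_right(ext_vol_no, new_owner=destination.device_id)
    reply = source.notify(destination,
                          {"volume": ext_vol_no,
                           "new_owner": destination.device_id})
    if reply == "setting complete":            # completion report received
        source.commit_owner_right(ext_vol_no)  # change of owner right determined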
- In the present embodiment, a switching status flag is set while the connection destination of the separate logical volume is being switched from the connection source storage device that becomes the switching source to the connection source storage device that becomes the switching destination, and the switching status flag is reset when the connection destination of the separate logical volume is switched; while the switching status flag is being set, the connection source storage device that becomes the switching source destages unwritten data relating to the separate logical volume, and the connection source storage device that becomes the switching destination processes write data from the host device with an asynchronous system; and when the switching status flag is reset, the switching destination storage device destages the write data.
- Here, an asynchronous transfer mode is a mode for, in the case of writing data in a logical volume, reporting the completion of writing to the host device before writing such data in a physical memory apparatus. Contrarily, a synchronous transfer mode is a mode for, in the case of writing data in a logical volume, reporting the completion of writing to the host device after confirming that such data has been written in a physical memory apparatus.
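- A minimal sketch of the two modes, assuming illustrative `cache` and `external_volume` objects (none of these names come from the embodiment):

```python
def write_synchronous(cache, external_volume, data):
    """Report completion only after the data reaches the storage destination."""
    cache.store(data)                  # stage the write data in cache memory
    external_volume.write(data)        # write through to the physical apparatus
    return "write complete"            # host is notified after the write lands

def write_asynchronous(cache, external_volume, data):
    """Report completion before the data is written to the storage destination."""
    cache.store(data)                  # stage the write data in cache memory
    cache.mark_dirty(data)             # remember it as unwritten (dirty) data
    return "write complete"            # host is notified immediately;
                                       # destaging happens later, in the background
```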
- In the present embodiment, the connection source storage device that becomes the switching source among each of the connection source storage devices rejects the processing of access from the host device to the separate logical volume, and destages unwritten data relating to the separate logical volume.
- In other words, the connection source storage device that becomes the switching source, among the access requests from the host device, rejects the access request relating to the external volume to be transferred to the connection source storage device that becomes the switching destination. The rejection may be made either actively or passively. And, the connection source storage device that becomes the switching source destages unwritten data relating to such external volume to be transferred. As a result, the consistency of the data stored in such external volume can be maintained.
- In the present embodiment, when the destage is complete, the connection source storage device that becomes the switching source issues a destage completion report to the connection source storage device that becomes the switching destination; and upon receiving the destage completion report, the connection source storage device that becomes the switching destination performs the processing of access from the host device to the separate logical volume.
- In other words, the dirty data before transfer (before switching) is written in a physical memory apparatus configuring the external volume of the transfer target to maintain the consistency of data.
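- The switchover sequence of the preceding paragraphs can be summarized in the following sketch; it is an illustration under assumed method names, not the device firmware:

```python
def switch_external_volume(source, destination, volume):
    volume.switching_flag = True             # switching status flag is set
    source.reject_access(volume)             # source rejects host I/O to the volume
    source.destage_unwritten_data(volume)    # dirty (pre-switch) data written through
    destination.buffer_writes_async(volume)  # destination handles host writes with
                                             # the asynchronous system meanwhile
    source.report_destage_complete(destination, volume)
    destination.begin_access_processing(volume)  # destination now serves host I/O
    volume.switching_flag = False            # flag is reset once switching is done
    destination.destage_unwritten_data(volume)   # destination destages its writes
```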
- In the present embodiment, a monitoring unit is further provided for monitoring the load status relating to at least the connection source storage device that becomes the switching source among each of the connection source storage devices.
- And, the connection source storage device that becomes the switching source and the connection source storage device that becomes the switching destination among each of the connection source storage devices are respectively selected based on the monitoring result of the monitoring unit.
- As the load status, for instance, input/output per second (IOPS), CPU usage rate, cache memory usage rate, data traffic and so on may be considered. For example, when there is a logical volume whose load status has become higher than a prescribed threshold value, the external volume to which such logical volume is connected is transferred to a separate connection source storage device. Thereby, the load of the connection source storage device of the switching source can be reduced.
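- A minimal sketch of such a threshold check, with an assumed IOPS metric and an assumed threshold value:

```python
IOPS_THRESHOLD = 10_000   # prescribed threshold value (assumed figure)

def select_transfer_candidate(volume_loads):
    """Return the busiest volume whose load exceeds the threshold, if any."""
    overloaded = [v for v in volume_loads if v["iops"] > IOPS_THRESHOLD]
    return max(overloaded, key=lambda v: v["iops"]) if overloaded else None

loads = [{"volume": "LDEV 3", "iops": 4200},
         {"volume": "LDEV 4", "iops": 15800}]   # exceeds the threshold
print(select_transfer_candidate(loads))          # -> the "LDEV 4" entry
```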
- In the present embodiment, a management terminal to be connected to each of the connection source storage devices is further provided, wherein the direction unit and the monitoring unit are respectively provided to the management terminal.
- The storage system according to another perspective of the present invention has a plurality of connection source storage devices to be used by at least one or more host devices, and at least one or more connection destination storage devices to be connected to each of the connection source storage devices, wherein the host device and each of the connection source storage devices are respectively connected via a first communication network, and each of the connection source storage devices and the connection destination storage device are connected via a second communication network separated from the first communication network.
- Further, the connection destination storage device has a separate logical volume to be logically connected to a logical volume of each of the connection source storage devices. And, each of the connection source storage devices has a control unit for creating the logical volume and connecting the logical volume and the separate logical volume via an intermediate volume based on management information; and a memory used by the control unit and for storing the management information.
- Moreover, the management terminal to be connected to each of the connection source storage devices has a monitoring unit for respectively monitoring the load status of each of the connection source storage devices, and a direction unit for respectively selecting the connection source storage device that becomes the switching source and the connection source storage device that becomes the switching destination among each of the connection source storage devices based on the monitoring result of the monitoring unit.
- In addition, the management terminal switches the connection destination of the separate logical volume from the connection source storage device selected as the switching source to the connection source storage device selected as the switching destination based on the designation from the direction unit.
- Further, the management information is constituted by including first management information for specifying the separate logical volume, and second management information for managing the attribute of the separate logical volume, and the first management information is respectively stored in the connection source storage device selected as the switching source and the connection source storage device selected as the switching destination.
- The entirety of the second management information is stored in the connection source storage device selected as the switching source, and only the second management information relating to the separate logical volume in which the connection destination is switched is transferred from the connection source storage device selected as the switching source to the connection source storage device selected as the switching destination.
- The operation method of a storage system according to yet a different perspective of the present invention is a method of operating a storage system having a first connection source storage device and a second connection source storage device capable of respectively providing a logical volume to a host device via a first communication network, and a connection destination storage device connected to each of the first and second connection source storage devices via a second communication network, comprising the following steps.
- In the initial operation step, based on the management information for respectively connecting to a plurality of separate logical volumes of the connection destination storage device, the plurality of separate logical volumes are respectively connected to one or a plurality of logical volumes of the first connection source storage device via an intermediate volume of the first connection source storage device, and the first connection source storage device is made to process the access request from the host device.
- In the device addition step, the second connection source storage device is connected to the host device via the first communication network, to the connection destination storage device via the second communication network, and to the first connection source storage device via a third communication network.
- In the first management information transfer step, information for respectively specifying the plurality of separate logical volumes among the management information of the first connection source storage device is transferred from the first connection source storage device to the second connection source storage device via the third communication network.
- In the transfer target selection step, a separate logical volume is selected to be transferred to the second connection source storage device among the plurality of separate logical volumes used by the first connection source storage device.
- In the second management information transfer step, attribute information relating to the separate logical volume selected as the transfer target among the management information of the first connection source storage device is transferred from the first connection source storage device to the second connection source storage device via the third communication network.
- In the additional operation step, the separate logical volume selected as the transfer target is connected to the logical volume of the second connection source storage device via an intermediate volume of the second connection source storage device based on the information acquired at the first management information transfer step and the second management information transfer step, the path information for the host device to access the logical volume of the second connection source storage device is set in the host device, and the second connection source storage device is made to process the access request from the host device.
- Incidentally, the third communication network may also be used in combination with either the first communication network or second communication network.
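- The steps above can be condensed into the following sketch. It is only an outline under hypothetical object methods; the actual devices perform these steps through their controllers and the management terminal.

```python
def add_device_and_transfer(first, second, host, transfer_target):
    # Device addition step: connect the new device to the first, second
    # and third communication networks.
    second.connect_networks(host_network=True, storage_network=True,
                            device_to_device_lan=True)

    # First management information transfer step: volume-identifying
    # information is copied to the newly added device.
    second.first_mgmt_info = dict(first.first_mgmt_info)

    # The transfer target selection step is assumed to have produced
    # `transfer_target`; the second management information transfer step
    # copies only the attribute information of that separate logical volume.
    second.attribute_info[transfer_target] = first.attribute_info.pop(transfer_target)

    # Additional operation step: connect the volume via the second device's
    # intermediate volume and register a new path on the host.
    second.connect_via_intermediate_volume(transfer_target)
    host.add_path(device=second, volume=transfer_target)
```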
- The whole or a part of the means, functions and steps of the present invention may sometimes be configured as a computer program to be executed with a computer system. When the whole or a part of the configuration of the present invention is configured as a computer program, such computer program, for instance, may be fixed in various storage mediums and distributed, or transmitted via a communication network.
-
FIG. 1 is an explanatory diagram showing the overall concept of an embodiment of the present invention; -
FIG. 2 is an explanatory diagram showing the schematic configuration of the storage system; -
FIG. 3 is a block diagram showing the hardware configuration of the storage system; -
FIG. 4 is an explanatory diagram showing the frame format of the memory configuration of the storage system; -
FIG. 5 is an explanatory diagram showing the respective configurations of the management table and attribute table to be used by a first virtualization storage device; -
FIG. 6 is an explanatory diagram showing the respective configurations of the management table and attribute table to be used by a second virtualization storage device; -
FIG. 7 is an explanatory diagram showing the configuration of the path definition information and the situation of the host path being switched based on this path definition information; -
FIG. 8 is a flowchart showing the processing of the virtualization storage devices acquiring information on the external storage device and creating a management table and the like; -
FIG. 9 is an explanatory diagram showing the processing in the case of operating in the asynchronous transfer mode; -
FIG. 10 is an explanatory diagram showing the processing in the case of operating in the synchronous transfer mode; -
FIG. 11 is a flowchart showing the transfer designation processing to be performed using the management terminal; -
FIG. 12 is an explanatory diagram showing a screen display example of the load status being monitored with the management terminal; -
FIG. 13 is a flowchart showing the outline of the processing for newly adding the second virtualization storage device to the storage system and transferring the volume from the first virtualization storage device; -
FIG. 14 is a flowchart showing the access processing to be executed with the first virtualization storage device, which is the transfer source; -
FIG. 15 is a flowchart showing the access processing to be executed with the second virtualization storage device, which is the transfer destination; -
FIG. 16 is a flowchart showing the outline of the processing for transferring the volume between a plurality of virtualization storage devices; -
FIG. 17 is a flowchart showing the processing for the second virtualization storage device, which is the transfer destination, to connect with the external volume, which is the transfer target; -
FIG. 18 is an explanatory diagram showing the frame format of the situation of operating the storage system with a plurality of virtualization storage devices; and -
FIG. 19 is a flowchart showing the transfer designation processing to be executed with the storage system according to the second embodiment. -
FIG. 1 is an explanatory diagram of the configuration showing the overall schematic of an embodiment of the present invention. As shown in FIG. 1, this storage system, for instance, may be configured by having a plurality of virtualization storage devices 1, 2, a plurality of external storage devices 3, a plurality of host devices (hereinafter referred to as a "host") 4, an upper level SAN (Storage Area Network) 5, a lower level SAN 6, a management terminal 7, and a device-to-device LAN (Local Area Network) 8.
- Here, the virtualization storage devices 1, 2 correspond to "connection source storage devices", and the external storage device 3 corresponds to a "connection destination storage device". The host 4 corresponds to a "host device", the upper level SAN 5 corresponds to a "first communication network", the lower level SAN 6 corresponds to a "second communication network", the management terminal 7 corresponds to a "management terminal", and the device-to-device LAN 8 corresponds to a "third communication network".
- Incidentally, the upper level SAN 5 and lower level SAN 6, for example, may be configured as a FC_SAN (Fibre Channel_Storage Area Network) or IP_SAN (Internet Protocol_SAN), but they are not limited thereto, and, for instance, may also be configured as a LAN or WAN (Wide Area Network). The upper level SAN 5 is used for respectively connecting the respective hosts 4 and the respective virtualization storage devices 1, 2. The lower level SAN 6 is used for respectively connecting the respective virtualization storage devices 1, 2 and the respective external storage devices 3. The upper level SAN 5 and lower level SAN 6 are separated, and the traffic or failure of one communication network will not directly influence the other communication network.
- Attention is focused on the configuration of the first virtualization storage device 1. The first virtualization storage device 1 is used for virtualizing a volume 3A of the external storage device 3 and providing this to the host 4. This first virtualization storage device 1, for instance, has a control unit 1A, a first management table 1B, a second management table 1C, a logical volume 1D, and an intermediate volume 1E.
- Here, the control unit 1A corresponds to a "control unit", the first management table 1B corresponds to "first management information", the second management table 1C corresponds to "second management information", the logical volume 1D corresponds to a "logical volume", and the intermediate volume 1E corresponds to an "intermediate volume".
- The control unit 1A controls the overall operation of the first virtualization storage device 1. The control unit 1A, for instance, creates a logical volume 1D and provides this to the host 4. Further, the control unit 1A connects the logical volume 1D and external volume 3A via the intermediate volume 1E by using the first management table 1B and second management table 1C. Moreover, the control unit 1A transfers the whole or a part of the external volumes 3A under its own control to the second virtualization storage device 2 based on the designation from the management terminal 7.
- The first management table 1B is used for identifying the respective external volumes 3A in the storage system and connecting a desired external volume 3A to the logical volume 1D. The second management table 1C is used for managing other attribute information such as the copy status or difference management information (difference bitmap) of the respective external volumes 3A.
- The second virtualization storage device 2 may be configured the same as the first virtualization storage device 1. The second virtualization storage device 2, as with the first virtualization storage device 1, is able to connect the whole or a part of the respective external volumes 3A to the logical volume 2D via the intermediate volume 2E. And, the second virtualization storage device 2, as with the first virtualization storage device 1, is able to provide the external volume 3A to the host 4 as though it is one's own internal volume.
- The second virtualization storage device 2, for instance, may be configured by having a control unit 2A, a first management table 2B, a second management table 2C, a logical volume 2D and an intermediate volume 2E. Each of these components 2A to 2E has the same configuration as each of the components 1A to 1E described with reference to the first virtualization storage device 1, and the detailed description thereof is omitted.
- Nevertheless, it should be noted that the size of the second management table 2C is smaller than the size of the second management table 1C of the first virtualization storage device 1. In the present embodiment, only the attribute information relating to the external volume 3A transferred from the first virtualization storage device 1 to the second virtualization storage device 2 is copied from the second management table 1C of the first virtualization storage device 1 to the second management table 2C of the second virtualization storage device 2. Accordingly, the table size of the second management table 2C is smaller than that of the second management table 1C.
- When the first virtualization storage device 1 is already being used prior to the second virtualization storage device 2 being added to the storage system; that is, when the first virtualization storage device 1 is virtualizing and using all external volumes 3A, the first virtualization storage device 1 has already obtained the attribute information of all external volumes 3A. Under these circumstances, when the second virtualization storage device 2 is added to the storage system, and a part of the external volumes 3A is transferred from the first virtualization storage device 1 to the second virtualization storage device 2, only the attribute information relating to such transferred external volumes 3A is copied from the second management table 1C of the first virtualization storage device 1 to the second management table 2C of the second virtualization storage device 2.
- Each external storage device 3 has at least one or more external volumes 3A. An external volume is a volume existing outside the respective virtualization storage devices 1, 2. The external volume 3A, for example, is provided on a physical memory area of one or a plurality of memory apparatuses. As such memory apparatus, for instance, a hard disk drive, optical disk drive, semiconductor memory drive, tape drive and so on may be considered. Further, as the hard disk drive, for example, various disks such as a FC (Fibre Channel) disk, SAS (Serial Attached SCSI) disk and SATA (Serial AT Attachment) disk may be used. Each external volume 3A is connected to one of the logical volumes 1D, 2D via an intermediate volume 1E, 2E of the respective virtualization storage devices 1, 2.
- The management terminal 7 is connected to both of the virtualization storage devices 1, 2 via the device-to-device LAN 8. The management terminal 7, for example, is configured as a personal computer, portable information terminal (including portable phones) or the like, and has a monitoring unit 7A. The monitoring unit 7A respectively monitors the load status of the respective virtualization storage devices 1, 2.
- As the load status, for instance, input/output per second (IOPS), CPU usage rate, cache memory usage rate and so on may be considered. A user such as a system administrator is able to comprehend the load status of the respective virtualization storage devices 1, 2 based on the monitoring result of the monitoring unit 7A, and thereby determine the disposition of the volumes.
- Incidentally, at least a part of the judgment process by the user may be realized by a computer program, and the volume disposition may also be automatically conducted based on the load status of the respective virtualization storage devices 1, 2. The transfer of a volume between the respective virtualization storage devices 1, 2 can be designated by using the management terminal 7.
- Next, the operation method of the storage system according to the present embodiment is explained. In the most initial state, only the respective external storage devices 3 exist in the storage system. Thereafter, the user introduces the first virtualization storage device 1 to the storage system, virtualizes the external volume 3A of the respective external storage devices 3 with the first virtualization storage device 1, and provides this to the respective hosts 4. Thereafter, for instance, four more hosts 4 are added, and, when the processing performance of the first virtualization storage device 1 is used up to its upper limit, the user decides the introduction of the second virtualization storage device 2. The user is able to decide the introduction of the second virtualization storage device 2 based on the monitoring result of the monitoring unit 7A (S0).
- Then, the second virtualization storage device 2 is added to the storage system (S1). The user or a corporate engineer selling the second virtualization storage device 2 respectively connects the second virtualization storage device 2 to the upper level SAN 5 and lower level SAN 6 (S2A, S2B). Further, the second virtualization storage device 2 is connected to the first virtualization storage device 1 via the device-to-device LAN 8 (S3).
- Next, contents of the first management table 1B of the first virtualization storage device 1 are copied to the second virtualization storage device 2 (S4). Thereby, the first management table 2B is created in the second virtualization storage device 2.
- The user selects the external volume 3A to be transferred from the first virtualization storage device 1 to the second virtualization storage device 2 based on the monitoring result of the monitoring unit 7A, and designates the transfer of the volume (S5).
- Based on the designation from the management terminal 7, only the attribute information relating to the external volume 3A transferred to the second virtualization storage device 2, among the attribute information stored in the second management table 1C of the first virtualization storage device 1, is transferred from the first virtualization storage device 1 to the second virtualization storage device 2 (S6).
- The second virtualization storage device 2 connects the external volume 3A designated by the management terminal 7 and the logical volume 2D by using the first management table 2B and second management table 2C (S7). And, the second virtualization storage device 2 sets information for making the host 4 recognize the logical volume 2D, and the host 4 sets a path for accessing this logical volume 2D (S8).
- The data used by the host 4, in reality, is stored in a prescribed external volume 3A. Before the transfer of the volume, the host 4 is accessing the prescribed external volume 3A from the logical volume 1D of the first virtualization storage device 1 via the intermediate volume 1E. The host 4 is totally unaware that such data is stored in the prescribed external volume 3A.
- When transferring such prescribed external volume 3A from the first virtualization storage device 1 to the second virtualization storage device 2, the second virtualization storage device 2 connects such prescribed external volume 3A to the logical volume 2D via the intermediate volume 2E. The host 4 is able to access this logical volume 2D by correcting the path information, and is thereby able to read and write desired data.
- As described above, in the present embodiment, a plurality of virtualization storage devices 1, 2 are able to use the external volume 3A. And, the external volume 3A may be transferred between the respective virtualization storage devices 1, 2. Accordingly, the first virtualization storage device 1 and second virtualization storage device 2 can be used to disperse the processing load, and the processing performance of the storage system can be improved thereby. Thus, even when the demand of storage services increases, by appropriately adding virtualization storage devices, it will be possible to deal with such increased demand, and the usability can be improved.
- Incidentally, it is not necessary to make the respective virtualization storage devices 1, 2 coexist at all times; for example, after transferring all external volumes 3A from the first virtualization storage device 1 to the second virtualization storage device 2, the first virtualization storage device 1 may be removed from the storage system. Embodiments of the present invention are now described in detail below.
FIG. 2 is an explanatory diagram showing the overall schematic of the storage system according to the present embodiment. To foremost explain the correspondence with FIG. 1, the first virtualization storage device 100A illustrated in FIG. 2 corresponds to the first virtualization storage device 1 of FIG. 1, and the second virtualization storage device 100B corresponds to the second virtualization storage device 2 of FIG. 1. Similarly, the external storage device 200 illustrated in FIG. 2 corresponds to the external storage device 3 of FIG. 1, the host 10 of FIG. 2 corresponds to the host 4 of FIG. 1, and the management terminal 20 of FIG. 2 corresponds to the management terminal 7 of FIG. 1. The communication network CN1 of FIG. 2 corresponds to the upper level SAN 5 of FIG. 1, the communication network CN2 of FIG. 2 corresponds to the lower level SAN 6 of FIG. 1, and the communication network CN3 of FIG. 2 corresponds to the device-to-device LAN 8 of FIG. 1.
- To foremost explain the network configuration of the storage system, the respective hosts 10 are respectively connected to the respective virtualization storage devices 100A, 100B via the upper level network CN1, and the respective virtualization storage devices 100A, 100B are respectively connected to the external storage device 200 via the lower level network CN2. And, the respective virtualization storage devices 100A, 100B and the management terminal 20 are connected via the management network CN3. For example, the communication networks CN1, CN2 may be configured as an IP_SAN or FC_SAN. Further, for instance, the communication network CN3 may be configured as a LAN. Nevertheless, the management communication network CN3 may be abolished, and either or both the upper level network CN1 and lower level network CN2 may be used to transfer information for managing the storage system.
- The schematic configuration of the storage system is now explained. The host 10, for example, may be configured by having an HBA (Host Bus Adapter) 11, a volume management unit 12, and an application program 13 (abbreviated as "application" in the diagrams). When the upper level network CN1 is configured as an IP_SAN, in substitute for the HBA 11, for instance, a LAN card equipped with a TCP/IP offload engine may be used. The volume management unit 12 manages the path information and the like to the volume to be accessed.
- The first virtualization storage device 100A, for example, may be configured by having a host connection interface (abbreviated as "I/F" in the drawings) 111T, a controller 101A, and an external storage connection interface 111E. Incidentally, although the first virtualization storage device 100A has a logical volume 164 as described later, the hierarchical memory configuration will be described later together with FIG. 4.
- The host connection interface 111T is used for connecting to the respective hosts 10 via the upper level communication network CN1. The external storage connection interface 111E is used for connecting to the respective external storage devices 200 via the lower level communication network CN2.
- The controller 101A is used for controlling the operation of the first virtualization storage device 100A. Although details of the controller 101A will be described later, the controller 101A, for instance, may be configured by having one or a plurality of microprocessors, memory, data processing circuit and the like. A management table T1A and attribute table T2A are respectively stored in the control memory 140 used by the controller 101A. The management table T1A corresponds to the first management table 1B of FIG. 1, and the attribute table T2A corresponds to the second management table 1C of FIG. 1. These management tables T1A, T2A will be described in detail later. Write data and the like written from the host 10 is stored in the cache memory 130 used by the controller 101A.
- The second virtualization storage device 100B, as with the first virtualization storage device 100A, may be configured by having a host connection interface 111T, a controller 101B, and an external storage connection interface 111E. And, a management table T1B and attribute table T2B are stored in the control memory 140 used by the controller 101B.
- The respective external storage devices 200, for example, may be configured by respectively having a controller 210, a communication port 211, and a logical volume 240. Since the logical volume 240 is a volume existing outside the respective virtualization storage devices 100A, 100B, it corresponds to an external volume.
- The management terminal 20, for instance, is configured as a personal computer, workstation, portable information terminal or the like, and has a monitoring unit 21. The monitoring unit 21 respectively acquires the load status of the respective virtualization storage devices 100A, 100B.
- Incidentally, reference numeral 30 in FIG. 2 represents a switch. In FIG. 2, although the switch 30 is only shown in the upper level network CN1, one or a plurality of such switches may also be provided to the lower level network CN2.
FIG. 3 is an explanatory diagram showing the detailed hardware configuration of the respective virtualization storage devices 100A, 100B. Since the respective virtualization storage devices 100A, 100B may be configured the same, the first virtualization storage device 100A is explained as a representative example. The first virtualization storage device 100A, for instance, may be configured by having a plurality of channel adapters (hereinafter referred to as a "CHA") 110, a plurality of disk adapters (hereinafter referred to as a "DKA") 120, a cache memory 130, a control memory 140, a connection control unit 150, a memory unit 160, and a service processor (hereinafter abbreviated as "SVP") 170.
- Each CHA 110 performs data communication with the host 10. Each CHA 110 may have at least one or more communication interfaces 111T for communicating with the host 10. Each CHA 110 may be configured as a microcomputer system equipped with a CPU, memory and so on. Each CHA 110 interprets and executes the various commands such as a read command or write command received from the host 10.
- Each CHA 110 is assigned a network address (e.g., IP address or WWN) for identifying the respective CHAs 110, and each CHA 110 may also individually function as a NAS (Network Attached Storage). When there is a plurality of hosts 10, each CHA 110 receives and processes the request from each host 10 individually. Among the respective CHAs 110, a prescribed CHA 110 is provided with an interface (target port) 111T for communicating with the host 10, and the other CHAs 110 are provided with an interface (externally connected port) 111E for communicating with the external storage device 200.
- Each DKA 120 is used for transferring data to and from the disk drive 161 of the memory unit 160. Each DKA 120, as with the CHA 110, is configured as a microcomputer system equipped with a CPU, memory and so on. Each DKA 120, for example, is able to write data that the CHA 110 received from the host 10, or data read from the external storage device 200, into a prescribed disk drive 161. Further, each DKA 120 is also able to read data from a prescribed disk drive 161 and transmit this to the host 10 or external storage device 200. When inputting and outputting data to and from the disk drive 161, each DKA 120 converts a logical address into a physical address.
- When the disk drive 161 is managed according to RAID, each DKA 120 performs data access according to such RAID configuration. For example, each DKA 120 respectively writes the same data in separate disk drive groups (RAID groups) (RAID 1, etc.), or executes a parity calculation and writes the data and parity in the disk drive group (RAID 5, etc.). Incidentally, in the present embodiment, the respective virtualization storage devices 100A, 100B virtualize the external volume 240 of the external storage device 200, and provide this to the host 10 as though it is one's own internal volume.
- Therefore, the respective virtualization storage devices 100A, 100B do not necessarily have to have the memory unit 160, since the respective virtualization storage devices 100A, 100B are able to use the external volume 240. When the respective virtualization storage devices 100A, 100B do not have the memory unit 160, the DKA 120 will not be required. Incidentally, the configuration may also be such that one virtualization storage device has a memory unit 160, and the other virtualization storage device does not have a memory unit 160.
- The cache memory 130 stores the data received from the host 10 or external storage device 200. Further, the cache memory 130 stores data read from the disk drive 161. As described later, the memory space of the cache memory 130 is used to create a virtual, intermediate memory apparatus (V-VOL).
- The control memory 140 stores various types of control information to be used in the operation of the virtualization storage device 100A. Further, a work area is set in the control memory 140, and various tables described later are also stored therein.
- Incidentally, one or a plurality of disk drives 161 may be used as the cache disk. Further, the cache memory 130 and control memory 140 may be configured to be separate memories, or a part of the memory area of the same memory may be used as the cache area, and the other memory area may be used as the control area.
- The connection control unit 150 mutually connects the respective CHAs 110, respective DKAs 120, cache memory 130 and control memory 140. The connection control unit 150, for instance, can be configured as a crossbar switch or the like.
- The memory unit 160 has a plurality of disk drives 161. As the disk drive 161, for example, various memory apparatuses such as a hard disk drive, flexible disk drive, magnetic tape drive, semiconductor memory drive and optical disk drive, as well as the equivalents thereof, may be used. Further, for instance, different types of disks such as a FC (Fibre Channel) disk and a SATA (Serial AT Attachment) disk may coexist in the memory unit 160.
- The service processor (SVP) 170 is respectively connected to each CHA 110 via an internal network such as a LAN. The SVP 170 is able to send and receive data to and from the control memory 140 or DKA 120 via the CHA 110. The SVP 170 extracts various types of information in the first virtualization storage device 100A and provides this to the management terminal 20.
- Since the second virtualization storage device 100B can be configured the same as the first virtualization storage device 100A, the explanation thereof is omitted. Nevertheless, the respective virtualization storage devices 100A, 100B do not have to be configured identically.
- The external storage device 200 may be configured approximately the same as the virtualization storage devices 100A, 100B, or may be configured more simply than the virtualization storage devices 100A, 100B.
- Here, care should be given to the network configuration of the storage system. As described above, the upper level network CN1 connecting the host 10 and the respective virtualization storage devices 100A, 100B, and the lower level network CN2 connecting the respective storage devices 100A, 100B, 200, are configured as separate communication networks. Accordingly, the traffic or failure of one communication network will not directly influence the other communication network.
- Explanation is now provided with reference to FIG. 4. FIG. 4 is an explanatory diagram showing the memory configuration of the storage system. Foremost, the configuration of the virtualization storage devices is explained taking the first virtualization storage device 100A as an example.
- The memory configuration of the first virtualization storage device 100A, for example, can be broadly classified into a physical memory hierarchy and a logical memory hierarchy. The physical memory hierarchy is configured from a PDEV (Physical Device) 161, which is a physical disk. PDEV corresponds to the foregoing disk drive 161.
- The logical memory hierarchy may be configured from a plurality of (e.g., two types of) hierarchies. One logical hierarchy may be configured from a VDEV (Virtual Device) 162, and a virtual VDEV (hereinafter sometimes referred to as "V-VOL") 163 which is treated like the VDEV 162. The other logical hierarchy may be configured from a LDEV (Logical Device) 164.
- The VDEV 162, for example, is configured by grouping a prescribed number of PDEVs 161, such as in a set of fours (3D+1P) or a set of eights (7D+1P). The memory areas provided respectively from each PDEV 161 belonging to the group are assembled to form a single RAID storage area. This RAID memory area becomes the VDEV 162.
- In contrast to the VDEV 162 being created on a physical memory area, the V-VOL 163 is a virtual intermediate memory apparatus that does not require a physical memory area. The V-VOL 163 is not directly associated with a physical memory area, and is a virtual existence to become the receiver for mapping an LU (Logical Unit) of the external storage device 200. This V-VOL 163 corresponds to an intermediate volume.
- At least one or more LDEVs 164 may be provided on the VDEV 162 or V-VOL 163. The LDEV 164, for instance, may be configured by dividing the VDEV 162 in a fixed length. When the host 10 is an open host, by the LDEV 164 being mapped with the LU 165, the host 10 will recognize the LDEV 164 as a single physical disk. An open host can access a desired LDEV 164 by designating the LUN (Logical Unit Number) or logical block address. Incidentally, a mainframe host will directly recognize the LDEV 164.
- The LU 165 is a device that can be recognized as a logical unit of SCSI. Each LU 165 is connected to the host 10 via the target port 111T. At least one or more LDEVs 164 may be respectively associated with each LU 165. Incidentally, as a result of associating a plurality of LDEVs 164 to a single LU 165, the LU size can be virtually expanded.
- A CMD (Command Device) 166 is a dedicated LU to be used for transferring commands and statuses between the I/O control program operating on the host 10 and the storage device 100.
- For example, a command from the host 10 is written in the CMD 166. The first virtualization storage device 100A executes the processing according to the command written in the CMD 166, and writes the execution result thereof as the status in the CMD 166. The host device 10 reads and confirms the status written in the CMD 166, and writes the processing contents to be executed subsequently in the CMD 166. As described above, the host device 10 is able to give various designations to the first virtualization storage device 100A via the CMD 166.
- Incidentally, the command received from the host device 10 may also be processed directly by the first virtualization storage device 100A without being stored in the CMD 166. Moreover, the CMD may be created as a virtual device without defining the actual device (LU), and configured to receive and process the command from the host device 10. In other words, for example, the CHA 110 writes the command received from the host device 10 in the control memory 140, and the CHA 110 or DKA 120 processes this command stored in the control memory 140. The processing results are written in the control memory 140, and transmitted from the CHA 110 to the host device 10.
- An external storage device 200 is connected to an initiator port (External Port) 111E for external connection of the first virtualization storage device 100A via the lower level network CN2.
- The external storage device 200 has a plurality of PDEVs 220, a VDEV 230 set on a memory area provided by the PDEV 220, and one or more LDEVs 240 that can be set on the VDEV 230. And, each LDEV 240 is respectively associated with an LU 250. The PDEV 220 corresponds to the disk drive 220 of FIG. 3. The LDEV 240 corresponds to a "separate logical volume", and corresponds to the external volume 3A of FIG. 1.
- The LU 250 (i.e., LDEV 240) of the external storage device 200 is mapped to the V-VOL 163. For example, the "LDEV 1", "LDEV 2" of the external storage device 200 are respectively mapped to the "V-VOL 1", "V-VOL 2" of the first virtualization storage device 100A via the "LU 1", "LU 2" of the external storage device 200. And, "V-VOL 1", "V-VOL 2" are respectively mapped to the "LDEV 3", "LDEV 4", and the host device 10 is thereby able to use these volumes via the "LU 3", "LU 4".
- Incidentally, the VDEV 162, V-VOL 163 may adopt the RAID configuration. In other words, a single disk drive 161 may be assigned to a plurality of VDEVs 162, V-VOLs 163 (slicing), and a single VDEV 162, V-VOL 163 may be formed from a plurality of disk drives 161 (striping).
- Since the second virtualization storage device 100B may have the same hierarchical memory configuration as the first virtualization storage device 100A, the explanation thereof is omitted.
FIG. 5 is an explanatory diagram showing the schematic configuration of the management table T1A and attribute table T2A used by the first virtualization storage device 100A. Each of these tables T1A, T2A may be stored in the control memory 140.
- The management table T1A is used for uniformly managing the respective external volumes 240 dispersed in the storage system. The management table T1A, for instance, may be configured by respectively associating a network address (WWN: World Wide Name) for connecting to the respective external volumes 240, a number (LUN: Logical Unit Number) of the respective external volumes 240, the volume size of the respective external volumes 240, an external volume number, owner right information and a transfer status flag.
- Here, an external volume number is identifying information for uniquely specifying the respective external volumes 240 in the storage system. Owner right information is information for specifying the virtualization storage device having the authority to use such external volume. When "0" is set in the owner right information, it shows that such external volume 240 is unused. When "1" is set in the owner right information, it shows that one's own device has the usage authorization to use such external volume 240. Further, when "−1" is set in the owner right information, it shows that the other virtualization storage device has the usage authorization to use such external volume 240.
- Specifically, with respect to the external volume 240 to which "1" is set in the owner right information in the management table T1A used by the first virtualization storage device 100A, the first virtualization storage device 100A has the usage authorization thereof. Similarly, with respect to the external volume 240 to which "−1" is set in the management table T1A, the second virtualization storage device 100B has the usage authorization thereof. As described above, when the owner right information is set to "1" in one management table regarding a certain external volume 240, the owner right information of such external volume is set to "−1" in the other management table. By referring to the owner right information, whether such external volume is under the control of one of the virtualization storage devices, or is an unused volume, can be known.
- Incidentally, in the present embodiment, since only two virtualization storage devices 100A, 100B are provided in the storage system, the virtualization storage device having the usage authorization of the external volume 240 can be specified with the two values "1" and "−1". In addition to the above, if there are three or more virtualization storage devices in the storage system, as the owner right information, for instance, the case number assigned to the respective virtualization storage devices may also be set. In other words, identifying information capable of uniquely specifying the respective virtualization storage devices in the storage system may be used as the owner right information.
- The transfer status flag is information showing that the external volume 240 is being transferred from one virtualization storage device to the other virtualization storage device. When "1" is set in the transfer status flag, this shows that the owner right of such external volume 240 is being changed. Meanwhile, when "0" is set in the transfer status flag, this shows that such external volume 240 is in a normal state, and the owner right is not being changed.
- The attribute table T2A is a table for managing various types of attribute information of the respective external volumes 240. The attribute table T2A, for example, may be configured by associating the LU number of the respective external volumes 240, path definition information, replication configuration information, replication status information, and replication bitmap information. Path definition information is information for showing, via which port of which CHA 110, the logical volume 164 connected to such external volume 240 is to be accessed by the host 10. A plurality of paths may be set in the path definition information. One path is the normally used primary path, and the other path is an alternate path to be used when there is a failure in the primary path.
- The replication configuration information is information showing the correspondence of the volumes configuring a copy-pair. A volume in which "P" is set in the replication configuration information is a primary volume (copy source volume), and a volume in which "S" is set in the replication configuration information is a secondary volume (copy destination volume). Incidentally, the numbers appended to "P" and "S" are serial numbers for identifying the respective copy-pairs.
- The replication status information is information showing the status of the respective volumes configuring the copy-pair. When "Pair" is set in the replication status information, the volume thereof is in synchronization with the volume of the other party, and shows that the respective volumes forming the copy-pair are maintaining the same memory contents. When "Resync" is set in the replication status information, this shows that the volume thereof and the volume of the other party are in resynchronization. When "Simplex" is set in the replication status information, this shows that the volume thereof is not a target of replication. When "Suspend" is set in the replication status information, this shows that the volume thereof has not been updated with the volume of the other party.
- The replication bitmap information is information showing the updated position of the data in the volume thereof. For example, a flag showing whether the data has been updated is prepared for each segment, and, in a segment with "1" set to the flag, the data thereof has been updated. For example, when managing the existence of the update of data regarding a logical volume 164 having a volume size of 1 TB in a segment size of 1 MB, the size of the replication bitmap information will be 128 KB. When the first virtualization storage device 100A is able to set n number of logical volumes 164, the total size of the replication bitmap information will be n×128 KB. When n is 16384, the total size of the replication bitmap information will be 16384×128 KB=2048 MB.
- As described above, when only focusing attention on the replication bitmap information, the table size of the attribute table T2A will be enormous. Accordingly, if the entirety of this attribute table T2A were to be transferred to the second virtualization storage device 100B, the control memory 140 of the second virtualization storage device 100B would be compressed. Thus, in the present embodiment, among the information stored in the attribute table T2A, only the information relating to the volume to be transferred to the second virtualization storage device 100B is transferred to the second virtualization storage device 100B. In other words, attribute information is transferred only to the necessary extent. Thereby, the data volume to be transferred can be reduced, the time required for creating the attribute table can be shortened, and the compression of the memory resource (control memory 140) of the second virtualization storage device 100B, which is the transfer destination, can be prevented.
- Incidentally, in addition to the foregoing items, for instance, information such as the device type (disk device or tape device, etc.), vendor name, identification number of the respective storage devices and so on may also be managed. Such information may be managed with either the management table T1A or attribute table T2A.
FIG. 6 is an explanatory diagram showing the schematic configuration of the management table T1B and attribute table T2B used by the second virtualization storage device 100B. The management table T1B, as with the management table T1A described above, for instance, is configured by associating a network address such as a WWN, an LU number, a volume size, an external volume number, owner right information and a transfer status flag. The management table T1A and management table T1B are configured the same excluding the owner right information.
- The attribute table T2B, as with the attribute table T2A described above, is also configured by associating an LU number, path definition information, replication configuration information, replication status information and replication bitmap information. Nevertheless, as described above, in order to effectively use the memory resource of the second virtualization storage device 100B, it should be noted that only the attribute information of the volumes under the control of the second virtualization storage device 100B is registered in the attribute table T2B.
FIG. 7 is an explanatory diagram showing the schematic configuration of the path setting information T3 to be used by the volume management unit 12 of the host 10. This path setting information T3 may be stored in the memory of the host 10 or on a local disk.
- The path setting information T3 includes information relating to the primary path to be used in normal times, and information relating to the alternate paths to be used in abnormal times. Each path, for instance, is configured by including information for specifying the HBA 11 to be used, the port number of the access destination, and the LU number for identifying the volume of the access target.
- Although a plurality of alternate paths is described in the path setting information T3, the alternate path described first is a normal alternate path, and the subsequently described alternate path is a path unique to the present embodiment. In other words, the second alternate path is a path set upon transferring the volume from the first virtualization storage device 100A to the second virtualization storage device 100B.
- The lower part of FIG. 7 shows a frame format of the situation of switching from the primary path to the alternate path. Here, explained is a case where the volume of "#0" is transferred from the first virtualization storage device 100A to the second virtualization storage device 100B.
- Before the transfer, by accessing the Port #0 from the HBA #0 as shown with the thick line in FIG. 7, the host 10 is able to read and write data from and into the logical volume of the first virtualization storage device 100A. In the first virtualization storage device 100A, the external volume 240 is accessed from the Port #1 based on the access from the host 10.
- When transferring the volume, information for the host 10 to access the transferred volume is added to the path setting information T3 as the second alternate path. And, the first virtualization storage device 100A rejects the access request regarding the transferred volume.
- Therefore, even if the host 10 tries to access the transferred volume via the primary path shown with the thick line in FIG. 7, such access will be rejected by the first virtualization storage device 100A. Thus, the host 10 tries re-accessing such transferred volume by switching to the first alternate path (HBA #1 → Port #2 → LU #0) shown with the dotted line in FIG. 7. Nevertheless, this access is also rejected by the first virtualization storage device 100A.
- Then, the host 10 tries to access the volume by switching to the second alternate path (HBA #1 → Port #4 → LU #0) shown with the dashed line in FIG. 7. The second alternate path is a path to the second virtualization storage device 100B, which is the volume transfer destination. When the access request from the host 10 is in a processible state, the second virtualization storage device 100B processes this access request, and returns the processing result to the host 10. The processible state of an access request means that, even when the access request from the host 10 is processed, inconsistency will not occur in the data stored in the volume. This will be described in detail later.
- As described above, when the host 10 is unsuccessful in accessing via the primary path, it switches to the first alternate path, and, when it is unsuccessful in accessing via the first alternate path, it switches to the second alternate path. Accordingly, some time (path switching time) will be required until the access request of the host 10 is accepted. Nevertheless, this path switching time is not wasted. This is because, as described later, destage processing of the transferred volume can be performed during such path switching time. In the present embodiment, merely by adding a new path to the path setting information T3 stored in the host 10, the access destination of the host 10 can be switched.
FIG. 8 is a flowchart showing the outline of the processing for searching for the external volumes existing in the storage system and registering them in the management table T1A. Here, an example in which the first virtualization storage device 100A executes this processing is explained.
- Foremost, the first virtualization storage device 100A issues a command (“Test Unit Ready”) to the respective external storage devices 200 to confirm their existence (S11). Each external storage device 200 operating normally returns a Ready reply having a Good status in response to this command (S12).
- Next, the first virtualization storage device 100A issues an “Inquiry” command to each external storage device 200 whose existence has been confirmed (S13). Each external storage device 200 that receives this command transmits, for instance, information regarding its device type to the first virtualization storage device 100A (S14).
- The first virtualization storage device 100A issues a “Read Capacity” command to each external storage device 200 (S15). Each external storage device 200 transmits the size of its external volume 240 to the first virtualization storage device 100A (S16).
- The first virtualization storage device 100A transmits a “Report LUN” command to each external storage device 200 (S17). Each external storage device 200 transmits its LUN quantity and LUN numbers to the first virtualization storage device 100A (S18).
- The first virtualization storage device 100A registers the information acquired from each external storage device 200 in the management table T1A and the attribute table T2A, respectively. As described above, the first virtualization storage device 100A is able to create the management table T1A and the attribute table T2A by issuing a plurality of inquiry commands.
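The command sequence S11 to S18 can be illustrated as follows. This is a sketch under assumed names: send_scsi stands for whatever transport layer issues a SCSI command and returns its response, and the dictionary layout of the table entries is illustrative, not the patent's actual table format.

```python
def build_management_table(external_devices, send_scsi):
    table = []
    for dev in external_devices:
        if send_scsi(dev, "TEST UNIT READY") != "GOOD":   # S11/S12
            continue                                      # device not ready
        inquiry = send_scsi(dev, "INQUIRY")               # S13/S14: device type etc.
        capacity = send_scsi(dev, "READ CAPACITY")        # S15/S16: volume size
        luns = send_scsi(dev, "REPORT LUNS")              # S17/S18: LUN list
        for lun in luns:
            table.append({"device": dev, "lun": lun,
                          "size": capacity, "type": inquiry})
    return table
```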
- Incidentally, the configuration of the storage system may change when one of the external storage devices 200 is removed or a new external storage device 200 is added. When the configuration of the storage system changes, the first virtualization storage device 100A is able to detect the change based on commands and notifications such as RSCN (Registered State Change Notification), LIP (Loop Initialization Primitive), SCR (State Change Registration) or SCN (State Change Notification). Incidentally, the foregoing processing may also be executed by the second virtualization storage device 100B.
- Next, the method by which the virtualization storage devices 100A, 100B use the external volume 240 to process access requests from the host 10 is explained. Although a case where the first virtualization storage device 100A processes the access request is explained here, the second virtualization storage device 100B may perform the same processing. Foremost, the processing method for a write command is explained. Two methods may be considered for processing a write command: the synchronous transfer mode and the asynchronous transfer mode.
- In the case of the synchronous transfer mode, when the first virtualization storage device 100A receives a write command from the host 10, it stores the write data received from the host 10 in the cache memory 130, and thereafter transfers the write data to the external storage device 200 via the communication network CN2. When the external storage device 200 receives the write data and stores it in its cache memory, it transmits a reply signal to the first virtualization storage device 100A. When the first virtualization storage device 100A receives the reply signal from the external storage device 200, it transmits a write completion report to the host 10.
- As described above, in the synchronous transfer mode, the completion of the write command processing is notified to the host 10 only after the write data has been transferred to the external storage device 200. Accordingly, in the synchronous transfer mode, a delay arises while waiting for the reply from the external storage device 200. The synchronous transfer mode is therefore suitable in cases where the distance between the first virtualization storage device 100A and the external storage device 200 is relatively short. Contrarily, if the first virtualization storage device 100A and the external storage device 200 are far apart, generally speaking, the synchronous transfer mode is not suitable due to reply delays and propagation delays.
- Contrarily, in the case of the asynchronous transfer mode, when the first virtualization storage device 100A receives a write command from the host 10, it stores the write data in the cache memory 130 and thereafter immediately issues a write completion report to the host 10. Only after issuing the write completion report to the host 10 does the first virtualization storage device 100A transfer the write data to the external storage device 200. The write completion report to the host 10 and the data transfer to the external storage device 200 are thus conducted asynchronously. Accordingly, in the case of the asynchronous transfer mode, the write completion report can be transmitted to the host 10 quickly, irrespective of the distance between the first virtualization storage device 100A and the external storage device 200. The asynchronous transfer mode is therefore suitable when the distance between the first virtualization storage device 100A and the external storage device 200 is relatively long.
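The difference between the two modes can be made concrete with a short sketch. The cache and external objects below are assumed stand-ins for the cache memory 130 and the external storage device 200; neither interface is specified by the patent.

```python
def write_synchronous(cache, external, lba, data):
    cache[lba] = data
    external.write(lba, data)      # wait for the external device's reply first
    return "completion reported"   # host is notified only after that reply

def write_asynchronous(cache, external, lba, data, dirty):
    cache[lba] = data
    dirty.add(lba)                 # remember to destage this segment later
    return "completion reported"   # host is notified immediately

def flush(cache, external, dirty):
    # Later, at a prescribed timing, dirty data is sent to the external volume.
    for lba in sorted(dirty):
        external.write(lba, cache[lba])
    dirty.clear()
```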
- FIG. 9 is an explanatory diagram showing the case of the asynchronous transfer mode. In FIG. 9 and FIG. 10, the virtualization storage devices 100A, 100B are not differentiated and are collectively referred to as the “virtualization storage device 100”. Further, the management tables T1A, T1B are not differentiated, and are referred to as the “management table T1”.
- The host 10 issues a write command to a prescribed LU 165 of the virtualization storage device 100 (S31). The LU 165 is associated with the LU 250 of the external storage device 200 via the V-VOL 163. The LU 165 of the virtualization storage device 100 is the access target of the host 10, but it is the external LU 250 that actually stores the data. Therefore, the LU 165 may, for instance, be referred to as the “access destination logical memory apparatus” and the LU 250 as the “data storage destination logical memory apparatus”, respectively.
- When the virtualization storage device 100 receives a write command from the host 10, it specifies the LU targeted by the write command, refers to the management table T1, and determines whether this LU is associated with an external volume. When it is a write command to an LU associated with an external volume, the virtualization storage device 100 transmits a write command to the external storage device 200 having that external volume (S32).
- After the write command is issued, the host 10 transmits the write data, with the LU 165 as the write target, to the virtualization storage device 100 (S33). The virtualization storage device 100 temporarily stores the write data received from the host 10 in the cache memory 130 (S34). After storing the write data in the cache memory 130, the virtualization storage device 100 reports the completion of writing to the host 10 (S35).
- After converting the address and so on, the virtualization storage device 100 transmits the write data stored in the cache memory 130 to the external storage device 200 (S36). The external storage device 200 stores the write data received from the virtualization storage device 100 in its cache memory, and reports the completion of writing to the virtualization storage device 100 (S37). The external storage device 200, for example, waits for a period with little I/O and then writes the write data stored in its cache memory into the memory apparatus 220 (destage processing). In the asynchronous transfer mode, after the write data is received from the host 10, the write completion can thus be sent to the host 10 within a short reply time δ1.
- FIG. 10 shows the case of the synchronous transfer mode. Upon receiving the write command issued from the host 10 (S41), the virtualization storage device 100 specifies the external volume (LU 250) associated with the access destination volume (LU 165) of the write command, and issues a write command to that external volume (S42).
- When the virtualization storage device 100 receives the write data from the host 10 (S43), it stores this write data in the cache memory 130 (S44). The virtualization storage device 100 transfers the write data stored in the cache memory 130 to the external storage device 200 so that it is written into the external volume (S45). After storing the write data in its cache memory, the external storage device 200 reports the completion of writing to the virtualization storage device 100 (S46). When the virtualization storage device 100 confirms the completion of writing in the external storage device 200, it reports the completion of writing to the host 10 (S47). In the synchronous transfer mode, since the report of the write completion to the host 10 waits for the processing in the external storage device 200, the reply time δ2 becomes long. The reply time δ2 of the synchronous transfer mode is longer than the reply time δ1 of the asynchronous transfer mode (δ2≧δ1).
- As described above, the respective virtualization storage devices 100A, 100B are able to use the external volume 240 of the external storage device 200 as though it were a virtual internal volume.
- Next, the method of transferring the external volume 240 being used by the first virtualization storage device 100A to the second virtualization storage device 100B is explained. Incidentally, the external volume 240 may also be transferred from the second virtualization storage device 100B to the first virtualization storage device 100A.
- FIG. 11 is a flowchart showing the processing for designating the transfer of a volume to the respective virtualization storage devices 100A, 100B.
- For example, when the user provides a designation to the management terminal 20, the monitoring unit 21 acquires performance information from the first virtualization storage device 100A (S51). The monitoring unit 21 displays the acquired performance information on a terminal screen of the management terminal 20 (S52). This performance information corresponds to the information showing the “load status” and, for instance, includes the input/output operations per second (IOPS), the CPU usage rate, the cache memory usage rate and so on.
- The user discovers whether there is a high-load CPU based on the performance information displayed on the screen of the management terminal 20 (S53). Such a CPU is a CPU built into a CHA 110. Next, the user confirms whether the load of every CPU of the other CHAs 110 is also greater than a prescribed value (S54).
- Then, in order to alleviate the load of the high-load CHA 110, the user decides to transfer the external volume 240 under the control of that CHA 110 (S55). Subsequently, the user sets a path for the transfer destination; in other words, the user defines the path information specifying which port of the second virtualization storage device 100B, which is the transfer destination, the host 10 will use for access (S56). The defined path information is added to the host 10. Finally, the user designates the transfer of the external volume 240 to the respective virtualization storage devices 100A, 100B (S57).
- In other words, based on the monitoring result of the monitoring unit 21 (S51, S52), the user specifies the external volume that is the bottleneck in the first virtualization storage device 100A, which is the transfer source (switching source) (S53 to S55), and designates the start of the transfer by defining the path of the transfer destination (S56, S57). Each of the foregoing processing steps may also be conducted automatically.
- FIG. 12 is an explanatory diagram showing an example of a screen showing the monitoring result of the monitoring unit 21. The monitoring unit 21 is able to acquire performance information from each of the virtualization storage devices 100A, 100B.
- In the selection unit G11, it is possible to select the resource, among the various resources in the storage system, whose load status is to be displayed. Here, as the resource, for instance, “network”, “storage”, “switch” and so on may be considered.
- When the user selects “storage”, the user may further select one of the virtualization storage devices 100A, 100B.
- For example, in the first display unit G12, the overall status of the virtualization storage device selected from among the virtualization storage devices 100A, 100B can be displayed as a list.
- The user is able to relatively easily determine which part of which virtualization storage device is a bottleneck based on a performance monitoring screen such as the one shown in FIG. 12, and is thus able to decide which volume should be transferred.
- FIG. 13 is a flowchart showing the situation of newly adding a second virtualization storage device 100B to the storage system while the first virtualization storage device 100A is in operation, and of transferring one or a plurality of volumes from the first virtualization storage device 100A to the second virtualization storage device 100B. Incidentally, in FIG. 13 and subsequent figures, the first virtualization storage device 100A is abbreviated as the “first storage” and the second virtualization storage device 100B as the “second storage”, respectively.
- The user is able to comprehend the load status of the first virtualization storage device 100A with the methods described with reference to FIG. 11 and FIG. 12, and can thereby decide on the additional introduction of the second virtualization storage device 100B.
- Foremost, the user or an engineer of the vendor performs the physical connection procedures for the newly introduced second virtualization storage device 100B (S61). Specifically, the host connection interface 111T of the second virtualization storage device 100B is connected to the upper level network CN1, the external storage connection interface 111E of the second virtualization storage device 100B is connected to the lower level network CN2, and the SVP 170 of the second virtualization storage device 100B is connected to the network CN3.
- Next, the second virtualization storage device 100B acquires the memory contents of the management table T1A from the first virtualization storage device 100A (S62). Based on the acquired contents, the second virtualization storage device 100B creates a management table T1B. The second virtualization storage device 100B then detects the respective external volumes 240 in the storage system based on the management table T1B (S63).
- When the user designates the transfer of the volume from the management terminal 20 (S64), the second virtualization storage device 100B connects the designated external volume 240 to the V-VOL 163 via the interface 111E (S65).
- Details of the external connection are shown in FIG. 17, so the explanation proceeds with reference to FIG. 17 first. The second virtualization storage device 100B acquires attribute information relating to the transfer target volume from the storage device of the transfer source, that is, the first virtualization storage device 100A (S151). The second virtualization storage device 100B registers the acquired attribute information, other than the path information, in the attribute table T2B (S152). The second virtualization storage device 100B then newly sets path definition information regarding the transfer target volume (S153).
- Here, the user selects the logical volume 164 to be accessed by the host 10 as the transfer target. When the selected logical volume 164 is connected to the external volume 240, the external volume 240 connected to that logical volume 164 is ultimately reconnected to a separate logical volume 164 of the transfer destination storage device (100B). As described above, the virtualization storage devices 100A, 100B connect the external volume 240 to the logical volume 164 via the V-VOL 163, and are able to use it as though it were their own internal memory apparatus.
- Returning to FIG. 13, the volume management unit 12 of the host 10 adds the path information for accessing the transferred volume to the path setting information T3 (S66). In other words, path information for accessing the logical volume 164 connected to the external volume 240 via a prescribed port of the second virtualization storage device 100B is set.
- The first virtualization storage device 100A sets an owner right regarding the external volume 240 designated as the transfer target (S67); in other words, “−1” is set in the owner right information regarding the transfer target volume. The first virtualization storage device 100A notifies the set owner right information to the second virtualization storage device 100B (S68).
- When the second virtualization storage device 100B acquires the owner right information from the first virtualization storage device 100A (S69), it registers the acquired owner right information in the management table T1B (S70). Here, the owner right information is registered in the management table T1B with its value changed to “1”, because the usage authorization of the transfer target volume has now been transferred to the second virtualization storage device 100B. The second virtualization storage device 100B reports the completion of the registration of the owner right information to the first virtualization storage device 100A (S71), and the first virtualization storage device 100A receives this setting completion report from the second virtualization storage device 100B (S72).
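The owner right hand-over of S67 to S72 amounts to a small two-party protocol. The sketch below uses assumed dictionary-based stand-ins for the management tables T1A and T1B; the values follow the convention above, where “1” marks a volume the device may use and “−1” marks a volume whose usage authorization has been surrendered.

```python
def hand_over_owner_right(t1a, t1b, volume_id):
    # S67: the transfer source sets "-1" for the transfer target volume.
    t1a[volume_id]["owner"] = -1
    # S68/S69: the owner right information is sent to the transfer destination.
    notice = {"volume": volume_id, "owner": t1a[volume_id]["owner"]}
    # S70: the destination registers it with the value changed to "1",
    # since the usage authorization now lies with the destination.
    t1b[notice["volume"]]["owner"] = 1
    # S71/S72: a registration completion report is returned to the source.
    return "registration complete"
```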
- When an access request relating to the transfer target volume is issued by the host 10 (S73), the first virtualization storage device 100A starts the destage processing without processing the access request (S74). Access processing at the transfer source before the completion of the transfer will be described later with reference to FIG. 14. The second virtualization storage device 100B then receives a notice indicating the completion of the destage processing from the first virtualization storage device 100A (S75).
- Meanwhile, when the processing of the command issued to the first virtualization storage device 100A is rejected, the host 10 refers to the path setting information T3, switches to a different path (S76), and reissues the command (S77). Here, for convenience of explanation, the switch shall be from the primary path passing through the first virtualization storage device 100A to the second alternate path passing through the second virtualization storage device 100B.
- When the second virtualization storage device 100B receives a command from the host 10, it performs access processing (S78). If the destage processing of the transfer target volume is complete at the time the command is received, normal access processing is performed. If the destage processing is not complete, however, different access processing is performed. Access processing at the transfer destination before the completion of the transfer will be described later with reference to FIG. 15. Incidentally, the flow shown in FIG. 13 is merely an example, and, in reality, there are cases where the order of the steps will differ.
- FIG. 14 is a flowchart showing the details of S74 in FIG. 13. When the first virtualization storage device 100A, which is the transfer source storage device, receives a command from the host 10 (S81: YES), it analyzes the access target of the command. The first virtualization storage device 100A determines whether the access target of the command is a logical volume 164 connected to an external volume 240 that is under its own usage authorization (S82); in other words, it determines whether the command is an access request relating to an external volume 240 for which it holds the owner right.
- When the first virtualization storage device 100A determines that the command is an access to a logical volume 164 connected to an external volume 240 for which it does not have the usage authorization, that is, an external volume 240 whose owner right information is set to “−1” (S82: NO), the command processing requested by the host 10 is rejected (S83). Rejection of the command processing may be effected, for instance, by not replying for a prescribed period of time (negative rejection), or by notifying the host 10 that processing is impossible (positive rejection).
- The first virtualization storage device 100A starts the destage processing of the dirty data regarding the external volume 240 to which the access was requested by the host 10 (S84). When the destage processing is complete (S85: YES), the first virtualization storage device 100A notifies the second virtualization storage device 100B to that effect (S86).
- A more detailed explanation is now provided. The access target of the host 10 is the logical volume 164 of the first virtualization storage device 100A. This logical volume 164 is selected as the transfer target, and it is connected to the logical volume 240 of the external storage device 200.
- Here, the first virtualization storage device 100A processes write commands in the asynchronous transfer mode. Accordingly, the first virtualization storage device 100A reports the completion of writing to the host 10 at the time the write data received from the host 10 is stored in the cache memory 130. The write data stored in the cache memory 130 is transferred to the external storage device 200 at a prescribed timing and reflected in the external volume 240.
- At the stage before the write data is written into the external volume 240, the data stored in the cache memory 130 of the first virtualization storage device 100A and the data stored in the external volume 240 differ: updated data regarding a certain segment or segment group is stored in the cache memory 130, while the old, pre-update data regarding the same segment or segment group is stored in the external volume 240. Data that is not yet reflected in the external volume 240, for which the memory contents of the cache memory 130 and the memory contents of the external volume 240 do not coincide, is referred to as dirty data. Conversely, data whose write data has been written into the external volume 240, for which the memory contents of the cache memory 130 and of the external volume 240 coincide, is referred to as clean data. The processing of writing the dirty data stored in the cache memory 130 of the first virtualization storage device 100A into the external volume 240, so that it is reflected there, is referred to as destage processing.
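Destage processing as just defined can be sketched as follows, with an assumed per-segment state flag; the patent does not prescribe this data layout.

```python
def destage(cache_segments, external_volume):
    """Write every dirty segment to the external volume and mark it clean."""
    for segment_no, segment in cache_segments.items():
        if segment["state"] == "dirty":               # not yet reflected
            external_volume.write(segment_no, segment["data"])
            segment["state"] = "clean"                # contents now coincide
    # Once everything is clean, the transfer destination can be notified (S86).
```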
- In the present embodiment, in order to maintain the consistency of the data before and after the transfer of the volume, the first virtualization storage device 100A, which is the transfer source, will, when the owner right is changed, perform the destage processing without processing the access request from the host 10.
- Meanwhile, when the access target of the host 10 is a logical volume 164 other than the transfer target (S82: YES), the first virtualization storage device 100A identifies the type of the command (S87) and performs normal access processing.
- When it is a write command, the first virtualization storage device 100A stores the write data received from the host 10 in the cache memory 130 (S88) and notifies the host 10 of the completion of writing (S89). Next, at a prescribed timing, the first virtualization storage device 100A refers to the management table T1A, confirms the path to the external volume 240 (S90), and transfers the write data to the external volume 240 (S91).
- When it is a read command, the first virtualization storage device 100A reads the data requested by the host 10 from the external volume 240 (S92) and transfers this data to the host 10 (S93). Incidentally, when reading data from the external volume 240, the management table T1A is referred to. Further, when the data requested by the host 10 already exists in the cache memory 130 (a cache hit), the first virtualization storage device 100A transfers the data stored in the cache memory 130 to the host 10 without accessing the external volume 240.
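Putting S81 to S93 together, the transfer-source behavior of FIG. 14 can be sketched as below. The structures are assumed ones: volume carries the owner right value, cache is a simple dictionary, and AccessRejected models either the negative or the positive rejection described above.

```python
class AccessRejected(Exception):
    pass

def source_handle(volume, command, cache, external):
    if volume["owner"] == -1:                        # S82: NO - transferred volume
        volume["destage_requested"] = True           # S84: flush dirty data
        raise AccessRejected("volume transferred")   # S83: reject the request
    if command["type"] == "write":                   # S87: write command
        cache[command["lba"]] = command["data"]      # S88: store in cache 130
        return "write complete"                      # S89: report completion;
                                                     # S90/S91 destage it later
    if command["lba"] in cache:                      # cache hit: no external access
        return cache[command["lba"]]
    return external.read(command["lba"])             # S92/S93: read and transfer
```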
- FIG. 15 is a flowchart showing the details of S78 in FIG. 13. When the second virtualization storage device 100B, which is the transfer destination, receives a command from the host 10 (S101: YES), it analyzes the access target of the command. The second virtualization storage device 100B determines whether the access target of the host 10 is a logical volume 164 connected to an external volume 240 under the control of the second virtualization storage device 100B (S102); in other words, it determines whether the command is an access request relating to an external volume 240 for which it holds the owner right.
- When the second virtualization storage device 100B determines that the command is an access request relating to a volume for which it holds the owner right (S102: YES), it determines whether the destage processing performed by the first virtualization storage device 100A regarding the external volume 240 connected to that logical volume 164 is complete (S103); in other words, it determines whether a destage completion notification regarding the volume has been acquired from the first virtualization storage device 100A.
- When the second virtualization storage device 100B does not hold the owner right with respect to the access target of the host 10 (S102: NO), or when it holds the owner right but the destage processing at the transfer source is not complete (S103: NO), the second virtualization storage device 100B rejects the command processing (S104). This is done in order to maintain the consistency of the data of the transfer target volume.
- Contrarily, when the second virtualization storage device 100B holds the owner right regarding the access target volume of the host 10 (S102: YES) and the destage processing at the transfer source regarding that volume is complete (S103: YES), the second virtualization storage device 100B is able to perform normal access processing. The normal access processing performed by the second virtualization storage device 100B is the same as the normal access processing performed by the first virtualization storage device 100A.
- In other words, the second virtualization storage device 100B distinguishes the type of the command received from the host 10 (S105). When it is a write command, the second virtualization storage device 100B stores the write data received from the host 10 in the cache memory 130 (S106) and thereafter notifies the host 10 of the completion of writing (S107). The second virtualization storage device 100B then refers to the management table T1B, confirms the path to the external volume 240 (S108), and transfers the write data stored in the cache memory 130 to the external volume and writes it therein (S109).
- When it is a read command, the second virtualization storage device 100B reads the data requested by the host 10 from the external volume 240 (or the cache memory 130) (S110) and transfers this data to the host 10 (S111).
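The corresponding transfer-destination checks of FIG. 15 are sketched below under the same assumed structures (including the AccessRejected exception from the previous sketch); a command is processed normally only when this device holds the owner right and the transfer source has reported destage completion.

```python
def destination_handle(volume, command, cache, external):
    if volume["owner"] != 1:                            # S102: NO - no owner right
        raise AccessRejected("no owner right")          # S104
    if not volume.get("destage_complete"):              # S103: NO - source still
        raise AccessRejected("destage not complete")    # S104: flushing dirty data
    if command["type"] == "write":                      # S105: write command
        cache[command["lba"]] = command["data"]         # S106
        return "write complete"                         # S107; S108/S109 later
    return external.read(command["lba"])                # S110/S111: read command
```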
- The foregoing explanation concerned the example of newly introducing the second virtualization storage device 100B into the storage system. Next, a case of dispersing the load after the second virtualization storage device 100B has been introduced is explained.
- FIG. 16 is a flowchart showing a different example of transferring a volume between the respective virtualization storage devices 100A, 100B.
- The user is able to comprehend the operational status of the storage system based on the monitoring result of the monitoring unit 21. For example, when the user judges that the load of the first virtualization storage device 100A is heavy, the user may issue a designation via the management terminal 20 to transfer an external volume 240 under the control of the first virtualization storage device 100A to the second virtualization storage device 100B (S121). Further, based on the transfer designation from the management terminal 20, a path for accessing via the second virtualization storage device 100B is added to the path setting information T3 of the host 10.
- When the first virtualization storage device 100A receives the transfer designation from the management terminal 20, it changes the owner right of the external volume designated as the transfer target from “1” to “−1”, and notifies the second virtualization storage device 100B of this change (S122).
- When the second virtualization storage device 100B receives the notice from the first virtualization storage device 100A (S123), it sets “1” in the transfer status flag relating to the transfer target volume, updates the management table T1B (S124), and notifies the first virtualization storage device 100A of the completion of the setting of the transfer status flag (S125).
- When the first virtualization storage device 100A receives this notice from the second virtualization storage device 100B, it similarly sets “1” in the transfer status flag relating to the transfer target volume and updates the management table T1A (S126). The first virtualization storage device 100A then starts the destage processing of the dirty data relating to the transfer target volume (S127).
- If, before the completion of the destage processing, a command requesting access to the transfer target logical volume 164 is issued by the host 10 (S128), the first virtualization storage device 100A rejects the processing (S129).
- When the access processing is rejected by the first virtualization storage device 100A, the host 10 refers to the path setting information T3 and switches the path (S130). The explanation here concerns a case of switching from the primary path passing through the first virtualization storage device 100A to the alternate path passing through the second virtualization storage device 100B. After switching the path, the host 10 reissues the command (S131). This command may be a write command or a read command; for convenience of explanation, let it be assumed that a write command has been issued.
- When the second virtualization storage device 100B receives the write command from the host 10, it receives the write data transmitted from the host 10 after the write command and stores it in the cache memory 130 (S132). After storing the write data in the cache memory 130, the second virtualization storage device 100B reports the completion of writing to the host 10 (S133). The host 10 receives the processing completion notice from the second virtualization storage device 100B (S134).
- Meanwhile, when the destage processing performed by the first virtualization storage device 100A is complete (S135), the first virtualization storage device 100A notifies the second virtualization storage device 100B of the completion of the destage processing (S136). When the second virtualization storage device 100B receives this destage completion notice (S137), it resets the transfer status flag relating to the transfer target volume (S138). The transfer of the volume is thereby completed while the consistency of the volume is maintained. After the transfer of the volume is complete, if the host 10 issues another command (S139), the second virtualization storage device 100B performs normal access processing (S140).
- Incidentally, if the command issued at S131 is a read command, the second virtualization storage device 100B may reject the processing of the read command until the destage processing by the first virtualization storage device 100A is complete.
- FIG. 18 is an explanatory diagram schematically showing the situation of transferring a volume according to the present embodiment. Foremost, as shown in FIG. 18(a), let it be assumed that only the first virtualization storage device 100A is initially operating in the storage system. Under these circumstances, the first virtualization storage device 100A is using all of the external volumes 240.
- As shown in FIG. 18(b), the user decides on the introduction of the second virtualization storage device 100B based on the load status of the first virtualization storage device 100A, and adds the second virtualization storage device 100B to the storage system.
- As shown in FIG. 18(c), when the user designates the transfer of the volumes 240 of “#B” and “#C” via the management terminal 20, these volumes 240 are connected to the logical volume 164 of the second virtualization storage device 100B. More precisely, when the user designates a volume transfer regarding the logical volume 164 of the first virtualization storage device 100A, the external volumes 240 (#B, #C) connected to the transfer target logical volume 164 are reconnected to the logical volume 164 of the second virtualization storage device 100B. Thereby, at least a part of the load of the first virtualization storage device 100A is transferred to the second virtualization storage device 100B, and the bottleneck in the first virtualization storage device 100A can be resolved. As a result, the response performance and efficiency of the overall storage system can be improved.
- As described above, according to the present embodiment, a plurality of virtualization storage devices 100A, 100B can jointly use the external volumes 240. Accordingly, the load in the storage system can be dispersed, and the processing performance of the overall storage system can be improved.
- In the present embodiment, the external volume 240 can be transferred between the respective virtualization storage devices 100A, 100B without stopping the host 10. Therefore, the volume can be transferred online, without having to shut down the host 10, and usability improves.
- In the present embodiment, the user merely needs to make a designation via the management terminal 20 to transfer the external volume 240 between the respective virtualization storage devices 100A, 100B. Accordingly, in a configuration where the respective virtualization storage devices 100A, 100B share the external volume 240, the performance of the storage system can be improved with a relatively simple operation.
- In the present embodiment, the virtualization storage device 100A, which is the transfer source, is configured such that it can reject access requests from the host 10 until the destage processing relating to the transfer target external volume 240 is complete. Therefore, the volume can be transferred while the consistency of the data is maintained.
FIG. 19 . The present embodiment corresponds to a modified example of the foregoing first embodiment. In the present embodiment, the storage system autonomously disperses the load between the respectivevirtualization storage devices -
FIG. 19 is a flowchart of the transfer designation processing according to the present embodiment. This transfer designation processing, for example, can be executed with themanagement terminal 20. Themanagement terminal 20 acquires the performance information from the respectivevirtualization storage devices management terminal 20, based on each type of performance information, respectively calculates the loads LS1, LS2 of the respectivevirtualization storage devices - The
management terminal 20 compares the load LS1 of the firstvirtualization storage device 100A and the load LS2 of the secondvirtualization storage device 100B (S163). When the first load LS1 is greater than the second load LS2 (LS1>LS2), themanagement terminal 20 determines the logical volume (external volume) to the transferred from the firstvirtualization storage device 100A to the secondvirtualization storage device 100B (S164). Themanagement terminal 20, for instance, may select the volume of the highest load in the device. - The
management terminal 20 judges whether the transfer timing has arrived (S165), and, when the transfer timing has arrived (S165: YES), it defines the path information of the transfer destination (S166), and respectively issues a transfer designation to the respectivevirtualization storage devices host 10 may be pre-selected as the transfer timing. - Meanwhile, when the second load LS2 is equal to or greater than the first load LS1 (LS1≦LS2), the
management terminal 20 determines the volume to be transferred from the secondvirtualization storage device 100B to the firstvirtualization storage device 100A (S168). - The
management terminal 20, as described above, looks out for a prescribed transfer timing (S169: YES), defines the path of the transfer destination (S170), and respectively issues a transfer designation to the respectivevirtualization storage devices - The present embodiment configured as described above also yields the same effects as the foregoing embodiments. In addition, with the present embodiment, the load dispersion between the plurality of
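The loop of FIG. 19 can be condensed into the following sketch. The device objects, their load attributes and the three callables are assumptions; only the control flow, mirroring S161 to S171, comes from the flowchart.

```python
def designate_transfer(dev_a, dev_b, timing_arrived, define_path, issue_designation):
    ls1, ls2 = dev_a.load, dev_b.load                    # LS1, LS2 (S161/S162)
    if ls1 > ls2:                                        # S163
        source, dest = dev_a, dev_b                      # S164: unload dev_a
    else:                                                # LS1 <= LS2
        source, dest = dev_b, dev_a                      # S168: unload dev_b
    volume = max(source.volumes, key=lambda v: v.load)   # highest-load volume
    if timing_arrived():                                 # S165/S169: e.g. low host I/O
        define_path(dest, volume)                        # S166/S170
        issue_designation(source, dest, volume)          # S167/S171
```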
- The present embodiment configured as described above also yields the same effects as the foregoing embodiments. In addition, with the present embodiment, the load dispersion among the plurality of virtualization storage devices 100A, 100B sharing the external volume 240 can be performed autonomously.
- For example, in each of the foregoing embodiments, although a case was mainly explained where a plurality of virtualization storage devices coexists, the present invention is not limited thereto, the configuration may also be such that all external volumes are transferred to the second virtualization storage device, and the first virtualization storage device may be entirely replaced with the second virtualization storage device.
- Further, in each of the foregoing embodiments, although a case was mainly explained where the management terminal is configured from a separate computer, the present invention is not limited thereto, and the configuration may be such that the function of the management terminal is built in one of the virtualization storage devices.
- Moreover, in each of the foregoing embodiments, although a case was mainly explained where two virtualization storage devices are used, the present invention is not limited thereto, and the present invention may also be applied to cases of using three or more virtualization storage devices.
- Further, in each of the foregoing embodiments, although a case was mainly explained where the virtualization storage devices are operated in an asynchronous transfer mode, these may also be operated in a synchronous transfer mode. When operating the virtualization storage devices in a synchronous transfer mode, generally speaking, since the memory contents of the external volume will always be updated to be the latest contents, such memory contents may be transferred between the respective virtualization storage devices quickly without having to wait for the completion of the destage processing at the transfer source.
- Incidentally, when transferring a volume, the
logical volume 164 of the transfer source and the logical volume 164 of the transfer destination will be set to be of the same size.
Claims (17)
1. (canceled)
2. A storage system comprising:
at least one physical storage device providing a logical volume;
a first virtualization device coupled to a computer and being configured to provide a first logical volume with a first identification number to the computer, wherein the first logical volume includes a first virtual storage area mapped to all or a part of a storage area of the logical volume, and
a second virtualization device coupled to the computer and the first virtualization device,
wherein the second virtualization device is configured to, in response to a request from the computer:
obtain information of the first logical volume sent from the first virtualization device, and
provide a second logical volume to the computer, wherein the second logical volume includes a second virtual storage area mapped to the all or the part of the storage area, which is also mapped to the first virtual storage area in the first logical volume, of the logical volume,
wherein the second logical volume is provided to the computer with the first identification number included in the information obtained from the first virtualization device, but a port number of the second logical volume provided to the computer is different from a port number of the first logical volume provided to the computer.
3. A storage system according to claim 2 , wherein the first logical volume and the second logical volume are volumes of access targets by the computer.
4. A storage system according to claim 2 , wherein each of the first and second identification numbers is a logical unit number for identifying volumes of access targets by the computer.
5. A storage system according to claim 2 ,
wherein the computer includes a plurality of host bus adapters,
wherein the first virtualization device includes the first port,
wherein the second virtualization device includes the second port,
wherein the first path is identified by the combination of a first host bus adapter number for identifying at least one of the host bus adapters, the first port number, and the first identification number for identifying the first logical volume, and
wherein the second path is identified by the combination of a second host bus adapter number for identifying at least one of the host bus adapters, the second port number, and the first identification number for identifying the first logical volume.
6. A storage system according to claim 2 , wherein the computer is configured to switch from the first path to the second path in a case where a request to the first logical volume through the first path is not processed by the first virtualization device.
7. A storage system according to claim 2 ,
wherein information regarding an owner right, which indicates authority to use the logical volume, can be transferred by the first virtualization device to the second virtualization device, while both the first path and the second path are defined.
8. A storage system according to claim 2 ,
wherein the storage system further includes an external storage system coupled to the first virtualization device and the second virtualization device through a storage area network, and
wherein the logical volume is in the external storage system.
9. A storage system according to claim 2 ,
wherein the first virtualization device holds first mapping information indicating mapping between the first logical volume and the all or the part of the storage area in the logical volume, and
wherein the second virtualization device holds second mapping information indicating mapping between the second logical volume and the all or the part of the storage area in the logical volume.
10. In a storage system comprising at least one physical storage device providing a logical volume; a first virtualization device coupled to a computer and being configured to provide a first logical volume with a first identification number to the computer, wherein the first logical volume includes a first virtual storage area mapped to all or a part of a storage area of the logical volume, and a second virtualization device coupled to the computer and the first virtualization device,
wherein the second virtualization device is configured to perform a method, in response to a request from the computer, the method comprising:
obtaining information of the first logical volume sent from the first virtualization device,
providing a second logical volume to the computer, wherein the second logical volume includes a second virtual storage area mapped to the all or the part of the storage area, which is also mapped to the first virtual storage area in the first logical volume, of the logical volume, and
providing the second logical volume to the computer with the first identification number included in the information obtained from the first virtualization device, but a port number of the second logical volume provided to the computer is different from a port number of the first logical volume provided to the computer.
11. A method according to claim 10 , wherein the first logical volume and the second logical volume are volumes of access targets by the computer.
12. A method according to claim 10 , wherein each of the first and second identification numbers is a logical unit number for identifying volumes of access targets by the computer.
13. A method according to claim 10 ,
wherein the computer includes a plurality of host bus adapters,
wherein the first virtualization device includes the first port,
wherein the second virtualization device includes the second port,
wherein the first path is identified by the combination of a first host bus adapter number for identifying at least one of the host bus adapters, the first port number, and the first identification number for identifying the first logical volume, and
wherein the second path is identified by the combination of a second host bus adapter number for identifying at least one of the host bus adapters, the second port number, and the first identification number for identifying the first logical volume.
14. A method according to claim 10 , wherein the computer is configured to switch from the first path to the second path in a case where a request to the first logical volume through the first path is not processed by the first virtualization device.
15. A method according to claim 10 ,
wherein information regarding an owner right, which indicates authority to use the logical volume, can be transferred by the first virtualization device to the second virtualization device, while both the first path and the second path are defined.
16. A method according to claim 10 ,
wherein the storage system further includes an external storage system coupled to the first virtualization device and the second virtualization device through a storage area network, and
wherein the logical volume is in the external storage system.
17. A method according to claim 10 ,
wherein the first virtualization device holds first mapping information indicating mapping between the first logical volume and the all or the part of the storage area in the logical volume, and
wherein the second virtualization device holds second mapping information indicating mapping between the second logical volume and the all or the part of the storage area in the logical volume.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/912,297 US20130275690A1 (en) | 2005-05-24 | 2013-06-07 | Storage system and operation method of storage system |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2005150868A JP5057656B2 (en) | 2005-05-24 | 2005-05-24 | Storage system and storage system operation method |
JP2005-150868 | 2005-05-24 | ||
US11/181,877 US20060271758A1 (en) | 2005-05-24 | 2005-07-15 | Storage system and operation method of storage system |
US12/367,706 US8180979B2 (en) | 2005-05-24 | 2009-02-09 | Storage system and operation method of storage system |
US13/222,569 US8484425B2 (en) | 2005-05-24 | 2011-08-31 | Storage system and operation method of storage system including first and second virtualization devices |
US13/912,297 US20130275690A1 (en) | 2005-05-24 | 2013-06-07 | Storage system and operation method of storage system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/222,569 Continuation US8484425B2 (en) | 2005-05-24 | 2011-08-31 | Storage system and operation method of storage system including first and second virtualization devices |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130275690A1 true US20130275690A1 (en) | 2013-10-17 |
Family
ID=36450094
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/181,877 Abandoned US20060271758A1 (en) | 2005-05-24 | 2005-07-15 | Storage system and operation method of storage system |
US12/367,706 Expired - Fee Related US8180979B2 (en) | 2005-05-24 | 2009-02-09 | Storage system and operation method of storage system |
US12/830,865 Expired - Fee Related US7953942B2 (en) | 2005-05-24 | 2010-07-06 | Storage system and operation method of storage system |
US13/222,569 Active US8484425B2 (en) | 2005-05-24 | 2011-08-31 | Storage system and operation method of storage system including first and second virtualization devices |
US13/912,297 Abandoned US20130275690A1 (en) | 2005-05-24 | 2013-06-07 | Storage system and operation method of storage system |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/181,877 Abandoned US20060271758A1 (en) | 2005-05-24 | 2005-07-15 | Storage system and operation method of storage system |
US12/367,706 Expired - Fee Related US8180979B2 (en) | 2005-05-24 | 2009-02-09 | Storage system and operation method of storage system |
US12/830,865 Expired - Fee Related US7953942B2 (en) | 2005-05-24 | 2010-07-06 | Storage system and operation method of storage system |
US13/222,569 Active US8484425B2 (en) | 2005-05-24 | 2011-08-31 | Storage system and operation method of storage system including first and second virtualization devices |
Country Status (4)
Country | Link |
---|---|
US (5) | US20060271758A1 (en) |
EP (3) | EP1727033A1 (en) |
JP (1) | JP5057656B2 (en) |
CN (2) | CN101271382B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160070478A1 (en) * | 2014-09-10 | 2016-03-10 | Fujitsu Limited | Storage control device and storage control method |
WO2016209313A1 (en) * | 2015-06-23 | 2016-12-29 | Hewlett-Packard Development Company, L.P. | Task execution in a storage area network (san) |
Families Citing this family (82)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8032701B1 (en) | 2004-03-26 | 2011-10-04 | Emc Corporation | System and method for managing provisioning of storage resources in a network with virtualization of resources in such a network |
US8281022B1 (en) | 2000-06-30 | 2012-10-02 | Emc Corporation | Method and apparatus for implementing high-performance, scaleable data processing and storage systems |
US8219681B1 (en) | 2004-03-26 | 2012-07-10 | Emc Corporation | System and method for managing provisioning of storage resources in a network with virtualization of resources in such a network |
US7770059B1 (en) | 2004-03-26 | 2010-08-03 | Emc Corporation | Failure protection in an environment including virtualization of networked storage resources |
US8627005B1 (en) | 2004-03-26 | 2014-01-07 | Emc Corporation | System and method for virtualization of networked storage resources |
US7818517B1 (en) | 2004-03-26 | 2010-10-19 | Emc Corporation | Architecture for virtualization of networked storage resources |
US8140665B2 (en) * | 2005-08-19 | 2012-03-20 | Opnet Technologies, Inc. | Managing captured network traffic data |
JP4842593B2 (en) * | 2005-09-05 | 2011-12-21 | 株式会社日立製作所 | Device control takeover method for storage virtualization apparatus |
US7702851B2 (en) | 2005-09-20 | 2010-04-20 | Hitachi, Ltd. | Logical volume transfer method and storage network system |
JP2007094578A (en) * | 2005-09-27 | 2007-04-12 | Fujitsu Ltd | Storage system and its component replacement processing method |
JP2007280089A (en) * | 2006-04-07 | 2007-10-25 | Hitachi Ltd | Transfer method for capacity expansion volume |
JP2007280111A (en) * | 2006-04-07 | 2007-10-25 | Hitachi Ltd | Storage system and performance tuning method thereof |
JP4963892B2 (en) | 2006-08-02 | 2012-06-27 | 株式会社日立製作所 | Storage system control device that can be a component of a virtual storage system |
JP4331742B2 (en) | 2006-10-25 | 2009-09-16 | 株式会社日立製作所 | Computer system, computer and method for managing performance based on I / O allocation ratio |
JP5057366B2 (en) | 2006-10-30 | 2012-10-24 | 株式会社日立製作所 | Information system and information system data transfer method |
JP4897499B2 (en) | 2007-01-19 | 2012-03-14 | 株式会社日立製作所 | Storage system or storage migration method |
JP2008217364A (en) * | 2007-03-02 | 2008-09-18 | Nec Corp | File input/output regulation control system, method, and program |
US7877556B2 (en) * | 2007-03-30 | 2011-01-25 | Hitachi, Ltd. | Method and apparatus for a unified storage system |
US8990527B1 (en) * | 2007-06-29 | 2015-03-24 | Emc Corporation | Data migration with source device reuse |
JP2009026255A (en) * | 2007-07-24 | 2009-02-05 | Hitachi Ltd | Data migration method, data migration system, and data migration program |
JP4958673B2 (en) * | 2007-07-26 | 2012-06-20 | 株式会社日立製作所 | Storage system and management method thereof |
US20090089498A1 (en) * | 2007-10-02 | 2009-04-02 | Michael Cameron Hay | Transparently migrating ongoing I/O to virtualized storage |
JP2009146106A (en) | 2007-12-13 | 2009-07-02 | Hitachi Ltd | Storage system having function which migrates virtual communication port which is added to physical communication port |
WO2009081953A1 (en) * | 2007-12-26 | 2009-07-02 | Canon Anelva Corporation | Sputtering apparatus, sputter film forming method, and analyzer |
JP4674242B2 (en) * | 2008-02-05 | 2011-04-20 | 富士通株式会社 | Virtualization switch, computer system, and data copy method |
US20090240880A1 (en) | 2008-03-21 | 2009-09-24 | Hitachi, Ltd. | High availability and low capacity thin provisioning |
US8347046B2 (en) * | 2008-04-15 | 2013-01-01 | Microsoft Corporation | Policy framework to treat data |
US8156297B2 (en) * | 2008-04-15 | 2012-04-10 | Microsoft Corporation | Smart device recordation |
US8082411B1 (en) * | 2008-04-30 | 2011-12-20 | Netapp, Inc. | Method and system for logical unit substitution |
US8032730B2 (en) * | 2008-05-15 | 2011-10-04 | Hitachi, Ltd. | Method and apparatus for I/O priority control in storage systems |
JP5272185B2 (en) * | 2008-09-26 | 2013-08-28 | 株式会社日立製作所 | Computer system and storage system |
WO2010082452A1 (en) | 2009-01-13 | 2010-07-22 | パナソニック株式会社 | Control device and control method for elastic actuator and control program |
CN101877136A (en) * | 2009-04-30 | 2010-11-03 | 国际商业机器公司 | Method, equipment and system for processing graphic object |
US8499098B2 (en) * | 2009-10-09 | 2013-07-30 | Hitachi, Ltd. | Storage system and storage system communication path management method |
US8639769B2 (en) | 2009-12-18 | 2014-01-28 | International Business Machines Corporation | Handling of data transfer in a LAN-free environment |
CN101788889B (en) * | 2010-03-03 | 2011-08-10 | 浪潮(北京)电子信息产业有限公司 | Memory virtualization system and method |
JP5551245B2 (en) * | 2010-03-19 | 2014-07-16 | 株式会社日立製作所 | File sharing system, file processing method, and program |
WO2011125106A1 (en) * | 2010-04-05 | 2011-10-13 | Hitachi, Ltd. | Storage system configured from plurality of storage modules and method for switching coupling configuration of storage modules |
JP5065434B2 (en) * | 2010-04-06 | 2012-10-31 | 株式会社日立製作所 | Management method and management apparatus |
US8713288B2 (en) | 2010-06-17 | 2014-04-29 | Hitachi, Ltd. | Storage system comprising multiple microprocessors and method for sharing processing in this storage system |
WO2012007999A1 (en) | 2010-07-16 | 2012-01-19 | Hitachi, Ltd. | Storage control apparatus and storage system comprising multiple storage control apparatuses |
JP5602572B2 (en) * | 2010-10-06 | 2014-10-08 | 富士通株式会社 | Storage device, data copying method, and storage system |
US8627033B2 (en) * | 2010-12-20 | 2014-01-07 | Microsoft Corporation | Storage device migration and redirection |
JP5512833B2 (en) * | 2010-12-22 | 2014-06-04 | 株式会社日立製作所 | Storage system including a plurality of storage devices having both a storage virtualization function and a capacity virtualization function |
IL210169A0 (en) | 2010-12-22 | 2011-03-31 | Yehuda Binder | System and method for routing-based internet security |
WO2012093417A1 (en) | 2011-01-05 | 2012-07-12 | Hitachi, Ltd. | Storage system comprising multiple storage control apparatus units for processing an i/o command from a host |
US9021198B1 (en) * | 2011-01-20 | 2015-04-28 | Commvault Systems, Inc. | System and method for sharing SAN storage |
JP5455945B2 (en) * | 2011-02-14 | 2014-03-26 | 株式会社東芝 | Arbitration device, storage device, information processing device, and program |
WO2012114384A1 (en) * | 2011-02-25 | 2012-08-30 | Hitachi, Ltd. | Storage system and method of controlling the storage system |
WO2012172601A1 (en) | 2011-06-14 | 2012-12-20 | Hitachi, Ltd. | Storage system comprising multiple storage control apparatus |
US9740435B2 (en) * | 2012-02-27 | 2017-08-22 | Fujifilm North America Corporation | Methods for managing content stored in cloud-based storages |
WO2013160933A1 (en) * | 2012-04-23 | 2013-10-31 | Hitachi, Ltd. | Computer system and virtual server migration control method for computer system |
US11055124B1 (en) * | 2012-09-30 | 2021-07-06 | EMC IP Holding Company LLC | Centralized storage provisioning and management across multiple service providers |
WO2014054070A1 (en) * | 2012-10-03 | 2014-04-10 | Hitachi, Ltd. | Management system for managing a physical storage system, method of determining a resource migration destination of a physical storage system, and storage medium |
DE102012110164B4 (en) * | 2012-10-24 | 2021-08-19 | Fujitsu Ltd. | Computer arrangement |
JP6357780B2 (en) * | 2013-02-06 | 2018-07-18 | 株式会社リコー | Network system and information notification method |
US20140281673A1 (en) * | 2013-03-15 | 2014-09-18 | Unisys Corporation | High availability server configuration |
JP6193373B2 (en) | 2013-03-18 | 2017-09-06 | 株式会社日立製作所 | Hybrid storage system and storage control method |
US9250809B2 (en) * | 2013-03-18 | 2016-02-02 | Hitachi, Ltd. | Compound storage system and storage control method to configure change associated with an owner right to set the configuration change |
JP6186787B2 (en) * | 2013-03-25 | 2017-08-30 | Fujitsu Limited | Data transfer device, data transfer system, data transfer method and program |
JP2014215666A (en) * | 2013-04-23 | 2014-11-17 | Fujitsu Limited | Control system, control device, and control program |
JP6209863B2 (en) * | 2013-05-27 | 2017-10-11 | Fujitsu Limited | Storage control device, storage control method, and storage control program |
WO2015028090A1 (en) * | 2013-08-30 | 2015-03-05 | Nokia Solutions And Networks Oy | Methods and apparatus |
US10048895B2 (en) | 2013-12-06 | 2018-08-14 | Concurrent Ventures, LLC | System and method for dynamically load balancing storage media devices based on a mid-range performance level |
US8954615B1 (en) * | 2013-12-06 | 2015-02-10 | Concurrent Ventures, LLC | System, method and article of manufacture for monitoring, controlling and improving storage media system performance based on temperature ranges |
US9436404B2 (en) | 2013-12-06 | 2016-09-06 | Concurrent Ventures, LLC | System and method for dynamically load balancing across storage media devices having fast access rates |
US8954614B1 (en) * | 2013-12-06 | 2015-02-10 | Concurrent Ventures, LLC | System, method and article of manufacture for monitoring, controlling and improving storage media system performance based on temperature |
US8984172B1 (en) * | 2013-12-06 | 2015-03-17 | Concurrent Ventures, LLC | System, method and article of manufacture for monitoring, controlling and improving storage media system performance based on storage media device fill percentage |
US9274722B2 (en) * | 2013-12-06 | 2016-03-01 | Concurrent Ventures, LLC | System, method and article of manufacture for monitoring, controlling and improving storage media system performance |
US10235096B2 (en) | 2013-12-06 | 2019-03-19 | Concurrent Ventures, LLC | System and method for dynamically load balancing storage media devices based on an average or discounted average sustained performance level |
JP2015184895A (en) * | 2014-03-24 | 2015-10-22 | Fujitsu Limited | Storage control device, storage device, and storage control program |
JP6336813B2 (en) * | 2014-04-16 | 2018-06-06 | Fujitsu Limited | Storage virtualization apparatus, storage virtualization apparatus control method, and control program |
US9804789B2 (en) * | 2015-06-24 | 2017-10-31 | Vmware, Inc. | Methods and apparatus to apply a modularized virtualization topology using virtual hard disks |
US10101915B2 (en) | 2015-06-24 | 2018-10-16 | Vmware, Inc. | Methods and apparatus to manage inter-virtual disk relations in a modularized virtualization topology using virtual hard disks |
US9928010B2 (en) | 2015-06-24 | 2018-03-27 | Vmware, Inc. | Methods and apparatus to re-direct detected access requests in a modularized virtualization topology using virtual hard disks |
US10126983B2 (en) | 2015-06-24 | 2018-11-13 | Vmware, Inc. | Methods and apparatus to enforce life cycle rules in a modularized virtualization topology using virtual hard disks |
CN107038333B (en) * | 2016-01-21 | 2021-10-26 | Horiba, Ltd. | Management device for measuring equipment |
US10437477B2 (en) * | 2017-07-20 | 2019-10-08 | Dell Products, Lp | System and method to detect storage controller workloads and to dynamically split a backplane |
JP6878369B2 (en) * | 2018-09-03 | 2021-05-26 | Hitachi, Ltd. | Volume allocation management device, volume allocation management method, and volume allocation management program |
JP2021047786A (en) | 2019-09-20 | 2021-03-25 | Fujitsu Limited | Storage controller, storage device, and determination program |
US20230370424A1 (en) * | 2022-05-13 | 2023-11-16 | Cisco Technology, Inc. | Optimal data plane security & connectivity for secured connections |
US20240231620A9 (en) * | 2022-10-21 | 2024-07-11 | Dell Products L.P. | Virtual container storage interface controller |
Family Cites Families (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3021955B2 (en) * | 1992-05-13 | 2000-03-15 | Fujitsu Limited | Duplicate file system operation method |
JPH09197367A (en) * | 1996-01-12 | 1997-07-31 | Sony Corp | Plasma address display device |
US6886035B2 (en) * | 1996-08-02 | 2005-04-26 | Hewlett-Packard Development Company, L.P. | Dynamic load balancing of a network of client and server computers |
JP3410010B2 (en) | 1997-12-24 | 2003-05-26 | Hitachi, Ltd. | Subsystem migration method and information processing system |
US6067545A (en) * | 1997-08-01 | 2000-05-23 | Hewlett-Packard Company | Resource rebalancing in networked computer systems |
US6101508A (en) * | 1997-08-01 | 2000-08-08 | Hewlett-Packard Company | Clustered file management for network resources |
US5889520A (en) * | 1997-11-13 | 1999-03-30 | International Business Machines Corporation | Topological view of a multi-tier network |
JP3726484B2 (en) | 1998-04-10 | 2005-12-14 | Hitachi, Ltd. | Storage subsystem |
EP1095373A2 (en) * | 1998-05-15 | 2001-05-02 | Storage Technology Corporation | Caching method for data blocks of variable size |
JP4412685B2 (en) | 1998-09-28 | 2010-02-10 | Hitachi, Ltd. | Storage controller and method of handling data storage system using the same |
JP2000316132A (en) | 1999-04-30 | 2000-11-14 | Matsushita Electric Ind Co Ltd | Video server |
US6766430B2 (en) * | 2000-07-06 | 2004-07-20 | Hitachi, Ltd. | Data reallocation among storage systems |
US6457109B1 (en) * | 2000-08-18 | 2002-09-24 | Storage Technology Corporation | Method and apparatus for copying data from one storage system to another storage system |
US6675268B1 (en) | 2000-12-11 | 2004-01-06 | Lsi Logic Corporation | Method and apparatus for handling transfers of data volumes between controllers in a storage environment having multiple paths to the data volumes |
US20030079018A1 (en) * | 2001-09-28 | 2003-04-24 | Lolayekar Santosh C. | Load balancing in a storage network |
JP2005505819A (en) | 2001-09-28 | 2005-02-24 | Maranti Networks Incorporated | Packet classification in storage systems |
US6976134B1 (en) * | 2001-09-28 | 2005-12-13 | Emc Corporation | Pooling and provisioning storage resources in a storage network |
US7707304B1 (en) | 2001-09-28 | 2010-04-27 | Emc Corporation | Storage switch for storage area network |
US7185062B2 (en) * | 2001-09-28 | 2007-02-27 | Emc Corporation | Switch-based storage services |
US7404000B2 (en) * | 2001-09-28 | 2008-07-22 | Emc Corporation | Protocol translation in a storage system |
US7421509B2 (en) * | 2001-09-28 | 2008-09-02 | Emc Corporation | Enforcing quality of service in a storage network |
US7864758B1 (en) | 2001-09-28 | 2011-01-04 | Emc Corporation | Virtualization in a storage system |
JP4220174B2 (en) | 2002-04-08 | 2009-02-04 | Hitachi, Ltd. | Storage system content update method |
JP2003316522A (en) | 2002-04-26 | 2003-11-07 | Hitachi Ltd | Computer system and method for controlling the same system |
JP4704659B2 (en) * | 2002-04-26 | 2011-06-15 | Hitachi, Ltd. | Storage system control method and storage control device |
US7103727B2 (en) | 2002-07-30 | 2006-09-05 | Hitachi, Ltd. | Storage system for multi-site remote copy |
JP4214832B2 (en) * | 2002-07-30 | 2009-01-28 | Hitachi, Ltd. | Storage system |
JP2004102374A (en) | 2002-09-05 | 2004-04-02 | Hitachi Ltd | Information processing system having data transition device |
US7774466B2 (en) * | 2002-10-17 | 2010-08-10 | Intel Corporation | Methods and apparatus for load balancing storage nodes in a distributed storage area network system |
JP2004178253A (en) * | 2002-11-27 | 2004-06-24 | Hitachi Ltd | Storage device controller and method for controlling storage device controller |
JP2004220450A (en) | 2003-01-16 | 2004-08-05 | Hitachi Ltd | Storage device, its introduction method and its introduction program |
JP4409181B2 (en) * | 2003-01-31 | 2010-02-03 | Hitachi, Ltd. | Screen data generation method, computer, program |
JP2004302751A (en) * | 2003-03-31 | 2004-10-28 | Hitachi Ltd | Method for managing performance of computer system and computer system managing performance of storage device |
JP4462852B2 (en) * | 2003-06-23 | 2010-05-12 | Hitachi, Ltd. | Storage system and storage system connection method |
JP2005018161A (en) | 2003-06-23 | 2005-01-20 | Adtex:Kk | Storage system, control method and program |
JP2005055963A (en) * | 2003-08-05 | 2005-03-03 | Hitachi Ltd | Volume control method, program performing it, and storage device |
JP4307202B2 (en) | 2003-09-29 | 2009-08-05 | Hitachi, Ltd. | Storage system and storage control device |
US20050257014A1 (en) * | 2004-05-11 | 2005-11-17 | Nobuhiro Maki | Computer system and a management method of a computer system |
JP3781378B2 (en) * | 2005-01-04 | 2006-05-31 | Hitachi, Ltd. | Storage subsystem |
US7337350B2 (en) | 2005-02-09 | 2008-02-26 | Hitachi, Ltd. | Clustered storage system with external storage systems |
2005
- 2005-05-24 JP JP2005150868A patent/JP5057656B2/en not_active Expired - Fee Related
- 2005-07-15 US US11/181,877 patent/US20060271758A1/en not_active Abandoned
- 2005-12-07 EP EP05257515A patent/EP1727033A1/en not_active Ceased
- 2005-12-07 EP EP10008469.8A patent/EP2246777B1/en not_active Ceased
- 2005-12-07 EP EP15180033.1A patent/EP2975513B1/en not_active Ceased
2006
- 2006-03-03 CN CN200810095344.XA patent/CN101271382B/en active Active
- 2006-03-03 CN CNB2006100568049A patent/CN100395694C/en active Active
2009
- 2009-02-09 US US12/367,706 patent/US8180979B2/en not_active Expired - Fee Related
2010
- 2010-07-06 US US12/830,865 patent/US7953942B2/en not_active Expired - Fee Related
2011
- 2011-08-31 US US13/222,569 patent/US8484425B2/en active Active
2013
- 2013-06-07 US US13/912,297 patent/US20130275690A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6732104B1 (en) * | 2001-06-06 | 2004-05-04 | Lsi Logic Corporation | Uniform routing of storage access requests through redundant array controllers |
US20030065780A1 (en) * | 2001-06-28 | 2003-04-03 | Maurer Charles F. | Data storage system having data restore by swapping logical units |
US20030101317A1 (en) * | 2001-11-28 | 2003-05-29 | Hitachi, Ltd. | Disk array system capable of taking over volumes between controllers |
US20030229645A1 (en) * | 2002-06-06 | 2003-12-11 | Hitachi, Ltd. | Data mapping management apparatus |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160070478A1 (en) * | 2014-09-10 | 2016-03-10 | Fujitsu Limited | Storage control device and storage control method |
JP2016057872A (en) * | 2014-09-10 | 2016-04-21 | Fujitsu Limited | Storage control device and control program |
WO2016209313A1 (en) * | 2015-06-23 | 2016-12-29 | Hewlett-Packard Development Company, L.P. | Task execution in a storage area network (SAN) |
Also Published As
Publication number | Publication date |
---|---|
US20060271758A1 (en) | 2006-11-30 |
EP1727033A1 (en) | 2006-11-29 |
EP2246777A3 (en) | 2010-12-22 |
EP2246777B1 (en) | 2015-10-28 |
JP5057656B2 (en) | 2012-10-24 |
US20110314250A1 (en) | 2011-12-22 |
CN100395694C (en) | 2008-06-18 |
US20090150608A1 (en) | 2009-06-11 |
JP2006330895A (en) | 2006-12-07 |
US20100274963A1 (en) | 2010-10-28 |
US8180979B2 (en) | 2012-05-15 |
US8484425B2 (en) | 2013-07-09 |
EP2975513B1 (en) | 2018-10-10 |
CN101271382B (en) | 2015-06-10 |
US7953942B2 (en) | 2011-05-31 |
EP2975513A1 (en) | 2016-01-20 |
CN1869914A (en) | 2006-11-29 |
EP2246777A2 (en) | 2010-11-03 |
CN101271382A (en) | 2008-09-24 |
Similar Documents
Publication | Title |
---|---|
US8484425B2 (en) | Storage system and operation method of storage system including first and second virtualization devices |
CN102209952B (en) | Storage system and method for operating storage system |
US8683157B2 (en) | Storage system and virtualization method |
US7673107B2 (en) | Storage system and storage control device |
US7603507B2 (en) | Storage system and method of storage system path control |
US9619171B2 (en) | Storage system and virtualization method |
US7660946B2 (en) | Storage control system and storage control method |
US20070277011A1 (en) | Storage system and data management method |
EP1837751A2 (en) | Storage system, storage extent release method and storage apparatus |
US20080126437A1 (en) | File sharing system, file sharing device and file sharing volume migration method |
US20100070731A1 (en) | Storage system having allocation-on-use volume and power saving function |
US20060248306A1 (en) | Storage apparatus and storage system |
JP2006184949A (en) | Storage control system |
JP2005250925A (en) | Memory control system |
JP2009129261A (en) | Storage system and method of searching for connection path of storage system to external volume |
JP5335848B2 (en) | Storage system and storage system operation method |
US7424572B2 (en) | Storage device system interfacing open-system host computer input/output interfaces |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |