US20060271758A1 - Storage system and operation method of storage system - Google Patents


Info

Publication number
US20060271758A1
US20060271758A1
Authority
US
United States
Prior art keywords
storage device
connection source
source storage
logical volume
connection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/181,877
Other languages
English (en)
Inventor
Masataka Innan
Akira Murotani
Akinobu Shimada
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Assigned to HITACHI, LTD. reassignment HITACHI, LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INNAN, MASATAKA, MUROTANI, AKIRA, SHIMADA, AKINOBU
Publication of US20060271758A1
Priority to US12/367,706 (US8180979B2)
Priority to US12/830,865 (US7953942B2)
Priority to US13/222,569 (US8484425B2)
Priority to US13/912,297 (US20130275690A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0668: Interfaces adopting a particular infrastructure
    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 3/0602: Interfaces specifically adapted to achieve a particular effect
    • G06F 3/0604: Improving or facilitating administration, e.g. storage management
    • G06F 3/0605: Improving or facilitating administration by facilitating the interaction with a user or administrator
    • G06F 3/061: Improving I/O performance
    • G06F 3/0628: Interfaces making use of a particular technique
    • G06F 3/0629: Configuration or reconfiguration of storage systems
    • G06F 3/0635: Configuration or reconfiguration by changing the path, e.g. traffic rerouting, path reconfiguration
    • G06F 3/0637: Permissions
    • G06F 3/0646: Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/0647: Migration mechanisms
    • G06F 3/0662: Virtualisation aspects
    • G06F 3/0665: Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes

Definitions

  • the present invention relates to a storage system and an operation method of a storage system.
  • This storage system is configured by including a storage device such as a disk array device.
  • a storage device is configured by disposing a plurality of memory apparatuses in an array to provide a memory area based on RAID (Redundant Array of Inexpensive Disks).
  • At least one or more logical volumes are formed on a physical memory area provided by the memory apparatus group, and this logical volume is provided to a host computer (hereinafter abbreviated as “host”). By transmitting a write command or read command, the host is able to write and read data into and from the logical volume.
  • Data to be managed by companies and others is increasing daily.
  • companies and others for example, equip the storage system with a new storage device to expand the storage system.
  • Two methods can be considered for introducing a new storage device to the storage system.
  • One method is to replace the old storage device with a new storage device.
  • Another method is to make the old storage device and new storage device coexist.
  • the present applicant has proposed technology of connecting a host and a first storage device and connecting the first storage device and a second storage device so that the first storage device will take over and process the access request from the host (Japanese Patent Laid-Open Publication No. 2004-005370).
  • the first storage device will also receive and process commands targeting the second storage device. If necessary, the first storage device issues a command to the second storage device, receives the processing result thereof, and transmits this to the host.
  • the performance of the storage system is improved by making the first storage device and second storage device coexist without wasting any memory resource. Nevertheless, even with this kind of reinforced storage system, the processing performance may deteriorate during the prolonged operation thereof.
  • the first storage device may be replaced with a different high-performance storage device, or a separate first storage device may be added to the existing first storage device.
  • the addition or replacement of the first storage device, however, cannot be conducted as simply as the introduction of the first storage device described in the foregoing document. This is because the first storage device is serially connected to the second storage device and uses the memory resource of the second storage device, and the configuration of the storage system is already complicated. The first storage device therefore cannot be simply added or replaced by focusing attention on the first storage device alone.
  • the present invention was devised in view of the foregoing problems, and an object of the present invention is to provide a storage system and an operation method of a storage system configured by hierarchizing a plurality of storage devices for improving the processing performance thereof relatively easily. Another object of the present invention is to provide a storage system and an operation method of a storage system for improving the processing performance by enabling the shared use of one or a plurality of connection destination storage devices by a plurality of connection source storage devices. Other objects of the present invention will become clear from the detailed description of the preferred embodiments described later.
  • the storage system has a plurality of connection source storage devices capable of respectively providing a logical volume to a host device; a connection destination storage device respectively connected to each of the connection source storage devices and having a separate logical volume; and a direction unit for directing the connection destination of the separate logical volume.
  • each of the connection source storage devices is configured by respectively having: a management information memory unit for storing management information for managing the separate logical volume; and a control unit for connecting the logical volume and the separate logical volume via an intermediate volume based on the management information stored in the management information memory unit; wherein the connection destination of the separate logical volume can be switched among each of the connection source storage devices based on the designation from the direction unit.
  • the logical volume of the connection source storage device can be connected to a separate logical volume of the connection destination storage device via an intermediate volume. This connection may be made based on the management information stored in the management information memory unit.
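As an editorial illustration only (not part of the patent), the management-information-driven connection described above might be modeled as a small mapping table; the class and identifier names (VolumeMapper, LDEV-1, V-VOL-1, EXT-3A) are invented for this sketch:

```python
class VolumeMapper:
    """Sketch of the first management table: each logical volume of a
    connection source storage device is tied to an external volume of the
    connection destination storage device through an intermediate volume."""

    def __init__(self):
        # logical volume id -> (intermediate volume id, external volume id)
        self.table = {}

    def connect(self, ldev_id, vvol_id, external_id):
        """Connect a logical volume to an external volume via an intermediate volume."""
        self.table[ldev_id] = (vvol_id, external_id)

    def resolve(self, ldev_id):
        """Return the external volume backing a logical volume, if any."""
        entry = self.table.get(ldev_id)
        return entry[1] if entry else None


mapper = VolumeMapper()
mapper.connect(ldev_id="LDEV-1", vvol_id="V-VOL-1", external_id="EXT-3A")
print(mapper.resolve("LDEV-1"))  # EXT-3A
```

The point of the sketch is that host access to "LDEV-1" can be redirected to a different external volume, or the whole entry handed to another device, purely by editing this table.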
  • when focusing on the connection source storage device, the connection destination storage device is an external storage device positioned outside the connection source storage device, and the separate logical volume of the connection destination storage device is an external volume positioned outside the connection source storage device. Therefore, in the following explanation, for ease of understanding the present invention, the connection destination storage device may be referred to as an external storage device, and the separate logical volume may be referred to as an external volume, respectively.
  • the direction unit designates to which logical volume of which connection source storage device the external volume should be connected. Based on this designation, the connection destination of the external volume is switched among the respective connection source storage devices. In other words, while an external volume is connected to a logical volume of one connection source storage device via an intermediate volume, if the direction unit designates a switch to the other connection source storage device, the external volume is reconnected to a logical volume of the other connection source storage device via an intermediate volume.
  • connection source storage devices may exclusively use one or a plurality of external volumes. Accordingly, for example, when there are numerous access requests to a specific external volume, such a high-load external volume is transferred to a separate connection source storage device in order to disperse the load, whereby the processing performance of the overall storage system can be improved.
  • connection destination of the separate logical volume is switchable among each of the connection source storage devices without stopping the access from the host device to the logical volume.
  • the access destination of the host device is switched among each of the connection source storage devices according to the switching of the connection destination of the separate logical volume.
  • the access destination of the host device will also be switched from one connection source storage device to the other connection source storage device.
  • connection source storage device that becomes the switching source among each of the connection source storage devices rejects the processing of access from the host device to the separate logical volume, and destages unwritten data relating to the separate logical volume.
  • connection source storage device that becomes the switching source issues a destage completion report to the connection source storage device that becomes the switching destination; and upon receiving the destage completion report, the connection source storage device that becomes the switching destination performs the processing of access from the host device to the separate logical volume.
  • the dirty data before transfer (before switching) is written in a physical memory apparatus configuring the external volume of the transfer target to maintain the consistency of data.
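The switchover sequence above (reject host access, destage unwritten data, report completion, then begin serving at the destination) can be sketched as follows; this is an editorial illustration with invented names, not the patent's implementation:

```python
class SourceDevice:
    """Switching source: holds dirty (unwritten) cache data for an external volume."""

    def __init__(self):
        self.dirty_cache = {"EXT-3A": [b"blk0", b"blk1"]}  # unwritten blocks
        self.serving = {"EXT-3A"}                          # volumes it serves

    def switch_out(self, volume, destination):
        self.serving.discard(volume)               # reject further host access
        blocks = self.dirty_cache.pop(volume, [])  # destage unwritten data
        # issue the destage completion report to the switching destination
        destination.on_destage_complete(volume, destaged=len(blocks))


class DestinationDevice:
    """Switching destination: starts serving only after the completion report."""

    def __init__(self):
        self.serving = set()

    def on_destage_complete(self, volume, destaged):
        # safe to process host access now that no stale dirty data remains
        self.serving.add(volume)


src, dst = SourceDevice(), DestinationDevice()
src.switch_out("EXT-3A", dst)
```

Ordering is the whole design point: the destination never serves the volume before the source's dirty data has reached the external volume, which preserves data consistency across the switch.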
  • connection source storage device that becomes the switching source and the connection source storage device that becomes the switching destination among each of the connection source storage devices are respectively selected based on the monitoring result of the monitoring unit.
  • as the load status, for instance, input/output per second (IOPS), CPU usage rate, cache memory usage rate, data traffic and so on may be considered.
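As a non-authoritative illustration of how a monitoring unit might use such metrics to select the switching source and destination (device names, and the use of IOPS as the sole metric, are assumptions of this sketch):

```python
def select_switch_pair(loads):
    """Pick (switching source, switching destination) from a load report.

    loads: dict mapping connection source storage device name to a scalar
    load metric such as IOPS.
    """
    source = max(loads, key=loads.get)       # overloaded device gives up a volume
    destination = min(loads, key=loads.get)  # lightly loaded device takes it over
    return source, destination


loads = {"storage-1": 9500, "storage-2": 1200}
print(select_switch_pair(loads))  # ('storage-1', 'storage-2')
```

A real monitoring unit would combine several metrics (CPU usage rate, cache usage, data traffic) and apply thresholds; a single scalar is used here only to show the selection step.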
  • a management terminal to be connected to each of the connection source storage devices is further provided, wherein the direction unit and the monitoring unit are respectively provided to the management terminal.
  • the storage system has a plurality of connection source storage devices to be used by at least one or more host devices, and at least one or more connection destination storage devices to be connected to each of the connection source storage devices, wherein the host device and each of the connection source storage devices are respectively connected via a first communication network, and each of the connection source storage devices and the connection destination storage device are connected via a second communication network separated from the first communication network.
  • connection destination storage device has a separate logical volume to be logically connected to a logical volume of each of the connection source storage devices.
  • each of the connection source storage devices has a control unit for creating the logical volume and connecting the logical volume and the separate logical volume via an intermediate volume based on management information; and a memory used by the control unit and for storing the management information.
  • the management terminal to be connected to each of the connection source storage devices has a monitoring unit for respectively monitoring the load status of each of the connection source storage devices, and a direction unit for respectively selecting the connection source storage device that becomes the switching source and the connection source storage device that becomes the switching destination among each of the connection source storage devices based on the monitoring result of the monitoring unit.
  • the management terminal switches the connection destination of the separate logical volume from the connection source storage device selected as the switching source to the connection source storage device selected as the switching destination based on the designation from the direction unit;
  • the entirety of the second management information is stored in the connection source storage device selected as the switching source, and only the second management information relating to the separate logical volume in which the connection destination is switched is transferred from the connection source storage device selected as the switching source to the connection source storage device selected as the switching destination.
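The partial transfer described above (the source keeps the full second management information; only entries for the switched volumes are copied) might be sketched like this; the table structure and field names are invented for illustration:

```python
def transfer_attributes(full_table, volumes_to_switch):
    """Return only the attribute entries for the external volumes whose
    connection destination is being switched; the switching source retains
    the complete table unchanged."""
    return {vol: full_table[vol] for vol in volumes_to_switch if vol in full_table}


# hypothetical second management information at the switching source:
# copy status and difference bitmap per external volume
source_table = {
    "EXT-3A": {"copy_status": "pair", "diff_bitmap": "0b1010"},
    "EXT-3B": {"copy_status": "simplex", "diff_bitmap": "0b0000"},
}

# only EXT-3A is switched, so only its attributes travel to the destination
dest_table = transfer_attributes(source_table, ["EXT-3A"])
```

Copying only the affected entries keeps the transfer over the inter-device network small, which matters when the attribute information (e.g. difference bitmaps) is large.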
  • the operation method of a storage system is a method of operating a storage system having a first connection source storage device and a second connection source storage device capable of respectively providing a logical volume to a host device via a first communication network, and a connection destination storage device connected to each of the first and second connection source storage device via a second communication network, comprising the following steps.
  • the plurality of separate logical volumes are respectively connected to one or a plurality of logical volumes of the first connection source storage device via an intermediate volume of the first connection source storage device based on the management information for respectively connecting to a plurality of separate logical volumes of the connection destination storage device, and the first connection source storage device is made to process the access request from the host device.
  • the second connection source storage device is connected to the host device via the first communication network, to the connection destination storage device via the second communication network, and to the first connection source storage device via a third communication network.
  • a separate logical volume is selected to be transferred to the second connection source storage device among the plurality of separate logical volumes used by the first connection source storage device.
  • attribute information relating to the separate logical volume selected as the transfer target among the management information of the first connection source storage device is transferred from the first connection source storage device to the second connection source storage device via the third communication network.
  • the whole or a part of the means, functions and steps of the present invention may sometimes be configured as a computer program to be executed with a computer system.
  • a computer program such computer program, for instance, may be fixed in various storage mediums and distributed, or transmitted via a communication network.
  • FIG. 3 is a block diagram showing the hardware configuration of the storage system
  • FIG. 4 is an explanatory diagram showing the frame format of the memory configuration of the storage system
  • FIG. 6 is an explanatory diagram showing the respective configurations of the management table and attribute table to be used by a second virtualization storage device
  • FIG. 7 is an explanatory diagram showing the configuration of the path definition information and the situation of the host path being switched based on this path definition information
  • FIG. 9 is an explanatory diagram showing the processing in the case of operating in the asynchronous transfer mode.
  • FIG. 10 is an explanatory diagram showing the processing in the case of operating in the synchronous transfer mode
  • FIG. 15 is a flowchart showing the access processing to be executed with the second virtualization storage device, which is the transfer destination;
  • FIG. 17 is a flowchart showing the processing for the second virtualization storage device, which is the transfer destination, to connect with the external volume, which is the transfer target;
  • FIG. 1 is an explanatory diagram of the configuration showing the overall schematic of an embodiment of the present invention.
  • this storage system may be configured by having a plurality of virtualization storage devices 1 , 2 , a plurality of external storage devices 3 , a plurality of host devices (hereinafter referred to as a “host”) 4 , an upper level SAN (Storage Area Network) 5 , a lower level SAN 6 , a management terminal 7 , and a device-to-device LAN (Local Area Network) 8 .
  • the first virtualization storage device 1 is used for virtualizing a volume 3 A of the external storage device 3 and providing this to the host 4 .
  • This first virtualization storage device 1 for instance, has a control unit 1 A, a first management table 1 B, a second management table 1 C, a logical volume 1 D, and an intermediate volume 1 E.
  • the control unit 1 A corresponds to a “control unit”, the first management table 1 B to “first management information”, the second management table 1 C to “second management information”, the logical volume 1 D to a “logical volume”, and the intermediate volume 1 E to an “intermediate volume”.
  • the first management table 1 B is used for identifying the respective external volumes 3 A in the storage system and connecting a desired external volume 3 A to the logical volume 1 D.
  • the second management table 1 C is used for managing other attribute information such as the copy status or difference management information (difference bitmap) of the respective external volumes 3 A.
  • the second virtualization storage device 2 may be configured the same as the first virtualization storage device 1 .
  • the second virtualization storage device 2 as with the first virtualization storage device 1 , is able to connect the whole or a part of the respective external volumes 3 A to the logical volume 2 D via the intermediate volume 2 E.
  • the second virtualization storage device 2 as with the first virtualization storage device 1 , is able to provide the external volume 3 A to the host 4 as though it is one's own internal volume.
  • the size of the second management table 2 C is smaller than the size of the second management table 1 C of the first virtualization storage device 1 .
  • a user such as a system administrator is able to comprehend the load status of the respective virtualization storage devices 1 , 2 based on the monitoring result of the monitoring unit 7 A, and thereby determine the disposition of the volumes.
  • the second virtualization storage device 2 is added to the storage system (S 1 ).
  • the user or a corporate engineer selling the second virtualization storage device 2 respectively connects the second virtualization storage device 2 to the upper level SAN 5 and lower level SAN 6 (S 2 A, S 2 B). Further, the second virtualization storage device 2 is connected to the first virtualization storage device 1 via the device-to-device LAN 8 (S 3 ).
  • the data used by the host 4 in reality, is stored in a prescribed external volume 3 A. Before the transfer of the volume, the host 4 is accessing a prescribed external volume 3 A from the logical volume 1 D of the first virtualization storage device 1 via the intermediate volume 1 E. The host 4 is totally unaware that such data is stored in a prescribed external volume 3 A.
  • the host 10 may be configured by having an HBA (Host Bus Adapter) 11 , a volume management unit 12 , and an application program 13 (abbreviated as “application” in the diagrams).
  • the upper level network CN 1 is configured as an IP_SAN
  • in substitute for the HBA 11 , for instance, a LAN card equipped with a TCP/IP offload engine may be used.
  • the volume management unit 12 manages the path information and the like to the volume to be accessed.
  • the first virtualization storage device 100 A may be configured by having a host connection interface (abbreviated as “I/F” in the drawings) 111 T, a controller 101 A, and an external storage connection interface 111 E.
  • the first virtualization storage device 100 A has a logical volume 164 as described later; the hierarchical memory configuration will be described later together with FIG. 4 .
  • the host connection interface 111 T is used for connecting to the respective hosts 10 via the upper level communication network CN 1 .
  • the external storage connection interface 111 E is used for connecting to the respective external storage devices 200 via the lower level communication network CN 2 .
  • the second virtualization storage device 100 B may be configured by having a host connection interface 111 T, a controller 101 B, and an external storage connection interface 111 E. And, a management table T 1 B and attribute table T 2 B are stored in the control memory 140 used by the controller 101 B.
  • Each CHA 110 is assigned a network address (e.g., IP address or WWN) for identifying the respective CHAs 110 , and each CHA 110 may also individually function as a NAS (Network Attached Storage).
  • each CHA 110 receives and processes the request from each host 10 individually.
  • a prescribed CHA 110 is provided with an interface (target port) 111 T for communicating with the host 10
  • the other CHAs 110 are provided with an interface (externally connected port) 111 E for communicating with the external storage device 200 .
  • the cache memory 130 stores the data received from the host 10 or external storage device 200 . Further, the cache memory 130 stores data read from the disk drive 161 . As described later, the memory space of the cache memory 130 is used to create a virtual, intermediate memory apparatus (V-VOL).
  • one or a plurality of disk drives 161 may be used as the cache disk.
  • the cache memory 130 and control memory 140 may be configured to be separate memories, or a part of the memory area of the same memory may be used as the cache area, and the other memory area may be used as the control area.
  • connection control unit 150 mutually connects the respective CHAs 110 , respective DKAs 120 , cache memory 130 and control memory 140 .
  • the connection control unit 150 can be configured as a crossbar switch or the like.
  • the second virtualization storage device 100 B can also be configured the same as the first virtualization storage device 100 A, the explanation thereof is omitted. Nevertheless, the respective virtualization storage devices 100 A, 100 B do not have to be configured the same.
  • the external storage device 200 may be configured approximately the same as the virtualization storage devices 100 A, 100 B, or may be configured more simply than the respective virtualization storage devices 100 A, 100 B.
  • the upper level network CN 1 connecting the host 10 and respective virtualization storage devices 100 A, 100 B and the lower level network CN 2 mutually connecting the respective storage devices 100 A, 100 B, 200 are respectively configured as a separate communication network. Therefore, large quantities of data can be transferred with the lower level network CN 2 without directly influencing the upper level network CN 1 .
  • FIG. 4 is an explanatory diagram showing the memory configuration of the storage system. Foremost, the configuration of the virtualization storage devices 100 A, 100 B is explained taking the first virtualization storage device 100 A as an example.
  • the memory configuration of the first virtualization storage device 100 A can be broadly classified into a physical memory hierarchy and a logical memory hierarchy.
  • the physical memory hierarchy is configured from a PDEV (Physical Device) 161 , which is a physical disk.
  • PDEV corresponds to the foregoing disk drive 161 .
  • the logical memory hierarchy may be configured from a plurality of (e.g., two types of) hierarchies.
  • One logical hierarchy may be configured from a VDEV (Virtual Device) 162 , and a virtual VDEV (hereinafter sometimes referred to as “V-VOL”) 163 which is treated like the VDEV 162 .
  • the other logical hierarchy may be configured from a LDEV (Logical Device) 164 .
  • the VDEV 162 is configured by grouping a prescribed number of PDEVs 161 , such as in a set of four (3D+1P) or a set of eight (7D+1P).
  • the memory areas provided respectively from each PDEV 161 belonging to the group are assembled to form a single RAID storage area. This RAID memory area becomes the VDEV 162 .
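As a back-of-envelope check of the grouping just described (the drive size is an invented example figure): a "3D+1P" group of four drives holds three drives' worth of data plus one drive of parity, so the usable capacity of the VDEV is the data-drive count times the drive size.

```python
def vdev_usable_capacity(drive_count, parity_count, drive_gb):
    """Usable capacity (GB) of a RAID group with dedicated parity drives,
    e.g. 3D+1P (drive_count=4, parity_count=1) or 7D+1P (drive_count=8)."""
    return (drive_count - parity_count) * drive_gb


print(vdev_usable_capacity(4, 1, 300))  # 3D+1P of 300 GB drives -> 900
print(vdev_usable_capacity(8, 1, 300))  # 7D+1P of 300 GB drives -> 2100
```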
  • the V-VOL 163 is a virtual intermediate memory apparatus that does not require a physical memory area.
  • the V-VOL 163 is not directly associated with a physical memory area, and is a virtual entity that serves as the receiver for mapping an LU (Logical Unit) of the external storage device 200 .
  • This V-VOL 163 corresponds to an intermediate volume.
  • At least one or more LDEVs 164 may be provided on the VDEV 162 or V-VOL 163 .
  • the LDEV 164 may be configured by dividing the VDEV 162 in a fixed length.
  • when the host 10 is an open host, by the LDEV 164 being mapped to the LU 165 , the host 10 will recognize the LDEV 164 as a single physical disk.
  • An open host can access a desired LDEV 164 by designating the LUN (Logical Unit Number) or logical block address.
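The SCSI-style addressing just mentioned can be illustrated with a toy resolver (the LUN-to-LDEV mapping, names, and 512-byte block size are assumptions of this sketch, not taken from the patent):

```python
BLOCK_SIZE = 512  # bytes per logical block, typical for SCSI disks

# hypothetical mapping from LUN (as seen by the open host) to an LDEV
lun_to_ldev = {0: "LDEV-164"}


def resolve_access(lun, lba):
    """Map an open host's (LUN, logical block address) pair to the backing
    LDEV and a byte offset within it."""
    ldev = lun_to_ldev[lun]
    byte_offset = lba * BLOCK_SIZE
    return ldev, byte_offset


print(resolve_access(0, 2048))  # ('LDEV-164', 1048576)
```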
  • a mainframe host will directly recognize the LDEV 164 .
  • the LU 165 is a device that can be recognized as a logical unit of SCSI.
  • Each LU 165 is connected to the host 10 via the target port 111 T.
  • At least one or more LDEVs 164 may be respectively associated with each LU 165 .
  • By associating a plurality of LDEVs 164 with a single LU 165 , the LU size can be virtually expanded.
  • a CMD (Command Device) 166 is a dedicated LU to be used for transferring commands and statuses between the I/O control program operating on the host 10 and the storage device 100 .
  • a command from the host 10 is written in the CMD 166 .
  • the first virtualization storage device 100 executes the processing according to the command written in the CMD 166 , and writes the execution result thereof as the status in the CMD 166 .
  • the host device 10 reads and confirms the status written in the CMD 166 , and writes the processing contents to be executed subsequently in the CMD 166 .
  • the host device 10 is able to give various designations to the first virtualization storage device 100 A via the CMD 166 .
  • the command received from the host device 10 may also be processed directly by the first virtualization storage device 100 A without being stored in the CMD 166 .
  • the CMD may be created as a virtual device without defining the actual device (LU) and configured to receive and process the command from the host device 10 .
  • the CHA 110 writes the command received from the host device 10 in the control memory 140 .
  • the CHA 110 or DKA 120 processes this command stored in the control memory 140 .
  • the processing results are written in the control memory 140 , and transmitted from the CHA 110 to the host device 10 .
  • An external storage device 200 is connected to an initiator port (External Port) 111 E for external connection of the first virtualization storage device 100 A via the lower level network CN 2 .
  • the external storage device 200 has a plurality of PDEVs 220 , a VDEV 230 set on a memory area provided by the PDEV 220 , and one or more LDEVs 240 that can be set on the VDEV 230 . And, each LDEV 240 is respectively associated with the LU 250 .
  • the PDEV 220 corresponds to the disk drive 220 of FIG. 3 .
  • the LDEV 240 corresponds to a “separate logical volume”, and corresponds to the external volume 3 A of FIG. 1 .
  • the LU 250 (i.e., LDEV 240 ) of the external storage device 200 is mapped to the V-VOL 163 .
  • the “LDEV 1 ”, “LDEV 2 ” of the external storage device 200 are respectively mapped to the “V-VOL 1 ”, “V-VOL 2 ” of the first virtualization storage device 100 A via the “LU 1 ”, “LU 2 ” of the external storage device 200 .
  • “V-VOL 1 ”, “V-VOL 2 ” are respectively mapped to the “LDEV 3 ”, “LDEV 4 ”, and the host device 10 is thereby able to use these volumes via the “LU 3 ”, “LU 4 ”.
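The layered mapping just described (PDEVs assembled into a VDEV, an external LU received by a V-VOL, and an LDEV provided on either and exposed through an LU) can be sketched as follows. This is a minimal illustration only; all class and attribute names are invented and are not part of the embodiment.

```python
# Hypothetical sketch of the memory hierarchy: an LDEV is backed either by an
# internal VDEV (a RAID group of PDEVs) or by a V-VOL that maps an external LU.

class VDEV:
    """Internal RAID area assembled from physical drives (PDEVs)."""
    def __init__(self, pdevs):
        self.pdevs = pdevs                 # e.g. four drives for 3D+1P

class VVOL:
    """Virtual intermediate device; receiver for an external LU mapping."""
    def __init__(self, external_lu):
        self.external_lu = external_lu     # e.g. "external:LU1"

class LDEV:
    def __init__(self, name, backing):
        self.name = name
        self.backing = backing             # a VDEV or a VVOL

    def data_location(self):
        # Resolve where the data actually resides.
        if isinstance(self.backing, VVOL):
            return self.backing.external_lu          # external volume holds the data
        return "internal RAID on " + ",".join(self.backing.pdevs)

# "LDEV 3" mapped to "LU 1" of the external storage device via "V-VOL 1":
ldev3 = LDEV("LDEV3", VVOL("external:LU1"))
print(ldev3.data_location())   # -> external:LU1
```

The point of the V-VOL is visible here: the host-facing LDEV does not care whether its backing is a local RAID area or an external volume.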
  • the VDEV 162 , V-VOL 163 may adopt the RAID configuration.
  • a single disk drive 161 may be assigned to a plurality of VDEVs 162 , V-VOLs 163 (slicing), and a single VDEV 162 , V-VOL 163 may be formed from a plurality of disk drives 161 (striping).
  • Since the second virtualization storage device 100 B may have the same hierarchical memory configuration as the first virtualization storage device 100 A , the explanation thereof is omitted.
  • FIG. 5 is an explanatory diagram showing the schematic configuration of the management table T 1 A and attribute table T 2 A used by the first virtualization storage device 100 A.
  • Each of these tables T 1 A, T 2 A may be stored in the control memory 140 .
  • the management table T 1 A is used for uniformly managing the respective external volumes 240 dispersed in the storage system.
  • the management table T 1 A may be configured by respectively associating a network address (WWN: World Wide Name) for connecting to the respective external volumes 240 , a number (LUN: Logical Unit Number) of the respective external volumes 240 , the volume size of the respective external volumes 240 , an external volume number, owner right information and a transfer status flag.
  • an external volume number is identifying information for uniquely specifying the respective external volumes 240 in the storage system.
  • Owner right information is information for specifying the virtualization storage devices having the authority to use such external volume. When “0” is set in the owner right information, it shows that such external volume 240 is unused. When “1” is set in the owner right information, it shows that one's own device has the usage authorization to use such external volume 240 . Further, when “−1” is set in the owner right information, it shows that the other virtualization storage devices have the usage authorization to use such external volume 240 .
  • the first virtualization storage device 100 A has the usage authorization thereof.
  • the second virtualization storage device 100 B has the usage authorization thereof.
  • When the owner right information is set to “1” in one management table regarding a certain external volume 240 , the owner right information of such external volume is set to “−1” in the other management table. Thereby, the affiliation of such external volume 240 can be specified.
  • the case number assigned to the respective virtualization storage devices may also be set.
  • identifying information capable of uniquely specifying the respective virtualization storage devices in the storage system may be used as the owner right information.
  • the transfer status flag is information showing that the external volume 240 is being transferred from one virtualization storage device to the other virtualization storage device.
  • When “1” is set in the transfer status flag, it shows that the owner right of such external volume 240 is being changed.
  • When “0” is set in the transfer status flag, it shows that such external volume 240 is in a normal state, and the owner right is not being changed.
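The management-table semantics above (owner right 0/1/−1 plus a transfer status flag) suggest a simple usage check, sketched below. Field and function names are assumptions for illustration, not the embodiment's actual data layout.

```python
# One row of the management table T1 (field names invented for this sketch).
UNUSED, OWNED_BY_SELF, OWNED_BY_OTHER = 0, 1, -1

class VolumeEntry:
    def __init__(self, wwn, lun, size_gb, ext_vol_no):
        self.wwn = wwn                  # network address of the external volume
        self.lun = lun                  # logical unit number
        self.size_gb = size_gb          # volume size
        self.ext_vol_no = ext_vol_no    # unique external volume number
        self.owner_right = UNUSED       # 0: unused, 1: own device, -1: other device
        self.transferring = 0           # 1 while the owner right is being changed

    def may_process_io(self):
        # A device may use the external volume only while it holds the owner
        # right and the volume is not in the middle of a transfer.
        return self.owner_right == OWNED_BY_SELF and self.transferring == 0

entry = VolumeEntry(wwn="WWN-0", lun=1, size_gb=100, ext_vol_no=0)
entry.owner_right = OWNED_BY_SELF
print(entry.may_process_io())   # -> True
```

Setting the transfer status flag (`transferring = 1`) makes `may_process_io()` false, matching the behavior described later where access is rejected during a transfer.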
  • Path definition information is information showing via which port of which CHA 110 the host 10 is to access the logical volume 164 connected to such external volume 240 .
  • a plurality of paths may be set in the path definition information.
  • One path is the normally used primary path, and the other path is an alternate path to be used when there is failure in the primary path.
  • the table size of the attribute table T 2 A will be enormous. Accordingly, when the entirety of this attribute table T 2 A is to be transferred to the second virtualization storage device 100 B , the control memory 140 of the second virtualization storage device 100 B will be compressed. Thus, in the present embodiment, among the information stored in the attribute table T 2 A , only the information relating to the volume to be transferred to the second virtualization storage device 100 B is transferred to the second virtualization storage device 100 B . In other words, attribute information is transferred only to the necessary extent. Thereby, the data volume to be transferred can be reduced, the time required for creating the attribute table can be shortened, and the compression of the memory resource (control memory 140 ) of the second virtualization storage device 100 B , which is the transfer destination, can be prevented.
  • information such as the device type (disk device or tape device, etc.), vendor name, identification number of the respective storage devices and so on may also be managed.
  • Such information may be managed with either the management table T 1 A or attribute table T 2 A.
  • the attribute table T 2 B is also configured by associating an LU number, path definition information, replication configuration information, replication status information and replication bitmap information. Nevertheless, as described above, in order to effectively use the memory resource of the second virtualization storage device 100 B , it should be noted that only the attribute information of the volume under the control of the second virtualization storage device 100 B is registered in the attribute table T 2 B .
  • FIG. 7 is an explanatory diagram showing the schematic configuration of the path setting information T 3 to be used by the volume management unit 12 of the host 10 .
  • This path setting information T 3 may be stored in the memory of the host 10 or a local disk.
  • the path setting information T 3 includes information relating to the primary path to be used in normal times, and information relating to the alternate path to be used in abnormal times.
  • Each path, for instance, is configured by including information for specifying the HBA 11 to be used, the port number of the access destination, and the LU number for identifying the volume of the access target.
  • the alternate path described first is a normal alternate path
  • the subsequently described alternate path is a path unique to the present embodiment.
  • the second alternate path is a path set upon transferring the volume from the first virtualization storage device 100 A to the second virtualization storage device 100 B.
  • FIG. 7 shows a frame format of the situation of switching from the primary path to the alternate path.
  • the volume 420 of “# 0 ” is transferred from the first virtualization storage device 100 A to the second virtualization storage device 100 B.
  • Before the transfer, by accessing Port # 0 from HBA # 0 as shown with the thick line in FIG. 7 , the host 10 is able to read and write data from and into the logical volume of the first virtualization storage device 100 A .
  • the external volume 240 is accessed from the Port # 1 based on the access from the host 10 .
  • the second alternate path is a path to the second virtualization storage device 100 B, which is the volume transfer destination.
  • the second virtualization storage device 100 B processes this access request, and returns the processing result to the host 10 .
  • the processible state of the access request means that even when the access request from the host 10 is processed, inconsistency in the data stored in the volume will not occur. This will be described in detail later.
  • When the host 10 is unsuccessful in accessing via the primary path, it switches to the first alternate path, and, when it is unsuccessful in accessing via the first alternate path, it switches to the second alternate path. Accordingly, until the access request of the host 10 is accepted, some time (path switching time) will be required. Nevertheless, this path switching time is not wasteful time. This is because, as described later, destage processing relating to the transferred volume can be performed during such path switching time. In the present embodiment, merely by adding a new path to the path setting information T 3 stored in the host 10 , the access destination of the host 10 can be switched.
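The host-side failover order just described (primary path, then first alternate, then the second alternate added for the transfer) can be sketched as below. The tuples and the `try_path` callback are stand-ins for the actual path setting information T 3 and the real I/O attempt.

```python
# Sketch of host-side path failover: walk the path list in priority order and
# use the first path on which the storage device accepts the request.

def issue_io(paths, try_path):
    """paths: list of (hba, port, lun) tuples in priority order.
    try_path: returns True if the device accepted the request on that path."""
    for path in paths:
        if try_path(path):
            return path
    raise IOError("all paths failed")

path_setting = [
    ("HBA0", "Port0", "LU3"),   # primary (first storage device)
    ("HBA1", "Port1", "LU3"),   # first alternate (first storage device)
    ("HBA0", "Port2", "LU3"),   # second alternate (transfer destination)
]

# During a transfer the first storage device rejects the request on both of
# its paths, so the host ends up on the second alternate path:
used = issue_io(path_setting, lambda p: p[1] == "Port2")
print(used)   # -> ('HBA0', 'Port2', 'LU3')
```

This also shows why the switching time is bounded by the number of configured paths: each rejected path costs one failed attempt before the next is tried.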
  • FIG. 8 is a flowchart showing the outline of the processing for searching the external volume existing in the storage system and registering this in the management table T 1 A.
  • FIG. 8 shows an example of a case where the first virtualization storage device 100 A executes the processing.
  • the first virtualization storage device 100 A issues a command (“Test Unit Ready”) toward the respective external storage devices 200 for confirming the existence thereof (S 11 ).
  • Each external storage device 200 operating normally will return a Ready reply having a Good status as the response to such command (S 12 ).
  • the first virtualization storage device 100 A issues a “Read Capacity” command to each external storage device 200 (S 15 ).
  • Each external storage device 200 transmits the size of the external volume 240 to the first virtualization storage device 100 A (S 16 ).
  • the first virtualization storage device 100 A transmits a “Report LUN” command to each external storage device 200 (S 17 ).
  • Each external storage device 200 transmits the LUN quantity and LUN number to the first virtualization storage device 100 A (S 18 ).
  • the first virtualization storage device 100 A registers the information acquired from each external storage device 200 in the management table T 1 A and attribute table T 2 A, respectively. As described above, the first virtualization storage device 100 A is able to respectively create the management table T 1 A and attribute table T 2 A by issuing a plurality of inquiry commands.
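The discovery sequence of FIG. 8 (existence check, capacity inquiry, LUN report, then registration in the management table) can be sketched as follows. The SCSI exchanges are replaced by a stub object, and all method names here are invented; the real device issues “Test Unit Ready”, “Read Capacity” and “Report LUN” commands as described above.

```python
# Sketch of the external-volume discovery loop (S11-S20), with the SCSI
# commands abstracted into stub methods.

def discover(external_devices):
    table = []
    for dev in external_devices:
        if dev.test_unit_ready() != "GOOD":      # S11/S12: existence check
            continue                             # skip devices that are not ready
        size = dev.read_capacity()               # S15/S16: volume size
        for lun in dev.report_luns():            # S17/S18: LUN quantity and numbers
            table.append({"wwn": dev.wwn, "lun": lun, "size": size})
    return table                                 # registered as the management table

class StubDevice:
    def __init__(self, wwn, ready, size, luns):
        self.wwn, self._ready, self._size, self._luns = wwn, ready, size, luns
    def test_unit_ready(self): return "GOOD" if self._ready else "NOT READY"
    def read_capacity(self): return self._size
    def report_luns(self): return self._luns

devs = [StubDevice("WWN-A", True, 100, [0, 1]),
        StubDevice("WWN-B", False, 50, [0])]
print(discover(devs))   # entries for WWN-A only; WWN-B did not return Ready
```

Re-running this loop after an RSCN or similar notification is one plausible way the table could be kept current when devices are added or removed.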
  • the configuration of the storage system may change by one of the external storage devices 200 being removed, or a new external storage device 200 being added.
  • the first virtualization storage device 100 A is able to detect such change in configuration based on command and notifications such as RSCN (Registered State Change Notification), LIP (Loop Initialization Primitive), SCR (State Change Registration) or SCN (State Change Notification).
  • the method of the virtualization storage devices 100 A, 100 B using the external volume 240 to process the access request from the host 10 is explained.
  • the first virtualization storage device 100 A processes the access request
  • the second virtualization storage device 100 B may also perform the same processing.
  • the processing method of a write command is explained.
  • As the method for processing the write command, two methods may be considered; namely, the synchronous transfer mode and the asynchronous transfer mode.
  • In the synchronous transfer mode, when the first virtualization storage device 100 A receives a write command from the host 10 , it stores the write data received from the host 10 in the cache memory 130 , and thereafter transfers the write data to the external storage device 200 via the communication network CN 2 .
  • When the external storage device 200 receives the write data and stores this in the cache memory, it transmits a reply signal to the first virtualization storage device 100 A .
  • When the first virtualization storage device 100 A receives the reply signal from the external storage device 200 , it transmits a write completion report to the host 10 .
  • In the asynchronous transfer mode, when the first virtualization storage device 100 A receives a write command from the host 10 , it stores the write data in the cache memory 130 , and thereafter immediately issues a write completion report to the host 10 . After issuing the write completion report to the host 10 , the first virtualization storage device 100 A transfers the write data to the external storage device 200 .
  • the write completion report to the host 10 and the data transfer to the external storage device 200 are conducted asynchronously. Accordingly, in the case of the asynchronous transfer mode, the write completion report can be transmitted to the host 10 quickly, irrespective of the distance between the first virtualization storage device 100 A and the external storage device 200 .
  • the asynchronous transfer mode is suitable when the distance between the first virtualization storage device 100 A and external storage device 200 is relatively long.
  • FIG. 9 is an explanatory diagram showing the case of the asynchronous transfer mode.
  • the virtualization storage devices 100 A, 100 B are not differentiated, and will be referred to as the “virtualization storage device 100 ”.
  • the management tables T 1 A, T 1 B are not differentiated, and will be referred to as the “management table T 1 ”.
  • the host 10 issues a write command to a prescribed LU 165 of the virtualization storage devices 100 (S 31 ).
  • the LU 165 is associated with the LU 250 of the external storage device 200 via the V-VOL 163 .
  • the LU 165 of the virtualization storage devices 100 is an access target from the host 10 , but the external LU 250 is actually storing the data. Therefore, for instance, the LU 165 may be referred to as an “access destination logical memory apparatus” and the LU 250 may be referred to as a “data storage destination logical memory apparatus”, respectively.
  • When the virtualization storage device 100 receives a write command from the host 10 , it specifies the LU targeted by such write command, refers to the management table T 1 and determines whether this LU is associated with an external volume. When this is a write command to an LU associated with an external volume, the virtualization storage device 100 transmits a write command to the external storage device 200 having such external volume (S 32 ).
  • the host 10 transmits the write data with the LU 165 as the write target to the virtualization storage devices 100 (S 33 ).
  • the virtualization storage device 100 temporarily stores the write data received from the host 10 in the cache memory 130 (S 34 ). After the virtualization storage device 100 stores the write data in the cache memory 130 , it reports the completion of writing to the host 10 (S 35 ).
  • the virtualization storage device 100 transmits the write data stored in the cache memory 130 to the external storage device 200 (S 36 ).
  • the external storage device 200 stores the write data received from the virtualization storage device 100 in the cache memory.
  • the external storage device 200 reports the completion of writing to the virtualization storage device 100 (S 37 ).
  • the external storage device 200 looks out for a period with few I/O, and writes the write data stored in the cache memory into the memory apparatus 220 (destage processing). In the asynchronous transfer mode, after the write data is received from the host 10 , the write completion can be sent to the host 10 in a short reply time Δ1 .
  • FIG. 10 shows a case of the synchronous transfer mode.
  • Upon receiving the write command issued from the host 10 (S 41 ), the virtualization storage device 100 specifies the external volume (LU 250 ) associated with the access destination volume (LU 165 ) of the write command, and issues a write command to such external volume (S 42 ).
  • When the virtualization storage device 100 receives the write data from the host 10 (S 43 ), it stores this write data in the cache memory 130 (S 44 ). The virtualization storage device 100 transfers the write data stored in the cache memory 130 to the external storage device 200 such that it is written in the external volume (S 45 ). After storing the write data in the cache memory, the external storage device 200 reports the completion of writing to the virtualization storage device 100 (S 46 ). When the virtualization storage device 100 confirms the completion of writing in the external storage device 200 , it reports the completion of writing to the host 10 (S 47 ). In the synchronous transfer mode, since the report of the write completion to the host 10 is made upon waiting for the processing in the external storage device 200 , the reply time Δ2 will become long. The reply time Δ2 of the synchronous transfer mode is longer than the reply time Δ1 of the asynchronous transfer mode (Δ2>Δ1).
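The contrast between the two write modes can be reduced to when the completion report is issued relative to the external transfer. The sketch below is a deliberately simplified model (lists standing in for the cache and the external volume; function names invented), not the device's actual control logic.

```python
# Synchronous mode: the completion report waits for the external device's
# reply. Asynchronous mode: the report is issued as soon as the data is in
# cache, and the external write happens later (destage of dirty data).

def write_synchronous(cache, external, data):
    cache.append(data)            # S 44: store write data in cache memory
    external.append(data)         # S 45/S 46: transfer and wait for the reply
    return "complete"             # S 47: report only after the external reply

def write_asynchronous(cache, pending, data):
    cache.append(data)            # S 34: store write data in cache memory
    pending.append(data)          # dirty data, destaged later (S 36)
    return "complete"             # S 35: report immediately

cache, external, pending = [], [], []
write_asynchronous(cache, pending, "blk0")
# In the asynchronous mode the external write has not happened yet:
print(external)   # -> []
```

The `pending` list is exactly the dirty data that must be destaged before a volume can be handed over to another device, which is why the transfer procedure described later blocks on destage completion.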
  • the respective virtualization storage devices 100 A, 100 B are able to incorporate and use the external volume 240 of the external storage device 200 as though it is a virtual internal volume.
  • the external volume 240 may also be transferred from the second virtualization storage device 100 B to the first virtualization storage device 100 A.
  • FIG. 11 is a flowchart showing the processing for designating the transfer of the volume to the respective virtualization storage devices 100 A, 100 B.
  • the user discovers whether there is a high-load CPU based on the performance information displayed on the screen of the management terminal 20 (S 53 ).
  • This CPU represents the CPU built in the CHA 110 .
  • Next, the user confirms whether every CPU of the other CHAs 110 also has a load greater than a prescribed value (S 54 ).
  • the user determines the transfer of the external volume 240 under the control of such CHA 110 (S 55 ). Subsequently, the user sets a path of the transfer destination (S 56 ). In other words, the user defines the path information regarding which port the host 10 will use for the access in the second virtualization storage device 100 B, which is the transfer destination (S 56 ). The defined path information is added to the host 10 . Finally, the user designates the transfer of such external volume 240 to the respective virtualization storage devices 100 A, 100 B (S 57 ).
  • the user specifies the external volume that is being the bottleneck in the first virtualization storage device 100 A, which is the transfer source (switching source) (S 53 to S 55 ) based on the monitoring result of the monitoring unit 21 (S 51 , S 52 ), and designates the start of transfer by defining the path of the transfer destination (S 56 , S 57 ).
  • FIG. 12 is an explanatory diagram showing an example of a screen showing the monitoring result of the monitoring unit 21 .
  • the monitoring unit 21 is able to respectively acquire performance information from the respective virtualization storage devices 100 A, 100 B, and display such performance information upon performing statistical processing or creating a graphical chart thereof.
  • With the selection unit G 11 , it is possible to select which load status regarding which resource among the various resources in the storage system is to be displayed.
  • As the resource, for instance, “network”, “storage”, “switch” and so on may be considered.
  • the user may further select one of the virtualization storage devices 100 A, 100 B. Further, when the user selects one of the virtualization storage devices 100 A, 100 B, the user may make a more detailed selection. As such detailed selection, “port” and “LU” may be considered. As described above, the user is able to select in detail the desired target for confirming the load status.
  • the overall status of the selected virtualization storage device can be displayed as a list among the virtualization storage devices 100 A, 100 B.
  • For a more detailed monitoring target such as the “port” and “LU”, the load status can be displayed as a graph.
  • the user is able to relatively easily determine which part of which virtualization storage device is a bottleneck based on the performance monitoring screen as shown in FIG. 12 .
  • the user is able to determine the volume to be transferred based on such determination.
  • FIG. 13 is a flowchart showing the situation of newly adding a second virtualization storage device 100 B to the storage system in a state where the first virtualization storage device 100 A is in operation, and transferring one or a plurality of volumes from the first virtualization storage device 100 A to the second virtualization storage device 100 B.
  • the first virtualization storage device 100 A is abbreviated as the “first storage”
  • the second virtualization storage device 100 B is abbreviated as the “second storage”, respectively.
  • the user will be able to comprehend the load status of the first virtualization storage device 100 A with the methods described with reference to FIG. 11 and FIG. 12 . As a result, the user will be able to determine the additional introduction of the second virtualization storage device 100 B .
  • the user or engineer of the vendor performs physical connection procedures of the newly introduced second virtualization storage device 100 B (S 61 ).
  • the host connection interface 111 T of the second virtualization storage device 100 B is connected to the upper level network CN 1
  • the external storage connection interface 111 E of the second virtualization storage device 100 B is connected to the lower level network CN 2
  • the SVP 170 of the second virtualization storage device 100 B is connected to the network CN 3 .
  • the second virtualization storage device 100 B connects the designated external volume 240 to the V-VOL 163 via the interface 111 E (S 65 ).
  • the user selects the logical volume 164 to be accessed from the host 10 as the transfer target.
  • the external volume 240 connected to such logical volume 164 will be reconnected to a separate logical volume 164 of the transfer destination storage device ( 100 B).
  • the virtualization storage devices 100 A, 100 B connect the external volume 240 to the logical volume 164 via the V-VOL 163 , and are able to use this as though it is one's own internal memory apparatus.
  • the volume management unit 12 of the host 10 adds the path information for accessing the transferred volume to the path setting information T 3 (S 66 ).
  • path information for accessing the logical volume 164 connected to the external volume 240 via a prescribed port of the second virtualization storage device 100 B is set.
  • the first virtualization storage device 100 A starts the destage processing without processing the access request (S 74 ). Access processing in the transfer source before the completion of transfer will be described later with reference to FIG. 14 .
  • the second virtualization storage device 100 B receives a notice indicating the completion of destage processing from the first virtualization storage device 100 A (S 75 ).
  • the host 10 refers to the path setting information T 3 , switches to a different path (S 76 ), and reissues the command (S 77 ).
  • the switch shall be from the primary path passing through the first virtualization storage device 100 A to the second alternate path passing through the second virtualization storage device 100 B.
  • FIG. 14 is a flowchart showing the details of S 74 in FIG. 13 .
  • When the first virtualization storage device 100 A , which is the transfer source storage device, receives a command from the host 10 (S 81 : YES), it analyzes the access target of such command.
  • the first virtualization storage device 100 A determines whether the access target of the command is the logical volume 164 connected to the external volume 240 under its own usage authorization (S 82 ). In other words, the first virtualization storage device 100 A determines whether the command is an access request relating to the external volume 240 in which it has the owner right.
  • the first virtualization storage device 100 A starts the destage processing of dirty data regarding the external volume 240 in which the access was requested from the host 10 (S 84 ). And, when the destage processing is complete (S 85 : YES), the first virtualization storage device 100 A notifies the second virtualization storage device 100 B to such effect (S 86 ).
  • the first virtualization storage device 100 A is processing the write command in the asynchronous transfer mode. Accordingly, the first virtualization storage device 100 A reports the completion of writing to the host 10 at the time the write data received from the host 10 is stored in the cache memory 130 . The write data stored in the cache memory 130 is transferred to the external storage device 200 in a prescribed timing, and reflected in the external volume 240 .
  • the first virtualization storage device 100 A , which is the transfer source, will perform destage processing without processing the access request from the host 10 .
  • When it is a read command, the first virtualization storage device 100 A reads the data requested from the host 10 from the external volume 240 (S 92 ), and transfers this data to the host 10 (S 93 ). Incidentally, when reading data from the external volume 240 , the management table T 1 A is referred to. Further, when the data requested from the host 10 already exists in the cache memory 130 (a cache hit), the first virtualization storage device 100 A transfers the data stored in the cache memory 130 to the host 10 without accessing the external volume 240 .
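The transfer-source behavior of FIG. 14 — reject access to a volume being transferred, destage its dirty data, then notify the destination — can be sketched as below. The dict-based volume record and all function names are assumptions for illustration only.

```python
# Sketch of the transfer-source command handling (S 81-S 86): while a volume
# is being transferred, the source does not process the access; it destages
# the dirty data still in its cache and sends a destage completion notice.

def source_handle(volume, command, dirty_cache, notify_destination):
    """volume: dict with 'name' and a 'transferring' flag (structure assumed)."""
    if volume["transferring"]:
        destaged = list(dirty_cache)      # S 84: flush dirty data to the
        dirty_cache.clear()               #       external volume
        notify_destination(volume["name"])  # S 86: destage completion notice
        return ("rejected", destaged)
    return ("processed", command)         # normal access processing otherwise

notices = []
vol = {"name": "vol0", "transferring": True}
result = source_handle(vol, "write", ["blk0", "blk1"], notices.append)
print(result)   # -> ('rejected', ['blk0', 'blk1'])
```

Rejecting the request while destaging is what forces the host onto the alternate path, so by the time the destination accepts I/O, no dirty data for that volume remains at the source.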
  • FIG. 15 is a flowchart showing the details of S 78 in FIG. 13 .
  • When the second virtualization storage device 100 B , which is the transfer destination, receives a command from the host 10 (S 101 : YES), it analyzes the access target of such command.
  • the second virtualization storage device 100 B determines whether the access target of the host 10 is a logical volume 164 connected to the external volume 240 under the control of the second virtualization storage device 100 B (S 102 ). In other words, the second virtualization storage device 100 B determines whether the command is an access request relating to the external volume 240 in which it has the owner right thereof.
  • When the second virtualization storage device 100 B determines that this is an access request relating to the volume in which it has the owner right (S 102 : YES), it determines whether the destage processing performed by the first virtualization storage device 100 A regarding the external volume 240 connected to such logical volume 164 is complete (S 103 ). In other words, the second virtualization storage device 100 B determines whether a destage completion notification has been acquired from the first virtualization storage device 100 A regarding such volume.
  • When the destage processing is not complete, the second virtualization storage device 100 B rejects the command processing (S 104 ). This is in order to maintain the consistency of data regarding the transfer target volume.
  • the second virtualization storage device 100 B reads the data requested from the host 10 from the external volume 240 (or cache memory 130 ) (S 110 ), and transfers this data to the host 10 (S 111 ).
  • When the first virtualization storage device 100 A receives a transfer designation from the management terminal 20 , it changes the owner right of the external volume designated as the transfer target from “1” to “−1”, and notifies this change to the second virtualization storage device 100 B (S 122 ).
  • When the first virtualization storage device 100 A receives a notice from the second virtualization storage device 100 B , it similarly sets “1” in the transfer status flag relating to the transfer target volume and updates the management table T 1 A (S 126 ). And, the first virtualization storage device 100 A starts the destage processing of dirty data relating to the transfer target volume (S 127 ).
  • the first virtualization storage device 100 A will reject such processing (S 129 ).
  • When the access processing is rejected by the first virtualization storage device 100 A , the host 10 refers to the path setting information T 3 and switches the path (S 130 ).
  • the explanation is regarding a case of switching from the primary path passing through the first virtualization storage device 100 A to the alternate path passing through the second virtualization storage device 100 B.
  • After switching the path, the host 10 reissues the command (S 131 ).
  • This command may be a write command or a read command, and let it be assumed that a write command has been issued for the sake of convenience of explanation.
  • When the second virtualization storage device 100 B receives a write command from the host 10 (S 132 ), it receives the write data transmitted from the host 10 after the write command, and stores this in the cache memory 130 (S 132 ). After storing the write data in the cache memory 130 , the second virtualization storage device 100 B reports the completion of writing to the host 10 (S 133 ). The host 10 receives a processing completion notice from the second virtualization storage device 100 B (S 134 ).
  • the first virtualization storage device 100 A notifies the completion of the destage processing to the second virtualization storage device 100 B (S 136 ).
  • When the second virtualization storage device 100 B receives this destage completion notice (S 137 ), it resets the transfer status flag relating to the transfer target volume (S 138 ). Thereby, the transfer of the volume is completed while maintaining the consistency of the volume.
  • the second virtualization storage device 100 B performs the normal access processing (S 140 ).
  • the second virtualization storage device 100 B may reject the processing of the read command until the destage processing by the first virtualization storage device 100 A is complete.
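Taken together, the flag handling in S 126 through S 138 forms a small handshake: both devices mark the volume as in transfer, the source flushes its dirty data, and the flag is reset only after the destage-completion notice. A toy model of that handshake, with plain dicts standing in for device state (all names are hypothetical, and the notice is collapsed into a function call):

```python
def transfer_volume(source, dest):
    """Toy handshake: raise the transfer flags, destage the source's dirty
    data, then reset the flags once destaging is complete (S126-S138)."""
    source["flag"] = dest["flag"] = 1            # volume marked as in transfer (S126)
    for block, data in source.pop("dirty").items():
        source["disk"][block] = data             # destage dirty data (S127)
    dest["flag"] = source["flag"] = 0            # completion notice received: reset (S138)

def read(dev, block):
    if dev["flag"]:
        raise RuntimeError("busy: transfer in progress")   # reads rejected mid-transfer
    return dev["disk"].get(block)

src = {"flag": 0, "dirty": {3: b"new"}, "disk": {3: b"old"}}
dst = {"flag": 0, "disk": src["disk"]}   # both devices front the same external volume 240
transfer_volume(src, dst)
assert read(dst, 3) == b"new"            # destination sees consistent, destaged data
```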
  • In the embodiment above, the user decides whether to introduce the second virtualization storage device 100 B based on the load status of the first virtualization storage device 100 A, and adds the second virtualization storage device 100 B to the storage system.
  • A plurality of virtualization storage devices 100 A, 100 B may thus be used to manage each of the external volumes 240. Accordingly, the load in the storage system can be dispersed and the processing performance of the overall storage system can be improved.
  • The external volume 240 can be transferred between the respective virtualization storage devices 100 A, 100 B without stopping access from the host 10. Therefore, the volume can be transferred online without having to shut down the host 10, which improves usability.
  • The virtualization storage device 100 A, which is the transfer source, is configured such that it can reject access requests from the host 10 until the destage processing relating to the transfer target external volume 240 is complete. Therefore, the volume can be transferred while maintaining the consistency of the data.
  • the second embodiment of the present invention is now explained with reference to FIG. 19 .
  • the present embodiment corresponds to a modified example of the foregoing first embodiment.
  • the storage system autonomously disperses the load between the respective virtualization storage devices 100 A, 100 B.
  • FIG. 19 is a flowchart of the transfer designation processing according to the present embodiment.
  • This transfer designation processing can be executed, for example, by the management terminal 20.
  • the management terminal 20 acquires the performance information from the respective virtualization storage devices 100 A, 100 B (S 161 ).
  • Based on each type of performance information, the management terminal 20 calculates the respective loads LS 1 , LS 2 of the virtualization storage devices 100 A, 100 B (S 162 ). These loads may be calculated, for example, based on the input/output per second (IOPS), the CPU usage rate, the cache memory usage rate, and the like.
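One plausible way to combine such metrics into a single load figure is a weighted, normalized sum. The weights and the IOPS ceiling below are illustrative assumptions, not values given in the patent:

```python
def load_score(iops, cpu_pct, cache_pct, weights=(0.5, 0.3, 0.2), max_iops=50_000):
    """Illustrative load metric: weighted sum of normalized IOPS, CPU usage
    rate, and cache memory usage rate, yielding a value in [0, 1]."""
    parts = (min(iops / max_iops, 1.0), cpu_pct / 100.0, cache_pct / 100.0)
    return sum(w * p for w, p in zip(weights, parts))

LS1 = load_score(iops=40_000, cpu_pct=90, cache_pct=80)   # heavily loaded first device
LS2 = load_score(iops=5_000, cpu_pct=20, cache_pct=30)    # newly added second device
assert LS1 > LS2   # LS1 > LS2: a volume should move from device 100A to 100B
```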
  • The management terminal 20 compares the load LS 1 of the first virtualization storage device 100 A with the load LS 2 of the second virtualization storage device 100 B (S 163 ). When the first load LS 1 is greater than the second load LS 2 (LS 1 >LS 2 ), the management terminal 20 determines the logical volume (external volume) to be transferred from the first virtualization storage device 100 A to the second virtualization storage device 100 B (S 164 ). The management terminal 20 may, for instance, select the volume with the highest load in the device.
  • The management terminal 20 judges whether the transfer timing has arrived (S 165 ) and, when it has (S 165 : YES), defines the path information of the transfer destination (S 166 ) and issues a transfer designation to each of the virtualization storage devices 100 A, 100 B (S 167 ). For example, a time frame with low access frequency from the host 10 may be pre-selected as the transfer timing.
  • Meanwhile, when the load LS 2 of the second virtualization storage device 100 B is greater (LS 1 <LS 2 ), the management terminal 20 determines the volume to be transferred from the second virtualization storage device 100 B to the first virtualization storage device 100 A (S 168 ).
  • The management terminal 20 then waits for the prescribed transfer timing (S 169 : YES), defines the path of the transfer destination (S 170 ), and issues a transfer designation to each of the virtualization storage devices 100 A, 100 B (S 171 ).
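The symmetric decision logic of S 163 through S 171 can be sketched as a single function that picks the busiest volume on the more heavily loaded device. The volume names and the return format here are hypothetical conveniences:

```python
def plan_transfer(ls1, ls2, volumes_a, volumes_b):
    """Pick the busiest volume on the more loaded device (sketch of S163-S171).
    `volumes_*` map a volume name to its per-volume load."""
    if ls1 > ls2 and volumes_a:
        vol = max(volumes_a, key=volumes_a.get)   # highest-load volume on 100A (S164)
        return (vol, "100A", "100B")
    if ls2 > ls1 and volumes_b:
        vol = max(volumes_b, key=volumes_b.get)   # reverse direction (S168)
        return (vol, "100B", "100A")
    return None                                   # loads balanced: no designation

assert plan_transfer(0.8, 0.2, {"lv1": 0.6, "lv2": 0.2}, {}) == ("lv1", "100A", "100B")
assert plan_transfer(0.2, 0.8, {}, {"lv3": 0.4}) == ("lv3", "100B", "100A")
```

In the embodiment, issuing the designation would additionally be deferred until a low-access time frame arrives.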
  • the present invention is not limited to the embodiments described above. Those skilled in the art may make various additions and modifications within the scope of the present invention.
  • the configuration may also be such that all external volumes are transferred to the second virtualization storage device, and the first virtualization storage device may be entirely replaced with the second virtualization storage device.
  • The present invention is not limited thereto, and the configuration may be such that the function of the management terminal is built into one of the virtualization storage devices.
  • Although the virtualization storage devices are operated in an asynchronous transfer mode in the embodiments above, they may also be operated in a synchronous transfer mode.
  • In a synchronous transfer mode, since the memory contents of the external volume are always updated to the latest contents, such memory contents may be transferred between the respective virtualization storage devices quickly, without having to wait for the completion of the destage processing at the transfer source.
  • The logical volume 164 of the transfer source and the logical volume 164 of the transfer destination are set to be of the same size.

US11/181,877 2005-05-24 2005-07-15 Storage system and operation method of storage system Abandoned US20060271758A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US12/367,706 US8180979B2 (en) 2005-05-24 2009-02-09 Storage system and operation method of storage system
US12/830,865 US7953942B2 (en) 2005-05-24 2010-07-06 Storage system and operation method of storage system
US13/222,569 US8484425B2 (en) 2005-05-24 2011-08-31 Storage system and operation method of storage system including first and second virtualization devices
US13/912,297 US20130275690A1 (en) 2005-05-24 2013-06-07 Storage system and operation method of storage system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005-150868 2005-05-24
JP2005150868A JP5057656B2 (ja) 2005-05-24 2012-10-24 Storage system and operation method of storage system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/367,706 Continuation US8180979B2 (en) 2005-05-24 2009-02-09 Storage system and operation method of storage system

Publications (1)

Publication Number Publication Date
US20060271758A1 true US20060271758A1 (en) 2006-11-30

Family

ID=36450094

Family Applications (5)

Application Number Title Priority Date Filing Date
US11/181,877 Abandoned US20060271758A1 (en) 2005-05-24 2005-07-15 Storage system and operation method of storage system
US12/367,706 Expired - Fee Related US8180979B2 (en) 2005-05-24 2009-02-09 Storage system and operation method of storage system
US12/830,865 Expired - Fee Related US7953942B2 (en) 2005-05-24 2010-07-06 Storage system and operation method of storage system
US13/222,569 Active US8484425B2 (en) 2005-05-24 2011-08-31 Storage system and operation method of storage system including first and second virtualization devices
US13/912,297 Abandoned US20130275690A1 (en) 2005-05-24 2013-06-07 Storage system and operation method of storage system

Family Applications After (4)

Application Number Title Priority Date Filing Date
US12/367,706 Expired - Fee Related US8180979B2 (en) 2005-05-24 2009-02-09 Storage system and operation method of storage system
US12/830,865 Expired - Fee Related US7953942B2 (en) 2005-05-24 2010-07-06 Storage system and operation method of storage system
US13/222,569 Active US8484425B2 (en) 2005-05-24 2011-08-31 Storage system and operation method of storage system including first and second virtualization devices
US13/912,297 Abandoned US20130275690A1 (en) 2005-05-24 2013-06-07 Storage system and operation method of storage system

Country Status (4)

Country Link
US (5) US20060271758A1 (de)
EP (3) EP2246777B1 (de)
JP (1) JP5057656B2 (de)
CN (2) CN101271382B (de)


Families Citing this family (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8219681B1 (en) 2004-03-26 2012-07-10 Emc Corporation System and method for managing provisioning of storage resources in a network with virtualization of resources in such a network
US8032701B1 (en) 2004-03-26 2011-10-04 Emc Corporation System and method for managing provisioning of storage resources in a network with virtualization of resources in such a network
US8281022B1 (en) 2000-06-30 2012-10-02 Emc Corporation Method and apparatus for implementing high-performance, scaleable data processing and storage systems
US7770059B1 (en) 2004-03-26 2010-08-03 Emc Corporation Failure protection in an environment including virtualization of networked storage resources
US7818517B1 (en) 2004-03-26 2010-10-19 Emc Corporation Architecture for virtualization of networked storage resources
US8627005B1 (en) 2004-03-26 2014-01-07 Emc Corporation System and method for virtualization of networked storage resources
US8140665B2 (en) * 2005-08-19 2012-03-20 Opnet Technologies, Inc. Managing captured network traffic data
JP2007280089A (ja) * 2006-04-07 2007-10-25 Hitachi Ltd Migration method for capacity expansion volume
JP4963892B2 (ja) * 2006-08-02 2012-06-27 Hitachi, Ltd. Controller for a storage system capable of becoming a component of a virtual storage system
JP5057366B2 (ja) 2006-10-30 2012-10-24 Hitachi, Ltd. Information system and data transfer method of information system
JP2008217364A (ja) * 2007-03-02 2008-09-18 Nec Corp File input/output adjustment control system, file input/output adjustment control method, and file input/output adjustment control program
US7877556B2 (en) * 2007-03-30 2011-01-25 Hitachi, Ltd. Method and apparatus for a unified storage system
JP2009026255A (ja) * 2007-07-24 2009-02-05 Hitachi Ltd Data migration method, data migration system, and data migration program
WO2009081953A1 (ja) * 2007-12-26 2009-07-02 Canon Anelva Corporation Sputtering apparatus, sputter film forming method, and analysis apparatus
JP4674242B2 (ja) * 2008-02-05 2011-04-20 Fujitsu Limited Virtualization switch, computer system, and data copy method
US20090240880A1 (en) * 2008-03-21 2009-09-24 Hitachi, Ltd. High availability and low capacity thin provisioning
US8032730B2 (en) * 2008-05-15 2011-10-04 Hitachi, Ltd. Method and apparatus for I/O priority control in storage systems
CN101877136A (zh) * 2009-04-30 2010-11-03 国际商业机器公司 处理图形对象的方法、设备及系统
WO2011042941A1 (en) * 2009-10-09 2011-04-14 Hitachi, Ltd. Storage system and storage system communication path management method
US8639769B2 (en) * 2009-12-18 2014-01-28 International Business Machines Corporation Handling of data transfer in a LAN-free environment
CN101788889B (zh) * 2010-03-03 2011-08-10 Inspur (Beijing) Electronic Information Industry Co., Ltd. Storage virtualization system and method
JP5551245B2 (ja) * 2010-03-19 2014-07-16 Hitachi, Ltd. File sharing system, file processing method, and program
US8312234B2 (en) * 2010-04-05 2012-11-13 Hitachi, Ltd. Storage system configured from plurality of storage modules and method for switching coupling configuration of storage modules
JP5065434B2 (ja) * 2010-04-06 2012-10-31 Hitachi, Ltd. Management method and management device
EP2519887A1 (de) 2010-06-17 2012-11-07 Hitachi, Ltd. Storage system with multiple microprocessors and method for shared processing in the storage system
JP5602572B2 (ja) * 2010-10-06 2014-10-08 Fujitsu Limited Storage apparatus, data copying method, and storage system
US9021198B1 (en) * 2011-01-20 2015-04-28 Commvault Systems, Inc. System and method for sharing SAN storage
JP5455945B2 (ja) * 2011-02-14 2014-03-26 Toshiba Corporation Arbitration device, storage device, information processing device, and program
WO2012114384A1 (en) * 2011-02-25 2012-08-30 Hitachi, Ltd. Storage system and method of controlling the storage system
US9223501B2 (en) * 2012-04-23 2015-12-29 Hitachi, Ltd. Computer system and virtual server migration control method for computer system
US11055124B1 (en) * 2012-09-30 2021-07-06 EMC IP Holding Company LLC Centralized storage provisioning and management across multiple service providers
JP2015532734A (ja) * 2012-10-03 2015-11-12 Hitachi, Ltd. Management system for managing a physical storage system, method for determining a resource migration destination of a physical storage system, and storage medium
DE102012110164B4 (de) * 2012-10-24 2021-08-19 Fujitsu Ltd. Computer arrangement
US20140269739A1 (en) * 2013-03-15 2014-09-18 Unisys Corporation High availability server configuration with n + m active and standby systems
US9250809B2 (en) * 2013-03-18 2016-02-02 Hitachi, Ltd. Compound storage system and storage control method to configure change associated with an owner right to set the configuration change
JP6193373B2 (ja) 2013-03-18 2017-09-06 Hitachi, Ltd. Compound storage system and storage control method
JP2014215666A (ja) * 2013-04-23 2014-11-17 Fujitsu Limited Control system, control device, and control program
JP6209863B2 (ja) * 2013-05-27 2017-10-11 Fujitsu Limited Storage control device, storage control method, and storage control program
US8954614B1 (en) * 2013-12-06 2015-02-10 Concurrent Ventures, LLC System, method and article of manufacture for monitoring, controlling and improving storage media system performance based on temperature
US10048895B2 (en) 2013-12-06 2018-08-14 Concurrent Ventures, LLC System and method for dynamically load balancing storage media devices based on a mid-range performance level
US10235096B2 (en) 2013-12-06 2019-03-19 Concurrent Ventures, LLC System and method for dynamically load balancing storage media devices based on an average or discounted average sustained performance level
US8954615B1 (en) * 2013-12-06 2015-02-10 Concurrent Ventures, LLC System, method and article of manufacture for monitoring, controlling and improving storage media system performance based on temperature ranges
US8984172B1 (en) * 2013-12-06 2015-03-17 Concurrent Ventures, LLC System, method and article of manufacture for monitoring, controlling and improving storage media system performance based on storage media device fill percentage
US9436404B2 (en) 2013-12-06 2016-09-06 Concurrent Ventures, LLC System and method for dynamically load balancing across storage media devices having fast access rates
JP2015184895A (ja) * 2014-03-24 2015-10-22 Fujitsu Limited Storage control device, storage device, and storage control program
JP6361390B2 (ja) * 2014-09-10 2018-07-25 Fujitsu Limited Storage control device and control program
WO2016209313A1 (en) * 2015-06-23 2016-12-29 Hewlett-Packard Development Company, L.P. Task execution in a storage area network (san)
US9804789B2 (en) * 2015-06-24 2017-10-31 Vmware, Inc. Methods and apparatus to apply a modularized virtualization topology using virtual hard disks
US10126983B2 (en) 2015-06-24 2018-11-13 Vmware, Inc. Methods and apparatus to enforce life cycle rules in a modularized virtualization topology using virtual hard disks
US9928010B2 (en) 2015-06-24 2018-03-27 Vmware, Inc. Methods and apparatus to re-direct detected access requests in a modularized virtualization topology using virtual hard disks
US10101915B2 (en) 2015-06-24 2018-10-16 Vmware, Inc. Methods and apparatus to manage inter-virtual disk relations in a modularized virtualization topology using virtual hard disks
US10437477B2 (en) * 2017-07-20 2019-10-08 Dell Products, Lp System and method to detect storage controller workloads and to dynamically split a backplane
JP6878369B2 (ja) * 2018-09-03 2021-05-26 Hitachi, Ltd. Volume allocation management device, volume allocation management method, and volume allocation management program

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5889520A (en) * 1997-11-13 1999-03-30 International Business Machines Corporation Topological view of a multi-tier network
US6101508A (en) * 1997-08-01 2000-08-08 Hewlett-Packard Company Clustered file management for network resources
US20020184463A1 (en) * 2000-07-06 2002-12-05 Hitachi, Ltd. Computer system
US20020184439A1 (en) * 1998-09-28 2002-12-05 Naoki Hino Storage control unit and method for handling data storage system using thereof
US20030079019A1 (en) * 2001-09-28 2003-04-24 Lolayekar Santosh C. Enforcing quality of service in a storage network
US20030079018A1 (en) * 2001-09-28 2003-04-24 Lolayekar Santosh C. Load balancing in a storage network
US20030093541A1 (en) * 2001-09-28 2003-05-15 Lolayekar Santosh C. Protocol translation in a storage system
US20030093567A1 (en) * 2001-09-28 2003-05-15 Lolayekar Santosh C. Serverless storage services
US20030221077A1 (en) * 2002-04-26 2003-11-27 Hitachi, Ltd. Method for controlling storage system, and storage control apparatus
US20040078465A1 (en) * 2002-10-17 2004-04-22 Coates Joshua L. Methods and apparatus for load balancing storage nodes in a distributed stroage area network system
US20040257857A1 (en) * 2003-06-23 2004-12-23 Hitachi, Ltd. Storage system that is connected to external storage
US6976134B1 (en) * 2001-09-28 2005-12-13 Emc Corporation Pooling and provisioning storage resources in a storage network

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3021955B2 (ja) * 1992-05-13 2000-03-15 Fujitsu Limited Duplexed file system operation method
JPH09197367A (ja) * 1996-01-12 1997-07-31 Sony Corp Plasma-addressed display device
US6886035B2 (en) * 1996-08-02 2005-04-26 Hewlett-Packard Development Company, L.P. Dynamic load balancing of a network of client and server computer
JP3410010B2 (ja) 1997-12-24 2003-05-26 Hitachi, Ltd. Subsystem migration method and information processing system
US6067545A (en) * 1997-08-01 2000-05-23 Hewlett-Packard Company Resource rebalancing in networked computer systems
JP3726484B2 (ja) 1998-04-10 2005-12-14 Hitachi, Ltd. Storage subsystem
JP2002516446A (ja) * 1998-05-15 2002-06-04 Storage Technology Corporation Caching method for variable-size data blocks
JP2000316132A (ja) 1999-04-30 2000-11-14 Matsushita Electric Ind Co Ltd Video server
US6457109B1 (en) * 2000-08-18 2002-09-24 Storage Technology Corporation Method and apparatus for copying data from one storage system to another storage system
US6675268B1 (en) 2000-12-11 2004-01-06 Lsi Logic Corporation Method and apparatus for handling transfers of data volumes between controllers in a storage environment having multiple paths to the data volumes
US6732104B1 (en) * 2001-06-06 2004-05-04 Lsi Logic Corporatioin Uniform routing of storage access requests through redundant array controllers
US7613806B2 (en) * 2001-06-28 2009-11-03 Emc Corporation System and method for managing replication sets of data distributed over one or more computer systems
US7707304B1 (en) 2001-09-28 2010-04-27 Emc Corporation Storage switch for storage area network
US7864758B1 (en) 2001-09-28 2011-01-04 Emc Corporation Virtualization in a storage system
JP2005505819A (ja) 2001-09-28 2005-02-24 Maranti Networks Inc. Classification of packets in a storage system
JP2003162377A (ja) 2001-11-28 2003-06-06 Hitachi Ltd Disk array system and method of taking over a logical unit between controllers
JP4220174B2 (ja) * 2002-04-08 2009-02-04 Hitachi, Ltd. Content update method for a storage system
JP2003316522A (ja) * 2002-04-26 2003-11-07 Hitachi Ltd Computer system and control method of computer system
JP4100968B2 (ja) * 2002-06-06 2008-06-11 Hitachi, Ltd. Data mapping management apparatus
US7103727B2 (en) 2002-07-30 2006-09-05 Hitachi, Ltd. Storage system for multi-site remote copy
JP4214832B2 (ja) * 2002-07-30 2009-01-28 Hitachi, Ltd. Storage device system
JP2004102374A (ja) 2002-09-05 2004-04-02 Hitachi Ltd Information processing system having a data migration device
JP2004178253A (ja) * 2002-11-27 2004-06-24 Hitachi Ltd Storage device controller and control method of storage device controller
JP2004220450A (ja) 2003-01-16 2004-08-05 Hitachi Ltd Storage apparatus, introduction method thereof, and introduction program thereof
JP4409181B2 (ja) * 2003-01-31 2010-02-03 Hitachi, Ltd. Screen data generation method, computer, and program
JP2004302751A (ja) * 2003-03-31 2004-10-28 Hitachi Ltd Performance management method for a computer system, and computer system for managing storage device performance
JP2005018161A (ja) 2003-06-23 2005-01-20 Adtex:Kk Storage system, control method, and program
JP2005055963A (ja) * 2003-08-05 2005-03-03 Hitachi Ltd Volume control method, program for executing this method, and storage apparatus
JP4307202B2 (ja) * 2003-09-29 2009-08-05 Hitachi, Ltd. Storage system and storage control device
US20050257014A1 (en) * 2004-05-11 2005-11-17 Nobuhiro Maki Computer system and a management method of a computer system
JP3781378B2 (ja) * 2005-01-04 2006-05-31 Hitachi, Ltd. Storage subsystem
US7337350B2 (en) 2005-02-09 2008-02-26 Hitachi, Ltd. Clustered storage system with external storage systems

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070055840A1 (en) * 2005-09-05 2007-03-08 Yasutomo Yamamoto Control method of device in storage system for virtualization
US8214615B2 (en) 2005-09-05 2012-07-03 Hitachi, Ltd. Control method of device in storage system for virtualization
US7673110B2 (en) * 2005-09-05 2010-03-02 Hitachi, Ltd. Control method of device in storage system for virtualization
US8694749B2 (en) 2005-09-05 2014-04-08 Hitachi, Ltd. Control method of device in storage system for virtualization
US20100131731A1 (en) * 2005-09-05 2010-05-27 Yasutomo Yamamoto Control method of device in storage system for virtualization
US20100180077A1 (en) * 2005-09-20 2010-07-15 Hitachi, Ltd. Logical Volume Transfer Method and Storage Network System
US8700870B2 (en) 2005-09-20 2014-04-15 Hitachi, Ltd. Logical volume transfer method and storage network system
US8327094B2 (en) 2005-09-20 2012-12-04 Hitachi, Ltd. Logical volume transfer method and storage network system
US20070070535A1 (en) * 2005-09-27 2007-03-29 Fujitsu Limited Storage system and component replacement processing method thereof
US20070245081A1 (en) * 2006-04-07 2007-10-18 Hitachi, Ltd. Storage system and performance tuning method thereof
US7496724B2 (en) * 2006-04-07 2009-02-24 Hitachi, Ltd. Load balancing in a mirrored storage system
US8171241B2 (en) 2006-10-25 2012-05-01 Hitachi, Ltd. Computer system, computer and method for managing performance based on I/O division ratio
EP1947563A3 (de) * 2007-01-19 2011-11-09 Hitachi, Ltd. Speichersystem und Verfahren zur Speichermigration
US8762672B2 (en) 2007-01-19 2014-06-24 Hitachi, Ltd. Storage system and storage migration method
US20080177947A1 (en) * 2007-01-19 2008-07-24 Hitachi, Ltd. Storage system and storage migration method
US8402234B2 (en) 2007-01-19 2013-03-19 Hitachi, Ltd. Storage system and storage migration method
US9575685B1 (en) * 2007-06-29 2017-02-21 EMC IP Holding Company LLC Data migration with source device reuse
US8452923B2 (en) 2007-07-26 2013-05-28 Hitachi, Ltd. Storage system and management method thereof
US8151047B2 (en) 2007-07-26 2012-04-03 Hitachi, Ltd. Storage system and management method thereof
US20090031320A1 (en) * 2007-07-26 2009-01-29 Hirotaka Nakagawa Storage System and Management Method Thereof
US20090089498A1 (en) * 2007-10-02 2009-04-02 Michael Cameron Hay Transparently migrating ongoing I/O to virtualized storage
US8937965B2 (en) 2007-12-13 2015-01-20 Hitachi, Ltd. Storage system comprising function for migrating virtual communication port added to physical communication port
US20090259795A1 (en) * 2008-04-15 2009-10-15 Microsoft Corporation Policy framework to treat data
US20090259802A1 (en) * 2008-04-15 2009-10-15 Microsoft Corporation Smart device recordation
US8347046B2 (en) * 2008-04-15 2013-01-01 Microsoft Corporation Policy framework to treat data
US8156297B2 (en) * 2008-04-15 2012-04-10 Microsoft Corporation Smart device recordation
US8082411B1 (en) * 2008-04-30 2011-12-20 Netapp, Inc. Method and system for logical unit substitution
US20100082934A1 (en) * 2008-09-26 2010-04-01 Hitachi, Ltd. Computer system and storage system
WO2010082452A1 (ja) 2009-01-13 2010-07-22 Panasonic Corporation Control device and control method for elastic body actuator, and control program
US8463995B2 (en) 2010-07-16 2013-06-11 Hitachi, Ltd. Storage control apparatus and storage system comprising multiple storage control apparatuses
US8756392B2 (en) 2010-07-16 2014-06-17 Hitachi, Ltd. Storage control apparatus and storage system comprising multiple storage control apparatuses
WO2012007999A1 (en) 2010-07-16 2012-01-19 Hitachi, Ltd. Storage control apparatus and storage system comprising multiple storage control apparatuses
US8627033B2 (en) * 2010-12-20 2014-01-07 Microsoft Corporation Storage device migration and redirection
CN102685194A (zh) * 2010-12-20 2012-09-19 微软公司 存储设备迁移和重定向
US20120159100A1 (en) * 2010-12-20 2012-06-21 Microsoft Corporation Storage device migration and redirection
US9177157B2 (en) 2010-12-22 2015-11-03 May Patents Ltd. System and method for routing-based internet security
US9762547B2 (en) 2010-12-22 2017-09-12 May Patents Ltd. System and method for routing-based internet security
US10652214B2 (en) 2010-12-22 2020-05-12 May Patents Ltd. System and method for routing-based internet security
US9634995B2 (en) 2010-12-22 2017-04-25 Mat Patents Ltd. System and method for routing-based internet security
US8880810B2 (en) 2010-12-22 2014-11-04 Hitachi, Ltd. Storage system comprising multiple storage apparatuses with both storage virtualization function and capacity virtualization function
US11303612B2 (en) 2010-12-22 2022-04-12 May Patents Ltd. System and method for routing-based internet security
US11876785B2 (en) 2010-12-22 2024-01-16 May Patents Ltd. System and method for routing-based internet security
US9015410B2 (en) 2011-01-05 2015-04-21 Hitachi, Ltd. Storage control apparatus unit and storage system comprising multiple storage control apparatus units
WO2012172601A1 (en) 2011-06-14 2012-12-20 Hitachi, Ltd. Storage system comprising multiple storage control apparatus
US8489845B2 (en) 2011-06-14 2013-07-16 Hitachi, Ltd. Storage system comprising multiple storage control apparatus
US20130227047A1 (en) * 2012-02-27 2013-08-29 Fujifilm North America Corporation Methods for managing content stored in cloud-based storages
US9740435B2 (en) * 2012-02-27 2017-08-22 Fujifilm North America Corporation Methods for managing content stored in cloud-based storages
US20140223004A1 (en) * 2013-02-06 2014-08-07 Tadashi Honda Network system and information reporting method
US20140289205A1 (en) * 2013-03-25 2014-09-25 Fujitsu Limited Data transfer apparatus, system, and method
US10305784B2 (en) * 2013-08-30 2019-05-28 Nokia Solutions And Networks Oy Methods and apparatus for use in local breakout or offload scenarios
US20160212038A1 (en) * 2013-08-30 2016-07-21 Nokia Solutions And Networks Oy Methods and apparatus
US9274722B2 (en) * 2013-12-06 2016-03-01 Concurrent Ventures, LLP System, method and article of manufacture for monitoring, controlling and improving storage media system performance
US20150160887A1 (en) * 2013-12-06 2015-06-11 Concurrent Ventures, LLC System, method and article of manufacture for monitoring, controlling and improving storage media system performance
JP2015204078A (ja) * 2014-04-16 2015-11-16 富士通株式会社 ストレージ仮想化装置、ストレージ仮想化装置の制御方法及び制御プログラム
US10145752B2 (en) * 2016-01-21 2018-12-04 Horiba, Ltd. Management apparatus for measurement equipment
US20170212001A1 (en) * 2016-01-21 2017-07-27 Horiba, Ltd. Management apparatus for measurement equipment
US11403039B2 (en) 2019-09-20 2022-08-02 Fujitsu Limited Storage control device, storage device, and non-transitory computer-readable storage medium for storing determination program
US20240134526A1 (en) * 2022-10-20 2024-04-25 Dell Products L.P. Virtual container storage interface controller

Also Published As

Publication number Publication date
EP1727033A1 (de) 2006-11-29
EP2975513A1 (de) 2016-01-20
JP2006330895A (ja) 2006-12-07
CN1869914A (zh) 2006-11-29
US20130275690A1 (en) 2013-10-17
EP2246777A3 (de) 2010-12-22
US7953942B2 (en) 2011-05-31
EP2975513B1 (de) 2018-10-10
US20100274963A1 (en) 2010-10-28
US8484425B2 (en) 2013-07-09
EP2246777B1 (de) 2015-10-28
EP2246777A2 (de) 2010-11-03
US20090150608A1 (en) 2009-06-11
JP5057656B2 (ja) 2012-10-24
US20110314250A1 (en) 2011-12-22
CN101271382B (zh) 2015-06-10
US8180979B2 (en) 2012-05-15
CN101271382A (zh) 2008-09-24
CN100395694C (zh) 2008-06-18

Similar Documents

Publication Publication Date Title
US8484425B2 (en) Storage system and operation method of storage system including first and second virtualization devices
EP2399190B1 (de) Storage system and method for operating a storage system
US8683157B2 (en) Storage system and virtualization method
US7603507B2 (en) Storage system and method of storage system path control
US7152149B2 (en) Disk array apparatus and control method for disk array apparatus
US7673107B2 (en) Storage system and storage control device
US7660946B2 (en) Storage control system and storage control method
US9619171B2 (en) Storage system and virtualization method
US7827193B2 (en) File sharing system, file sharing device and file sharing volume migration method
EP1837751A2 (de) Speichersystem, Speicherbereichfreigabeverfahren und Speichervorrichtung
US20100070731A1 (en) Storage system having allocation-on-use volume and power saving function
JP2006184949A (ja) Storage control system
JP4497957B2 (ja) Storage control system
JP2009129261A (ja) Storage system and external volume connection path search method for storage system
JP5335848B2 (ja) Storage system and operation method of storage system
US7424572B2 (en) Storage device system interfacing open-system host computer input/output interfaces

Legal Events

Date Code Title Description
AS Assignment

Owner name: HITACHI, LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:INNAN, MASATAKA;MUROTANI, AKIRA;SHIMADA, AKINOBU;REEL/FRAME:018265/0061

Effective date: 20050630

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION