WO2014199506A1 - Storage Management Computer and Storage Management Method - Google Patents
Storage Management Computer and Storage Management Method
- Publication number
- WO2014199506A1 (PCT/JP2013/066419)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- storage
- setting
- storage area
- logical storage
- resource allocation
- Prior art date
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
        - G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
          - G06F3/0601—Interfaces specially adapted for storage systems
            - G06F3/0602—specifically adapted to achieve a particular effect
              - G06F3/0614—Improving the reliability of storage systems
            - G06F3/0628—making use of a particular technique
              - G06F3/0629—Configuration or reconfiguration of storage systems
                - G06F3/0631—by allocating resources to storage systems
            - G06F3/0668—adopting a particular infrastructure
              - G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
      - G06F9/00—Arrangements for program control, e.g. control units
        - G06F9/06—using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
          - G06F9/46—Multiprogramming arrangements
            - G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
              - G06F9/5005—to service a request
                - G06F9/5027—the resource being a machine, e.g. CPUs, Servers, Terminals
                  - G06F9/5044—considering hardware capabilities
                  - G06F9/5055—considering software capabilities, i.e. software resources associated or available to the machine
      - G06F11/00—Error detection; Error correction; Monitoring
        - G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
          - G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
            - G06F11/0706—the processing taking place on a specific hardware platform or in a specific software environment
              - G06F11/0712—in a virtual computing platform, e.g. logically partitioned systems
              - G06F11/0727—in a storage system, e.g. in a DASD or network based storage system
            - G06F11/0751—Error or fault detection not based on redundancy
            - G06F11/0793—Remedial or corrective actions
          - G06F11/16—Error detection or correction of the data by redundancy in hardware
            - G06F11/20—using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
        - G06F11/30—Monitoring
          - G06F11/32—Monitoring with visual or acoustical indication of the functioning of the machine
            - G06F11/324—Display of status information
              - G06F11/327—Alarm or error message display
          - G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
            - G06F11/3409—for performance assessment
              - G06F11/3433—for load management
            - G06F11/3466—Performance evaluation by tracing or monitoring
              - G06F11/3485—for I/O devices
      - G06F12/00—Accessing, addressing or allocating within memory systems or architectures
      - G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
        - G06F13/10—Program control for peripheral devices
      - G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
        - G06F2201/815—Virtual
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
      - H04L67/00—Network arrangements or protocols for supporting network services or applications
        - H04L67/01—Protocols
          - H04L67/10—Protocols in which an application is distributed across nodes in the network
            - H04L67/1097—for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
        - H04L67/50—Network services
          - H04L67/51—Discovery or management thereof, e.g. service location protocol [SLP] or web services
Definitions
- the present invention relates to a storage system management technique in a server virtualization environment.
- A virtual machine (Virtual Machine, hereinafter referred to as "VM") is stored in a storage area (data store) of a physical host computer and operates on that host computer. A technique has appeared (Non-Patent Document 1) that balances load between data stores by migrating VM data based on the I/O (Input/Output) load of each data store and its available storage capacity.
- Patent Document 1 discloses a technique for selecting a data center capable of executing an application according to the current performance state as a migration destination candidate when the application needs to be migrated.
- In Patent Document 1, this performance information is regarded as a performance requirement, and a VM migration destination configuration that allows the VM to maintain its performance requirement is selected. A single management computer manages the configuration information of the computer resources of the host computer and the performance information of the logical volumes of the storage apparatus, and selects the migration destination of an object so that VM performance can be maintained.
- However, the storage function set for the logical volume is not considered, and the computer that manages the host computer does not necessarily hold storage function setting information. In Non-Patent Document 1, the host management computer uses the VM's I/O response performance value to determine the logical volume that will be the migration destination of the VM data. Because the host management computer cannot grasp the function setting information, a resource allocation change may affect the setting of a storage function related to the VM without the change being detected.
- A management computer provided by the present invention is connected to a host computer and a storage apparatus. Its memory stores configuration information indicating the association between a plurality of logical storage areas provided by the storage apparatus and objects that are stored in one of those logical storage areas and executed by the host computer, together with function setting information indicating the storage functions set for the plurality of logical storage areas. On detecting that the host computer has changed the resource allocation to a first object, the management computer refers to the configuration information and the function setting information, acquires the storage function settings of the first logical storage area allocated to the first object before the resource allocation change, determines whether the resource allocation change has affected the provision of a storage function related to the first object, and outputs the determination result.
- the reliability and performance of the system can be improved in an environment where resource allocation to an object is changed by a host computer.
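The determination flow described above can be sketched as follows. This is a hypothetical illustration only: the table layouts, volume names, and function names are assumptions for the sketch, not taken from the patent.

```python
# Configuration information: object (VM) -> logical storage area (volume)
config_info = {"VM01": "vol-001"}

# Function setting information: volume -> storage functions set on it
function_settings = {
    "vol-001": {"remote-copy", "cache-residency"},
    "vol-008": set(),
}

def on_resource_allocation_change(vm_id, new_volume):
    """Detect a resource allocation change for a VM and determine whether
    provision of a storage function related to the VM was affected."""
    old_volume = config_info[vm_id]
    # Settings of the logical storage area allocated *before* the change
    old_settings = function_settings.get(old_volume, set())
    new_settings = function_settings.get(new_volume, set())
    lost = old_settings - new_settings   # functions no longer provided
    config_info[vm_id] = new_volume      # update configuration information
    if lost:
        return f"{vm_id}: storage function(s) {sorted(lost)} affected by move to {new_volume}"
    return f"{vm_id}: no storage function affected"

# A data-movement event migrates VM01's data to a volume without the settings
print(on_resource_allocation_change("VM01", "vol-008"))
```

The point of the sketch is that only a computer holding both tables can make this determination; a host management computer without the function setting information cannot.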
- FIG. 1 is an overall configuration diagram of a computer system according to the first embodiment.
- FIG. 2 is a diagram illustrating a logical configuration of the host computer according to the first embodiment.
- FIG. 3 is a diagram illustrating a configuration of the disk device according to the first embodiment.
- FIG. 4 is a diagram illustrating an internal configuration of the Memory of the management computer according to the first embodiment.
- FIG. 5 shows a VM event table according to the first embodiment.
- FIG. 6 shows a storage port performance measurement value table according to the first embodiment.
- FIG. 7 shows a storage media catalog performance table according to the first embodiment.
- FIG. 8 shows a VM data configuration information table according to the first embodiment.
- FIG. 9 shows a volume physical logical storage area correspondence table according to the first embodiment.
- FIG. 10 is a diagram showing a state shown in the volume physical logical storage area correspondence table according to the first embodiment.
- FIG. 11 shows a volume resource information table according to the first embodiment.
- FIG. 12 shows an external storage configuration information table according to the first embodiment.
- FIG. 13 shows a storage copy configuration information table according to the first embodiment.
- FIG. 14 shows an event influence table according to the first embodiment.
- FIG. 15 shows a storage setting application state table according to the first embodiment.
- FIG. 16 is a flowchart showing processing of the setting abnormality specifying program according to the first embodiment.
- FIG. 17 shows an example of a GUI according to the first embodiment.
- FIG. 18 shows an example of a GUI according to the first embodiment.
- FIG. 19 is a flowchart showing processing of the setting abnormality handling program according to the second embodiment.
- FIG. 20 shows an example of a GUI according to the second embodiment.
- FIG. 21 is a flowchart illustrating processing of the unnecessary setting cancellation program according to the second embodiment.
- FIG. 22 is a flowchart illustrating the processing of the setting abnormality handling program according to the third embodiment.
- In the following description, various types of information may be expressed as an "aaa table". However, these pieces of information may be expressed using a data structure other than a table; therefore, the "aaa table" or the like may be referred to as "aaa information" to indicate that it does not depend on the data structure.
- In the following description, a program may be used as the grammatical subject. Since a program performs its determined processing by being executed by a processor using memory and a communication port (communication control device), the description may equally use the processor as the subject. Processing disclosed with a program as the subject may be processing performed by a computer such as a management server or an information processing apparatus, and part or all of a program may be realized by dedicated hardware.
- Various programs may be installed in each computer from a program distribution server or from a computer-readable storage medium.
- Each computer has an input/output device. Examples of input/output devices include a display, a keyboard, and a pointing device, but other devices may be used. As an alternative, a serial interface or an Ethernet interface may serve as the input/output device: a display computer having a display, a keyboard, or a pointing device is connected to the interface, display information is transmitted to the display computer, and input information is received from it, so that the display computer performs the display and accepts input in place of the input/output device.
- FIG. 1 shows the overall configuration of a computer system according to the first embodiment.
- the computer system of this embodiment includes a host computer 1000 and a storage device 3000.
- the host computer 1000 and the storage apparatus 3000 are connected via a data network 1500.
- the computer system further includes a host management computer 2000 and a management computer 2500.
- the host computer 1000, the host management computer 2000, the management computer 2500, and the storage device 3000 are connected via a management network 1550.
- the data network 1500 is, for example, a SAN (Storage Area Network), but may be an IP (Internet Protocol) network or a data communication network other than these.
- the management network 1550 is an IP network, for example, but may be a data communication network such as a SAN. Further, the data network 1500 and the management network 1550 may be the same network.
- the host computer 1000, the host management computer 2000, and the management computer 2500 may be the same computer.
- the host computer 1000 includes a control device such as a CPU (Central Processing Unit) 1010, a storage device such as a Memory 1100, a management interface (MI / F) 1210, and a communication interface (CI / F) 1200.
- the host computer 1000 may have an input / output device (keyboard, display device, etc.).
- the CPU 1010 executes programs stored in the Memory 1100. Hereinafter, every CPU reads and executes programs stored in the Memory connected to that CPU.
- the M-I / F 1210 is an interface with the management network 1550 and transmits / receives data and control commands to / from the storage device 3000, the host management computer 2000, and the management computer 2500.
- the C-I / F 1200 is an interface with the data network 1500, and transmits / receives data and control commands to / from the storage apparatus 3000.
- the host management computer 2000 includes a CPU 2010, a display device 2050 (display unit) such as an LCD (Liquid Crystal Display), a Memory 2100, and an I / F 2200. Note that the host management computer 2000 may have an input device (such as a keyboard).
- the I / F 2200 is an interface with the management network 1550 and transmits / receives data and control commands to / from the storage apparatus 3000, the host computer 1000, and the management computer 2500.
- the VM management program is a program for managing configuration information of the VM 1001 described later, and transmits / receives various types of information to / from the management computer 2500 via the I / F.
- the VM management table stores performance information and configuration information of the VM 1001.
- the performance information of the VM 1001 is, for example, a response time that is periodically measured and a write data amount (MB / Sec) to a storage area per unit time.
- a VM / storage information acquisition program of the management computer 2500 described later acquires various types of information from the VM management table via the I / F.
- the management computer 2500 includes a CPU 2510, a display device 2550 (display unit) such as an LCD, a Memory 2600, and an I / F 2700.
- the management computer 2500 may have an input device (such as a keyboard).
- the I / F 2700 is an interface with the management network 1550 and transmits / receives data and control commands to / from the storage apparatus 3000, the host computer 1000, and the host management computer 2000. Details of the program stored in the Memory 2600 will be described later.
- the storage apparatus 3000 includes a disk controller 3100 and a disk device 3500.
- the disk controller 3100 includes a CPU 3110, a Memory 3300, an MI / F 3001, a storage port HI / F (Host-Interface) 3101, and a DI / F (Disk-Interface) 3050.
- the M-I / F 3001 is an interface with the management network 1550 and transmits / receives data and control commands to / from the host computer 1000, the host management computer 2000, and the management computer 2500.
- the HI / F 3101 is an interface with the data network 1500 and transmits / receives data and control commands to / from the host computer 1000.
- the DI / F 3050 transmits / receives data and control commands to / from the disk device 3500.
- the disk device 3500 has a plurality of physical storage media 3501.
- In the Memory 3300, a storage configuration management program, a storage performance management program, a storage setting program, and a storage configuration/performance table (not shown) are stored.
- the storage configuration management program is a program for managing the configuration information of the storage device 3000, and communicates with a VM / storage information acquisition program 2610 of the management computer 2500 described later to transmit and receive various types of information.
- the storage performance management program is a program for managing performance information of the storage device 3000.
- the performance information of the storage device 3000 is, for example, IOPS (Input Output Per Second) for each page that is periodically measured and the amount of data written to the storage area (MB / Sec).
- the storage performance management program communicates with a VM / storage information acquisition program 2610 of the management computer 2500, which will be described later, and transmits and receives various types of information.
- the storage setting program is a program for executing various settings of the storage device 3000.
- various settings of the storage device 3000 include settings for improving access performance from the host computer 1000, such as securing a cache area that temporarily stores data read from and written to the logical volume 3510 and the physical resource 3521, and securing a processor that executes read/write processing for the logical volume 3510.
- Each program stores information to be managed in a storage configuration / performance table.
- In this embodiment, the host management computer 2000 manages the host computer 1000; however, the host computer 1000 may itself execute the processing described as being performed by the host management computer 2000, in which case the host management computer 2000 may be read as the host computer 1000. Further, the data network 1500 may include a switch such as an FC (Fibre Channel) switch connected to the C-I/F 1200 of the host computer 1000 and the D-I/F 3050 of the storage device 3000, and data and control commands may be transmitted and received through it.
- FIG. 2 shows the logical configuration of the host computer 1000.
- the host computer 1000 has a hypervisor (hereinafter referred to as “HV”) 1002 that can logically generate a VM 1001 and execute the VM 1001.
- the HV 1002 can control a plurality of VMs 1001 at a time.
- Each of the plurality of VMs 1001 can execute an application as if it were a stand-alone physical computer.
- the host management computer 2000 or the host computer 1000 issues a resource allocation command for the VM 1001 using, for example, the technique described in Non-Patent Document 1.
- a resource allocation change of the storage device 3000 such as a logical volume is determined based on the I / O amount and the logical volume capacity, and is instructed to the management computer 2500 or the storage device 3000.
- the host computer 1000 is instructed to change the resource allocation of the host computer 1000 based on the performance information of the VM 1001.
- the change in resource allocation of the host computer 1000 includes, for example, addition of a CPU allocated to the VM 1001 and migration of a VM to another host computer 1000.
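The two kinds of resource allocation changes described above (storage-side changes instructed to the management computer 2500 or the storage device 3000, and host-side changes instructed to the host computer 1000) might be modeled as below. The thresholds and function names are illustrative assumptions only, not values from the patent or Non-Patent Document 1.

```python
def plan_reallocation(vm, io_amount_mb_s, volume_used_gb, cpu_busy_pct):
    """Return resource allocation changes to instruct, split by target:
    'storage' actions go to the management computer / storage apparatus,
    'host' actions go to the host computer (hypothetical thresholds)."""
    actions = []
    # Storage-side: decided from the I/O amount and logical volume capacity
    if io_amount_mb_s > 100 or volume_used_gb > 500:
        actions.append(("storage", f"migrate data of {vm} to another logical volume"))
    # Host-side: decided from VM performance information, e.g. CPU load
    if cpu_busy_pct > 90:
        actions.append(("host", f"add a CPU to {vm} or migrate {vm} to another host computer"))
    return actions

print(plan_reallocation("VM01", io_amount_mb_s=120, volume_used_gb=30, cpu_busy_pct=95))
```

The sketch only makes explicit that the host management side issues both kinds of instructions, which is why the management computer must later check their effect on storage function settings.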
- FIG. 3 is a schematic diagram showing the configuration of a logical volume 3510 created using the storage device 3000.
- the storage apparatus 3000 creates a logical volume 3510 using a plurality of physical storage media 3501 in the disk device 3500.
- the storage apparatus 3000 may create a plurality of logical volumes 3510.
- the plurality of logical volumes 3510 may be the same type or different types.
- the pool 1120 has one or more physical resources 3521 or one or more virtual resources 3522.
- a logical device on a RAID (Redundant Array of Independent Disks) constructed using a plurality of physical storage media 3501 is defined as a physical resource 3521.
- a logical device created in an external storage apparatus (external connection storage system) that is another storage apparatus connected to the storage apparatus 3000 is defined as a virtual resource 3522.
- the pool 1120 is a group of logical devices for collectively managing physical resources 3521 or virtual resources 3522 from a certain management viewpoint, for example by RAID type. In FIG. 3, physical resources 3521 and virtual resources 3522 are mixed in the pool 1120, but a pool 1120 including only physical resources 3521 or only virtual resources 3522 may also be used.
- the plurality of physical storage media 3501 are, for example, HDDs such as SATA (Serial Advanced Technology Attachment) and SAS (Serial Attached SCSI) drives, SSDs (Solid State Drives), and the like.
- a plurality of types of media having different performances may be allocated to the plurality of physical resources 3521 and the virtual resources 3522, respectively.
- the storage device 3000 may have a plurality of pools 1120.
- the normal logical volume 3510A is a logical volume created using a physical resource 3521 that uses the physical storage medium 3501.
- the externally connected volume 3510B is a logical volume created using a virtual resource 3522, with the actual storage area existing in the external storage apparatus.
- the Thin Provisioning volume 3510C is a logical volume whose capacity can be dynamically expanded. Thin provisioning is a technology that makes it possible to allocate a part of a physical storage area (hereinafter referred to as a segment) to a logical volume, dynamically expand the storage area, and effectively use the storage area.
- the physical resource 3521 or the virtual resource 3522 provides a segment to be allocated to the Thin Provisioning volume 3510C.
- the capacity of the Thin Provisioning volume 3510C can be dynamically expanded by allocating segments to it from the physical resources 3521 or virtual resources 3522 included in the pool 1120.
- the Dynamic Thin Provisioning volume 3510D is a logical volume whose capacity can be expanded dynamically in the same manner as the Thin Provisioning volume 3510C. Furthermore, after a segment is allocated to the Dynamic Thin Provisioning volume 3510D, that segment can be dynamically exchanged for another segment with different responsiveness and reliability according to the access status of the volume.
- the Thin Provisioning volume 3510C and the Dynamic Thin Provisioning volume 3510D are allocated segments from both the physical resource 3521 and the virtual resource 3522, but segments may be allocated from only one of them.
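Thin provisioning as described above can be sketched minimally: a write allocates a segment from the pool only the first time its region of the volume is touched. The segment size, class names, and pool capacity are illustrative assumptions, not values from the patent.

```python
SEGMENT_MB = 42  # hypothetical segment size

class Pool:
    """Pool 1120: physical/virtual resources providing free segments."""
    def __init__(self, free_segments):
        self.free_segments = free_segments

class ThinProvisioningVolume:
    """Volume 3510C: capacity grows only when a write needs a new segment."""
    def __init__(self, pool):
        self.pool = pool
        self.allocated = set()  # indices of logical segments already backed

    def write(self, offset_mb):
        index = offset_mb // SEGMENT_MB
        if index not in self.allocated:      # first write to this region
            if self.pool.free_segments == 0:
                raise RuntimeError("pool exhausted")
            self.pool.free_segments -= 1     # dynamically expand the volume
            self.allocated.add(index)

    @property
    def used_mb(self):
        return len(self.allocated) * SEGMENT_MB

pool = Pool(free_segments=10)
vol = ThinProvisioningVolume(pool)
vol.write(0)     # allocates segment 0
vol.write(10)    # same segment, no new allocation
vol.write(100)   # allocates a second segment
print(vol.used_mb, pool.free_segments)  # 84 8
```

A Dynamic Thin Provisioning volume 3510D would additionally remap an already-allocated index to a segment of a different media class based on access statistics, which this sketch omits.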
- FIG. 4 shows the configuration of information stored in the Memory 2600 of the management computer 2500.
- In the Memory 2600, programs including a VM/storage information acquisition program 2610, a VM event management program 2640, a copy configuration simulation program 2650, and a setting abnormality specifying program 2670 are stored.
- the Memory 2600 also stores information tables including a VM event table 2700, a storage port catalog performance table 2710, a storage media catalog performance table 2730, a VM data configuration information table 2760, a volume physical logical storage area correspondence table 2800, a volume resource information table 2900, an external storage configuration information table 2950, a storage copy configuration information table 2960, an event influence table 2970, and a storage setting application state table 2980.
- FIG. 5 shows the structure of the VM event table 2700.
- The VM event table 2700 is a table for managing event information related to the VM 1001 specified by the host management computer 2000. This table has an event ID 2701 indicating the target event, an event type 2702 indicating the type of the event, a VMID 2703 indicating the target VM, an event content 2704 indicating the content of the target event, and an occurrence time 2705 indicating the occurrence time of the event. Furthermore, this table defines the correspondence between the information contained in it.
- The event type 2702 indicates, for example, a type called data movement, which means an event in which VM data moves to an arbitrary storage area.
- The event type 2702 also indicates, for example, a type called host migration, which means an event in which a VM moves to an arbitrary host computer 1000.
- In a host migration, the storage area in which the VM data is stored does not change before and after the migration, while computer resources such as the CPU and memory used by the VM do change.
- The event content 2704 indicates, for example, detailed event information including resource ID information of resources other than the VM related to the event.
- For example, detailed event information such as "migration from logical volume 001 to logical volume 008", including the identifiers of the VM migration source and VM migration destination data storage logical volumes, is stored in the event content 2704.
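As an illustration (not part of the patent text), one row of the VM event table 2700 for such a data-movement event, together with a parser that extracts the source and destination volume identifiers from the event content 2704, might be sketched as follows; the record shape, field names, and the fixed phrase format are all assumptions:

```python
# Hypothetical record shaped like one row of the VM event table 2700.
# Field names and values are illustrative only.
vm_event = {
    "event_id": 1,
    "event_type": "data movement",
    "vm_id": "VM01",
    "event_content": "migration from logical volume 001 to logical volume 008",
    "occurrence_time": "2013-06-14T10:00:00",
}

def parse_migration_volumes(event):
    """Extract (source, destination) volume identifiers, assuming the
    event content follows '... from logical volume <src> to logical
    volume <dst>'."""
    words = event["event_content"].split()
    src = words[words.index("from") + 3]
    dst = words[words.index("to") + 3]
    return src, dst
```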
- FIG. 6 shows the configuration of the storage port catalog performance table 2710.
- The storage port catalog performance table 2710 is a table for managing catalog performance information of the HI/F 3101, which is a storage port of the storage apparatus 3000.
- This table has a storage ID 2711 indicating the storage device 3000 having the target storage port, a port ID 2712 indicating the target storage port, and a high load determination criterion 2714, which is the write data transfer amount (MB/s) at which the response time of the target storage port starts to increase as the write data transfer amount grows.
- Furthermore, this table defines the correspondence between the information contained in it. For example, the value shown in the catalog or manual of the target storage device can be used as the high load determination criterion 2714.
- FIG. 7 shows the configuration of the storage media catalog performance table 2730.
- the storage media catalog performance table 2730 is a table for managing the catalog performance information of the media of resources constituting the logical volume 3510 of the storage device 3000.
- a resource constituting the logical volume 3510 is a physical resource 3521 or a virtual resource 3522.
- The storage media catalog performance table 2730 has a storage ID 2731 indicating the storage device 3000 having the target resource, a resource type 2732 of the target resource, and a Write rate 2734 that is the write speed (MB/s) of the target resource. Furthermore, this table defines the correspondence between the information contained in it.
- FIG. 8 shows the configuration of the VM data configuration information table 2760.
- the VM data configuration information table 2760 is a table for managing the association between the VM 1001 and the logical volume 3510 in which the actual data of the VM 1001 is stored.
- The VM data configuration information table 2760 has an HVID 2761 indicating the HV 1002 holding the target VM 1001, a VMID 2762 indicating the target VM 1001, a storage ID 2767 indicating the target storage device 3000 storing the data of the target VM 1001, a storage port ID 2768 indicating the storage port of the target storage device 3000, a logical Vol ID 2769 indicating the target logical volume 3510, target LBA information 2770 indicating the range of LBA (Logical Block Addressing) in the logical volume 3510 targeted by the target VM, a write data amount 2771 indicating the amount of data written to the target logical volume, and a last update time 2772 indicating the time when the configuration information of the target VM was last updated.
- Furthermore, this table defines the correspondence between the information contained in it.
- the entry whose VMID 2762 is null indicates a state in which the logical volume 3510 is allocated to the host computer 1000 having the HV 1002, but the logical volume 3510 is not used by any VM 1001.
- An entry whose LBA 2770 is null indicates a state in which a logical volume 3510 is allocated to the host computer 1000 but a page is not yet allocated to the logical volume 3510.
- FIG. 9 shows the configuration of the volume physical logical storage area correspondence table 2800.
- The volume physical logical storage area correspondence table 2800 is a table that manages the correspondence between the LBA in the logical volume 3510 provided by the storage apparatus 3000 and the LBA in the physical resource 3521 or the virtual resource 3522. This table has a storage ID 2807 indicating the target storage device 3000, a logical Vol ID 2801 indicating the target logical volume 3510, a page ID 2802 indicating the target page of the target logical volume 3510, LBA information 2803 indicating the LBA range of the target page, a pool ID 2804 indicating the pool 1120 to which the resource used in the target page belongs, a resource ID 2805 indicating the resource used in the target page, and LBA information 2806 indicating the range of the LBA being used.
- The write data amount 2771 stores the write data amount for the VM storage area acquired from the host management computer 2000.
- FIG. 10 is a schematic diagram showing the state of the logical volume 3510 and the pool 1120 managed by the volume physical logical storage area correspondence table 2800.
- In the pool PA, which is the pool 1120 whose pool ID 2804 is "1", there are a PRA, which is the physical resource 3521 whose resource ID 2805 is "0", a PRB, which is the physical resource 3521 whose resource ID 2805 is "1", and a PRC, which is the physical resource 3521 whose resource ID 2805 is "2".
- An LVA, which is the logical volume 3510 whose logical Vol ID 2801 is "0", is created.
- PRA addresses 1000 to 1999 correspond to LVA addresses 0 to 999.
- PRB addresses 0 to 999 correspond to LVA addresses 1000 to 1999.
- PRC addresses 0 to 1999 correspond to LVA addresses 2000 to 3999.
- No physical resources 3521 are allocated from LVA address 4000 to address 19999999.
- When a Dynamic Thin Provisioning volume 3510D is used as the logical volume 3510, the correspondence between the logical volume 3510 and the pool 1120 is dynamically changed.
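The FIG. 10 mapping just described can be sketched as a simple lookup table in Python; the tuple layout (logical range, resource name, physical start address) is an assumption made for illustration, not the patent's table format:

```python
# Sketch of the FIG. 10 example: logical volume LVA address ranges
# mapped onto physical resources PRA/PRB/PRC in pool PA.
LVA_MAP = [
    # (logical_start, logical_end, resource, physical_start)
    (0,    999,  "PRA", 1000),
    (1000, 1999, "PRB", 0),
    (2000, 3999, "PRC", 0),
]

def resolve(lba):
    """Translate a logical address to (resource, physical address),
    or None when no physical resource is allocated to that range yet."""
    for lstart, lend, resource, pstart in LVA_MAP:
        if lstart <= lba <= lend:
            return resource, pstart + (lba - lstart)
    return None
```

A read or write above LVA address 3999 finds no entry, which corresponds to the unallocated region of the thin-provisioned volume.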
- FIG. 11 shows the configuration of the volume resource information table 2900.
- the volume resource information table 2900 is a table for managing information on the physical resource 3521 and the virtual resource 3522.
- This table has a storage ID 2901 indicating the target storage device 3000, a pool ID 2902 indicating the pool 1120 to which the target resource belongs, a resource ID 2903 indicating the target resource, the media type of the target resource, such as SATA, SSD, or FC (Fibre Channel), and a resource configuration 2905 indicating whether the target resource is a physical resource 3521 or a virtual resource 3522.
- this table defines the correspondence between the information contained in it.
- The resource configuration 2905 in the volume resource information table 2900 indicates a physical resource 3521, which is a resource inside the target storage apparatus 3000, as "internal", and a virtual resource 3522, which is a resource in an external storage apparatus connected to the target storage apparatus 3000, as "external".
- FIG. 12 shows the configuration of the external storage configuration information table 2950.
- the external storage configuration information table 2950 is a table for managing the configuration of the virtual resource 3522.
- This table has a storage ID 2951 indicating the target storage device 3000, a resource ID 2952 indicating the target virtual resource 3522, a port ID 2953 indicating the HI/F 3101 that is the storage port used for external connection in the target storage device 3000, an external storage ID 2954 indicating the target external storage device, an external storage resource ID 2955 indicating the physical resource 3521 in the target external storage device corresponding to the target virtual resource 3522, and an external storage port ID 2956 indicating the HI/F 3101 that is the port used for external connection in the external storage device.
- Furthermore, this table defines the correspondence between the information contained in it.
- FIG. 13 shows the configuration of the storage copy configuration information table 2960.
- The storage copy configuration information table 2960 is a table for managing the copy configuration of the storage apparatus 3000. This table has an ID 2961 indicating a copy pair, a P-storage ID 2962 indicating the primary storage device of the target copy pair, a PVOLID 2962 indicating the primary logical volume of the target copy pair, a P-Pool ID 2964 indicating the pool in the primary storage device among the pools constituting the target copy pair, a copy type 2965 indicating the type of the target copy, an S-storage ID 2966 indicating the secondary storage device of the target copy pair, an SVOLID 2967 indicating the secondary logical volume of the target copy pair, an S-Pool ID 2968 indicating the pool in the secondary storage device among the pools constituting the target copy pair, a copy line bandwidth 29600 indicating the remote copy line bandwidth when the copy type of the target copy pair is asynchronous remote copy, and an RPO 2969 indicating the target recovery point when the copy type of the target copy pair is asynchronous remote copy.
- The RPO (Recovery Point Objective) represents a target value indicating how close to the data (state) at the time of the failure operation can be resumed when restoring a computer system in which a failure or disaster occurred to its original state.
- The pool indicated by the P-Pool ID 2964 is the same as a pool indicated by the pool ID 2902 in the volume resource information table 2900, and its use differs depending on the copy type 2965.
- When the copy type is asynchronous remote copy, the pool is used as a buffer area for temporarily storing write data from the host computer 1000; when the copy type is Snapshot, it is used as a journal data storage area.
- The pool indicated by the S-Pool ID 2968 is likewise the same as a pool indicated by the pool ID 2902 in the volume resource information table 2900, and its use differs depending on the copy type 2965.
- When the copy type is asynchronous remote copy, the pool is used as a buffer area for temporarily storing the write data from the host computer 1000 that is transferred from the primary storage apparatus.
- Although the value of the copy line bandwidth 29600 is updated by the VM/storage information acquisition program 2610, a value input by the user to the management computer 2500 as copy line bandwidth information when the copy setting is performed may be used instead.
- Although the value of the RPO 2969 is updated by the VM/storage information acquisition program 2610, the copy configuration simulation program 2650 may instead be given the configuration information of the target copy pair and instructed to calculate the RPO, and the resulting value may be used.
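The patent does not give a concrete RPO formula. As a rough, hedged illustration of how a simulation like the copy configuration simulation program 2650 could relate the write data amount, the copy line bandwidth, and the buffer (journal) capacity for asynchronous remote copy, one might reason as follows; the model and all parameter names are assumptions:

```python
def estimate_rpo(write_rate_mbs, line_bandwidth_mbs, buffer_capacity_mb):
    """Rough RPO estimate (seconds) for asynchronous remote copy.

    Assumed model (not from the patent): untransferred journal data
    accumulates at (write rate - line bandwidth); the worst-case data
    loss window is the time until the buffer fills plus the time to
    drain a full buffer over the line.
    """
    if write_rate_mbs <= line_bandwidth_mbs:
        # The line keeps up; the lag is bounded by draining the buffer.
        return buffer_capacity_mb / line_bandwidth_mbs
    backlog_rate = write_rate_mbs - line_bandwidth_mbs
    time_to_fill = buffer_capacity_mb / backlog_rate
    return time_to_fill + buffer_capacity_mb / line_bandwidth_mbs
```

The model makes visible why an increased post-event write data amount can violate an RPO requirement: once the write rate exceeds the line bandwidth, the estimate grows with the backlog.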
- FIG. 14 shows the configuration of the event influence table 2970.
- the event influence table 2970 is a table for managing the influence of the event stored in the VM event table 2700 on the VM 1001 of the host computer 1000.
- The influence on the VM 1001 means, for example, that as a result of the event the data has been moved to a logical volume that does not have the same storage setting as the one set for the pre-migration logical volume, or that the RPO is not satisfied, so that a condition required for the VM 1001 is no longer met.
- This table has a related event ID 2971 indicating the event that is the cause of the target influence, a target resource ID 2972 indicating the target resource, and a post-event write data amount 2974 indicating the write data amount after the target event.
- Furthermore, this table defines the correspondence between the information contained in it.
- The post-event write data amount 2974 is one index for measuring the influence on the target resource due to the event. For an event on the host computer side, such as a host migration or CPU addition, the VM/storage information acquisition program 2610 of the management computer 2500 stores a value acquired after the target event from the VM management table of the host management computer 2000. For data movement, a value calculated by the management computer 2500 with reference to the event content 2704 of the VM event table 2700 may be used. An example of the calculation method will be described later.
- FIG. 15 shows the configuration of the storage setting application state table 2980.
- The storage setting application state table 2980 is a table for managing setting information of storage functions applied to the logical volume 3510 of the storage apparatus 3000. This table has an ID 2981 indicating the identifier of the target storage setting, a target storage ID 2982 indicating the storage device 3000 having the logical volume 3510 to which the target storage setting is applied, a target logical Vol ID 2983 indicating the target logical volume, a setting type 2984 indicating the type of the target storage setting, and an application condition 2985 indicating a condition that satisfies the purpose of the target storage setting.
- The setting type 2984 stores, for example, a value of "asynchronous remote copy" indicating that asynchronous remote copy is set, "synchronous local copy" indicating that synchronous local copy is set, "WORM" indicating that a WORM (Write Once Read Many) access right is set for the data in the target logical volume, "Encryption" indicating that the data in the target logical volume is encrypted, or "Snapshot" indicating that a setting for generating a snapshot of the target logical volume is made.
- As the application condition 2985, a condition for realizing a predefined RPO requirement in the storage configuration already set for the target logical volume, as shown in the storage copy configuration information table 2960, is used. For example, a condition using the value of the amount of data written from the host computer 1000 to the target logical volume may be stored.
- the information in this table is preset by an administrator, for example.
- the storage function is set to the logical volume at the request of the host computer administrator to the storage device administrator or at the discretion of the storage device administrator.
- the storage function is set and managed by the existing technology of the management computer 2500.
- the function corresponding to local copy may be realized by the function of the host management computer 2000.
- the VM / storage information acquisition program 2610 acquires configuration information and performance information about the VM 1001 from the host management computer 2000. Further, configuration information and performance information of the storage device 3000 are acquired from the storage device 3000. The VM / storage information acquisition program 2610 associates the acquired configuration information of each storage device 3000 with the configuration information and performance information of each VM 1001, and stores them in the VM data configuration information table 2760.
- The VM/storage information acquisition program 2610 also updates the storage physical configuration information and other information stored in the volume physical logical storage area correspondence table 2800, the volume resource information table 2900, the external storage configuration information table 2950, the storage copy configuration information table 2960, and the storage setting application state table 2980.
- the acquired storage performance information is stored in the storage port catalog performance table 2710 and the storage media catalog performance table 2730.
- the VM / storage information acquisition program 2610 may periodically execute the above process, or may be executed when a user operation is received by the input device.
- The VM event monitoring program 2640 acquires the event information of the VM 1001 from the host management computer 2000, stores the acquired event information in the VM event table 2700 and the event influence table 2970, and further notifies the setting abnormality specifying program 2670 of the acquired event information.
- When the VM event monitoring program 2640 acquires the event information of the VM 1001 from the host management computer 2000, it may also acquire the performance information of the VM 1001 targeted by the event from the VM management table of the host management computer 2000 and update the post-event write data amount in the event influence table 2970.
- The copy configuration simulation program 2650 calculates an RPO from the configuration information of a specified copy pair, and calculates the configuration necessary to achieve an RPO from partial configuration information of a specified copy pair and that RPO.
- As the RPO calculation method, a method may be used that calculates the RPO from the storage configuration information other than the acquired RPO and the information in the storage port catalog performance table 2710, the storage media catalog performance table 2730, the VM data configuration information table 2760, the volume physical logical storage area correspondence table 2800, the volume resource information table 2900, and the external storage configuration information table 2950, that is, from the performance information of the storage media and storage ports constituting the copy pair, the performance information of the remote copy line bandwidth, the capacity information of the buffer area, and the amount of data written by the VM 1001.
- The configuration necessary for achieving the RPO may also be obtained by a method for calculating the copy configuration from the RPO as described in the above document.
- The setting abnormality specifying program 2670 acquires the event information notified from the VM event monitoring program 2640 of the management computer 2500, and performs processing for notifying the influence of the event on the storage settings of the VM 1001 based on the acquired event information.
- FIG. 16 shows the processing of the setting abnormality identification program 2670.
- the setting abnormality specifying program 2670 determines whether or not an event indicating data movement is notified from the VM event monitoring program 2640 (S4000).
- If the result of S4000 is Yes, the setting abnormality specifying program 2670 refers to the VM event table 2700 and acquires the identification information of the VM migration source and VM migration destination volumes (S4005).
- Next, the setting abnormality specifying program 2670 calculates a predicted performance change value caused by the event based on the acquired volume identification information, and stores it in the event influence table 2970 as the post-event write data amount 2974, together with the related event ID 2971 and the target resource ID 2972 (S4015).
- each resource and its type constituting the post-event data storage volume indicated by the acquired identification information are specified.
- Then, the Write rate 2734 in the storage media catalog performance table 2730 is referenced to calculate the write rate expected for the volume. For example, for each resource constituting the volume, the Write rate 2734 of its resource type is multiplied by the ratio of that resource's LBA amount to the LBA amount of the entire volume, and these values are summed over all the resources constituting the volume to obtain the expected value of the Write rate of the volume.
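The capacity-weighted sum just described can be sketched as follows; the list-of-pairs input shape is an assumption for illustration, not the patent's data structure:

```python
def expected_volume_write_rate(resources):
    """Expected write rate (MB/s) of a volume: each resource's catalog
    Write rate weighted by its share of the volume's total LBA amount.

    `resources` is a list of (write_rate_mbs, lba_amount) pairs
    (an assumed input shape).
    """
    total_lba = sum(lba for _, lba in resources)
    return sum(rate * (lba / total_lba) for rate, lba in resources)
```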
- When a resource constituting the volume is a virtual resource 3522, the external storage port ID 2956 used is acquired by referring to the external storage configuration information table 2950.
- Then, the storage port catalog performance table 2710 is referenced to acquire the high load determination criterion 2714 for the acquired storage port ID. The high load determination criterion and the Write rate of the resource's resource type are compared, the smaller value is adopted, and the value obtained by multiplying the adopted value by the ratio of the resource's LBA amount to the LBA amount of the entire volume may be used in calculating the expected value of the Write rate of the volume.
- In addition, the value of the high load determination criterion 2714 of the port used when the host computer accesses the storage area after the migration is acquired.
- This high load determination criterion 2714 may be compared with the above-mentioned expected value of the Write rate of the post-migration volume, and the smaller value may be used as the post-event write data amount.
- the catalog performance values of the port and the resource are compared because the data writing performance is affected by the performance value of the resource with the lowest performance that becomes a bottleneck among the related resources.
- the write data amount of the target VM before the event may be compared with the port or volume performance value adopted above, and the smaller one may be stored as the write data amount after the event. This is because when the expected performance value of the volume or port after the event is sufficiently high, the write data amount of the target VM can be adopted as it is.
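The bottleneck reasoning above reduces to taking the minimum of the candidate values. A minimal sketch (parameter names are illustrative, all values in MB/s):

```python
def post_event_write_amount(vm_write_amount, volume_write_rate,
                            port_high_load_criterion):
    """Adopt the smallest of the pre-event VM write amount, the expected
    volume write rate, and the port's high load determination criterion,
    since the slowest related resource becomes the bottleneck."""
    return min(vm_write_amount, volume_write_rate, port_high_load_criterion)
```

When the volume and port are fast enough, the VM's own write amount is adopted unchanged, matching the explanation above.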
- the write data amount of the target VM before the event is acquired with reference to the VM data configuration information table 2760 based on the VM migration source volume identification information acquired in S4005.
- If the VM data configuration information table 2760 has already been updated before this processing and the information from before the VM migration cannot be obtained, the process may be interrupted and an error message output to the display device.
- the VM / storage information acquisition program 2610 of the management computer 2500 acquires the write data amount after the target event from the VM management table of the host management computer 2000, and stores the value in the event influence table 2970.
- Next, the setting abnormality specifying program 2670 refers to the VM event table 2700 and the storage setting application state table 2980 to determine whether a storage setting has been made on the logical volume that stored the VM data before the event (S4020). If the result of S4020 is No, the process ends.
- If the result of S4020 is Yes, the setting abnormality specifying program 2670 refers to the VM event table 2700 and the storage setting application state table 2980 to determine whether the same storage setting is set on the logical volume in which the VM data after the event is stored (S4025). If the result of S4025 is No, the setting abnormality specifying program 2670 notifies the influence on the VM caused by the storage setting applied to the VM being canceled by the event (S4035). In S4035, the setting abnormality specifying program 2670 may send the notification to the host management computer 2000 so that the host management computer 2000 outputs it to the display device 2050, may output it to the display device 2550 of the management computer 2500, or may output it to both display devices. Details of the notification contents of the influence on the VM will be described later.
- If the result of S4025 is Yes, the setting abnormality specifying program 2670 refers to the event influence table 2970 and the storage setting application state table 2980 to determine whether the post-event write data amount satisfies the application condition of the storage setting (S4030). If the result of S4030 is Yes, the setting abnormality specifying program 2670 ends the process.
- If the result of S4030 is No, the setting abnormality specifying program 2670 refers to the VM data configuration information table 2760, the event influence table 2970, and the storage setting application state table 2980, and searches for other VMs to which the storage setting is applied after the event (S4040).
- Then, the setting abnormality specifying program 2670 notifies the influence on the VMs to which the storage setting is applied caused by the change in the write data amount (S4045).
- The setting abnormality specifying program 2670 may send the notification to the host management computer 2000 and cause the host management computer 2000 to display it on the display device 2050, may display it on the display device 2550 of the management computer 2500, or may display it on both display devices.
- The influence on the VM is, for example, information on the increase or decrease in the RPO due to the change in the amount of write data caused by the change in resource allocation to the VM, such as how much the RPO increases as a result of the increase in the amount of write data.
- Note that, instead of S4000, the management computer 2500 may perform the processing of FIG. 16 upon receiving an instruction.
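As a compact illustration of the S4000 to S4045 decision flow described above, the branches can be sketched as a single function; the boolean parameters stand in for the table lookups, and the return strings are illustrative, not the patent's notification contents:

```python
# Condensed sketch (assumed control flow) of the setting abnormality
# identification steps S4000-S4045.
def identify_setting_abnormality(event_type, source_has_setting,
                                 dest_has_same_setting, condition_satisfied):
    if event_type != "data movement":     # S4000
        return "no action"
    if not source_has_setting:            # S4020
        return "no action"
    if not dest_has_same_setting:         # S4025 No -> S4035
        return "notify: storage setting canceled by event"
    if condition_satisfied:               # S4030 Yes
        return "no action"
    # S4040/S4045: other VMs sharing the setting are also affected
    return "notify: application condition no longer satisfied"
```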
- FIG. 17 is an example of a graphical user interface (GUI) that outputs the notification in S4035 of the setting abnormality identification program 2670.
- FIG. 18 is an example of a graphical user interface (GUI) that outputs the notification of S4045 of the setting abnormality identification program 2670.
- According to the management computer provided by this embodiment, when the resource allocation of a VM is changed by the host computer, the influence of the change on the provision of the storage functions related to the VM can be notified.
- the setting abnormality specifying program 2670 of the management computer 2500 receives the VM event from the host management computer and notifies the influence of the event on the storage setting.
- In this embodiment, the management computer 2500 further includes a setting abnormality countermeasure program and an unnecessary setting cancellation program (not shown). Based on the information on the setting state of the storage functions after the event specified by the setting abnormality specifying program 2670, when a storage function setting (storage setting) applied to a VM has been canceled, or when the application condition of a storage setting is no longer satisfied, the storage function setting is restored. In addition, if as a result of the event there is a logical volume whose storage setting is no longer necessary because no VM data is stored in it, the corresponding storage setting is released from that logical volume.
- The setting abnormality countermeasure program determines whether countermeasures can be taken when a storage setting for a VM has been canceled or when the application condition of a storage setting is no longer satisfied, and if so, performs processing for notifying the countermeasure method and the effect of the countermeasure on the VM 1001.
- FIG. 19 shows the processing of the setting abnormality countermeasure program.
- The setting abnormality countermeasure program receives event information and performance change information relating to whether a storage setting was canceled by the event or whether the application condition of the storage setting is no longer satisfied.
- First, the setting abnormality countermeasure program determines whether the storage setting applied to the VM has been canceled by the event (S5000).
- If the result of S5000 is No, the setting abnormality countermeasure program determines whether the application condition of the storage setting is no longer satisfied due to the event (S5005).
- If the result of S5005 is No, the setting abnormality countermeasure program ends this flow. If the result of S5005 is Yes, the setting abnormality countermeasure program calculates a setting method for setting the storage function for the VM again (S5020).
- In S5020 when the result of S5005 is Yes, the setting abnormality countermeasure program specifies the post-event write data amount of the event information, the storage configuration information, and the information in the storage port catalog performance table 2710, the storage media catalog performance table 2730, the VM data configuration information table 2760, the volume physical logical storage area correspondence table 2800, the volume resource information table 2900, and the external storage configuration information table 2950, instructs the copy configuration simulation program 2650 to calculate the configuration necessary to realize the defined RPO requirement, and acquires a storage function setting method using that configuration as a setting parameter.
- Meanwhile, if the result of S5000 is Yes, the setting abnormality countermeasure program determines whether there is another VM to which the storage setting is applied after the event (S5010). Specifically, based on the event information and the VM data configuration information table 2760, it determines whether data of another VM exists in the data migration source logical volume after the data migration.
- If such a VM exists, the setting abnormality countermeasure program proceeds to S5020.
- In S5020 in this case, the setting abnormality countermeasure program instructs the copy configuration simulation program 2650 to calculate the configuration necessary to realize the RPO requirement defined for the target VM, and acquires a setting method using that configuration as a setting parameter (S5020).
- The setting abnormality countermeasure program also notifies the unnecessary setting cancellation program 2690 to perform processing for acquiring the presence or absence of unnecessary settings and the setting cancellation method (S5013). Details of the processing of the unnecessary setting cancellation program will be described later.
- Next, the setting abnormality countermeasure program determines whether the storage setting is configured with a plurality of logical volumes (S5015).
- A storage setting is configured with a plurality of logical volumes, for example, when there is an SVOL for the data copy, as in a remote copy setting, or when a pool 1120 is part of the configuration, as with the Thin Provisioning volume 3510C.
- The determination is made based on whether the target logical volume is related to other resources and logical volumes.
- If the result of S5015 is No, the setting abnormality countermeasure program calculates, as the setting method, the setting of the storage function canceled by the event, applied to the migration destination logical volume (S5020).
- Next, the setting abnormality countermeasure program searches for resources that satisfy the configuration calculated in S5020 (S5030). The search is implemented by referring to the information in the storage port catalog performance table 2710, the storage media catalog performance table 2730, the VM data configuration information table 2760, the volume physical logical storage area correspondence table 2800, the volume resource information table 2900, and the external storage configuration information table 2950.
- Next, the setting abnormality countermeasure program determines whether a corresponding resource was found as a result of the search in S5030 (S5035). If the result of S5035 is Yes, the setting abnormality countermeasure program proceeds to S5040 described below. If the result of S5035 is No, the setting abnormality countermeasure program notifies the display device that no countermeasure can be taken for the cancellation of the storage setting or for the application condition of the storage setting no longer being satisfied (S5045).
- The setting error handling program selects the countermeasure storage setting to be applied to the logical volume that stores the VM data after the event (S5025). Based on the volume physical-logical storage area correspondence table 2800 and the storage copy configuration information table 2960, information on the storage function set in the data migration source logical volume is acquired, and a storage setting is selected in which the role of the data migration source logical volume is taken over by the data migration destination logical volume.
- For example, the primary side of the copy pair becomes the data migration destination logical volume.
- For the rest, the same settings as those set for the logical volume of the data migration source are used.
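The role takeover described above can be sketched as re-targeting the source volume's role in the setting to the destination volume. A minimal sketch, assuming illustrative field names (`primary`, `secondary`, `function`) not taken from the patent:

```python
def select_replacement_setting(setting, src_vol, dst_vol):
    """Re-target a storage-function setting from the data migration source
    to the data migration destination logical volume (S5025 idea)."""
    new_setting = dict(setting)  # all other settings stay the same
    # Whatever role the source volume played (e.g. primary of a copy pair)
    # is taken over by the migration destination volume.
    if new_setting.get("primary") == src_vol:
        new_setting["primary"] = dst_vol
    if new_setting.get("secondary") == src_vol:
        new_setting["secondary"] = dst_vol
    return new_setting

pair = {"function": "remote_copy", "primary": "vol-3510A", "secondary": "vol-3510B"}
replacement = select_replacement_setting(pair, "vol-3510A", "vol-3510C")
```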
- The setting error handling program calculates the time required for the countermeasure storage setting (S5040).
- The management computer 2500 manages the storage setting contents and history information of the time each storage setting took; the average value may be used as the required time, or the management computer 2500 may calculate the required time from the capacity of the target logical volume and the performance of the storage port.
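The two estimation options mentioned above can be written out as simple formulas. The capacity/port formula below (capacity divided by port throughput) is an illustrative assumption about how such an estimate might be computed; the patent does not give the exact formula.

```python
def required_time_from_history(history_minutes):
    """Option 1: use the average of past setting durations (in minutes)."""
    return sum(history_minutes) / len(history_minutes)

def required_time_from_capacity(capacity_gb, port_mb_per_s):
    """Option 2 (assumed formula): estimate from the target logical volume's
    capacity and the storage port's throughput; returns minutes."""
    seconds = capacity_gb * 1024 / port_mb_per_s
    return seconds / 60
```

For example, a 100 GB volume over a 200 MB/s port gives 102400 / 200 = 512 seconds, roughly 8.5 minutes.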
- The setting error handling program notifies that the storage function requested for the VM can be set again after the event, together with the effect the storage setting will have on the VM (S5050).
- The influence of the storage setting on the VM is, for example, an indication that completion of the setting is expected to take the required time calculated in S5040, and an indication that if a VM resource allocation operation is performed while the storage setting is in progress, the possibility that the setting will not complete increases.
- When notifying the influence, the setting abnormality handling program displays, on the input/output device of the management computer 2500 or the host management computer 2000, a GUI for selecting whether or not to implement the countermeasure storage setting, and accepts the input.
- When a notification indicating that there is a storage setting that is no longer necessary has been acquired in S5013, it may be displayed together, and an input of an instruction indicating whether or not to cancel that storage setting may be received. Details of the GUI will be described later.
- The setting error handling program determines whether or not an instruction to execute the storage setting has been input (S5060). If the result of S5060 is Yes, the setting error handling program instructs the storage setting program 3340 of the storage device 3000 to apply the selected storage setting, and the setting is performed (S5070). When the result of S5060 is No, that is, when an instruction indicating that execution is unnecessary is accepted or when a certain time has elapsed without any instruction being input, the setting abnormality countermeasure program ends this flow.
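The S5060 decision, including the timeout path that is treated the same as a "No", can be sketched with a blocking queue. A minimal sketch; the queue standing in for the GUI input channel is an assumption for illustration:

```python
import queue

def await_execution_decision(inputs, timeout_s=1.0):
    """S5060: wait for an execute/skip instruction from the GUI.

    A timeout with no input is treated the same as "no" (skip),
    matching the described behavior of ending the flow after a
    certain time elapses without an instruction.
    """
    try:
        return inputs.get(timeout=timeout_s)  # expected: "yes" or "no"
    except queue.Empty:
        return "no"

q = queue.Queue()
q.put("yes")  # simulate the user pressing the execute button
decision = await_execution_decision(q, timeout_s=0.01)
```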
- As described above, the management computer 2500 can quickly take countermeasures against the cancellation of a storage setting required for a VM caused by a resource allocation change of the VM 1001. Note that this processing is particularly effective when the data of the VM 1001 occupies one logical volume, because there is no need to consider the influence of the countermeasure on other VM data. When other VM data is stored in the same logical volume, data movement or a policy change for the data may be considered using existing technology.
- FIG. 20 is an example of a graphical user interface (GUI) that outputs the notification of S5050 of the setting abnormality countermeasure program in the second embodiment.
- the setting abnormality handling window 5500 is an example of an expression form for realizing this embodiment.
- The setting abnormality handling window 5500 includes a setting maintenance determination radio button 5510, an unnecessary setting release determination radio button 5520, and a setting execution determination button 5530.
- Shown is an output example for a case where a VM data migration event for the VM with VM ID VM100 has occurred, the snapshot setting is applied to the logical volume 3510 in which the VM100 data was stored before the migration, the snapshot setting is not applied to the data migration destination logical volume 3510, and the data of the VM 1001 is no longer stored on the pre-migration logical volume 3510.
- When the setting maintenance determination radio button 5510a is selected, the snapshot setting is performed on the data migration destination logical volume 3510.
- When the setting maintenance determination radio button 5510b is selected, the snapshot setting is not performed on the data migration destination logical volume 3510.
- When the unnecessary setting release determination radio button 5520a is selected, a setting is performed to cancel the snapshot function applied to the logical volume 3510 in which the VM data was stored before the migration.
- When the unnecessary setting release determination radio button 5520b is selected, the snapshot setting applied to the logical volume 3510 in which the VM data was stored before the migration is not canceled.
- When the setting execution determination button 5530a is selected, the result of S5060 of the setting abnormality countermeasure program is determined as Yes; when the setting execution determination button 5530b is selected, the result of S5060 is determined as No. The items selected in the setting abnormality handling window 5500 then become the content of the storage setting executed in S5070 of the setting abnormality handling program.
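The mapping from the window 5500 selections to the storage settings executed in S5070 can be sketched as follows. The keys, action names, and volume identifiers are illustrative assumptions, not values from the patent:

```python
def window_to_actions(selection):
    """Translate window 5500 radio-button choices into the list of storage
    settings to execute in S5070 (illustrative names)."""
    actions = []
    if selection["maintain_setting"]:    # radio button 5510a selected
        actions.append(("set_snapshot", "dst-3510"))
    if selection["release_unneeded"]:    # radio button 5520a selected
        actions.append(("release_snapshot", "src-3510"))
    return actions

chosen = window_to_actions({"maintain_setting": True, "release_unneeded": False})
```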
- When the management computer 2500 receives the resource allocation change instruction from the host management computer 2000, it may accept the selection about the setting at the time the processing by the setting abnormality identification program and the processing of this figure are performed. Further, the GUI for the setting maintenance determination and the GUI for the unnecessary setting cancellation determination may be displayed on different input/output devices.
- The unnecessary setting cancellation program determines, based on the processing result of the setting error handling program of the management computer 2500, whether there is a storage setting that is no longer needed because the VM data no longer exists in the logical volume, and performs processing for notifying the presence or absence of such a storage setting and the method for canceling it.
- FIG. 21 shows the processing of the unnecessary setting cancellation program.
- The unnecessary setting cancellation program requests the storage setting needed to cancel the unnecessary storage setting and notifies it (S6100). If the storage setting determined to be unnecessary in S6000 is a copy pair, the requested storage setting is one for deleting the copy pair or pool. Information about which storage setting is necessary for canceling each storage setting may be held in advance in the management computer 2500. When the result of S6000 is No, the unnecessary setting cancellation program notifies that there are no unnecessary settings (S6200).
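The idea of holding, in advance, the cancellation setting for each kind of storage setting can be sketched as a lookup table. The table entries below are illustrative assumptions beyond the copy pair/pool example given in the text:

```python
# Illustrative mapping, held in advance by the management computer, from a
# storage setting deemed unnecessary (S6000) to the setting that cancels it.
CANCEL_SETTING = {
    "copy_pair": "delete_copy_pair",     # example from the text
    "pool_volume": "delete_pool",        # example from the text
    "snapshot": "release_snapshot",      # assumed additional entry
}

def request_cancellation(unneeded_setting):
    """S6100: look up the storage setting that cancels the unnecessary one.
    Returning None corresponds to 'nothing to cancel' (S6200 path)."""
    return CANCEL_SETTING.get(unneeded_setting)
```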
- Alternatively, the allocated resource may be deleted so that it can be reused, regardless of the presence or absence of storage settings.
- the resource can be effectively used by the processing of the unnecessary setting cancellation program.
- Above, the setting abnormality countermeasure program of the management computer 2500 was exemplified as receiving an input indicating the decision on countermeasure execution and performing the setting based on the input content.
- In the third embodiment, based on the VM state acquired by the VM/storage information acquisition program 2610 and the event occurrence reason acquired by the VM event monitoring program 2640, the setting abnormality handling program determines in S5050 whether to execute the processing without requiring an instruction input by the user, or to execute the processing based on the received input.
- the VM event table 2700 in the third embodiment additionally has an occurrence reason 2706 (not shown) indicating the reason for the occurrence of an event.
- For the occurrence reason 2706, for example, the value "maintenance" indicating that the VM 1001 on the host computer was moved to another logical volume in order to maintain the host computer 1000, the value "load distribution" indicating that the data movement of the VM was performed based on the data movement rule managed by the host management computer 2000, and the value "manual" indicating that the user manually input the instruction to the host management computer 2000 can be used.
- the VM event monitoring program 2640 acquires information on the cause of the event from the host management computer 2000 and stores it in the VM event table 2700.
- the VM data configuration information table 2760 in the third embodiment further has a VM state 2773 (not shown) indicating the state of the VM.
- For the VM state 2773, for example, the value "On" indicating that the VM is activated, the value "Off" indicating that the VM is not activated, and the value "maintenance" indicating that the VM is in a maintenance state can be used.
- the VM / storage information acquisition program 2610 of the third embodiment further acquires information indicating the state of the VM and stores it in the VM data configuration information table 2760.
- FIG. 22 shows the processing of the setting abnormality handling program of the third embodiment. Hereinafter, differences from the second embodiment will be described.
- The setting error countermeasure program determines whether or not the state of the target VM is maintenance or Off (S5080).
- Specifically, the VM data configuration information table 2760 is referred to, the VM state 2773 of the target VM is acquired, and the determination is made. If the result of S5080 is Yes, the setting error handling program proceeds to S5050.
- the setting error handling program determines whether the reason for the occurrence of the target event is load distribution (S5090).
- In S5090, specifically, the VM event table 2700 is referred to, the occurrence reason 2706 of the target event is acquired, and the determination is made.
- By the processing of the setting error handling program of the third embodiment, when the reason for the VM migration is load balancing, the storage setting can be performed automatically without requiring the user's setting instruction input.
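The third embodiment's branching (S5080 on VM state, S5090 on the event occurrence reason) can be sketched as a small decision function. The state and reason values follow the text; the return labels are illustrative:

```python
def countermeasure_mode(vm_state, event_reason):
    """Decide how the countermeasure setting proceeds in the third embodiment.

    vm_state: "On" / "Off" / "maintenance"   (VM state 2773)
    event_reason: "maintenance" / "load distribution" / "manual"  (reason 2706)
    """
    if vm_state in ("Off", "maintenance"):
        return "notify"    # S5080 Yes: proceed to the notification in S5050
    if event_reason == "load distribution":
        return "auto"      # S5090 Yes: apply the setting without user input
    return "ask_user"      # otherwise, require a user instruction input
```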
- In the above description, the CPU 2510 of the management computer 2500 implements various functions, such as the setting abnormality identification program 2670 and the setting abnormality countermeasure program, based on the various programs stored in the management computer 2500; however, the present invention is not limited to such an embodiment.
- For example, a CPU may be provided in another device separate from the management computer 2500, and the various functions may be realized in cooperation with the CPU 2510.
- various programs stored in the management computer 2500 may be provided in another device separate from the management computer 2500, and various functions may be realized by calling the program to the CPU 2510.
- Each step in the processing of the management computer 2500 and the like does not necessarily have to be processed in time series in the order described in the flowcharts. That is, steps in the processing of the management computer 2500 and the like may be executed in parallel even if they are different processes.
- Even when the host management computer 2000 changes the resource allocation to the VM, a computer system can be realized in which functions such as the function of improving the data reliability of the storage apparatus 3000 and the function of controlling access to the data can continue to be used.
- 1000: Host computer, 1001: VM (Virtual Machine), 1002: HV (Hypervisor), 2000: Host management computer, 2500: Management computer, 3000: Storage device, 3510: Logical volume
Claims (11)
- A management computer connected to a host computer and a storage device, the management computer comprising:
a memory that stores configuration information indicating, in association with each other, a plurality of logical storage areas provided by the storage device and an object stored in one logical storage area among the plurality of logical storage areas and executed by the host computer, and function setting information indicating storage functions set for the plurality of logical storage areas; and
a CPU that is connected to the memory and that:
detects that the allocation of resources to a first object has been changed by the host computer;
acquires, by referring to the configuration information and the function setting information, setting information of a storage function for a first logical storage area that was allocated to the first object before the resource allocation change;
determines whether or not the provision of the storage function for the first object has been affected by the resource allocation change; and
outputs a result of the determination.
- The management computer according to claim 1, wherein the first object is a virtual machine.
- The management computer according to claim 2, wherein the resource allocation change includes movement of data of the first object from the first logical storage area to a second logical storage area.
- The management computer according to claim 3, wherein the CPU:
when the detected resource allocation change is the movement of the data of the first object from the first logical storage area to the second logical storage area,
determines, by referring to the configuration information and the function setting information, whether or not the storage function set for the first logical storage area is also set for the second logical storage area; and
outputs a result of the determination when the determination is negative.
- The management computer according to claim 4, wherein the CPU, when the determination is negative, applies the setting of the storage function to the second logical storage area.
- The management computer according to claim 5, wherein the CPU determines, after the movement of the data of the first object, whether or not another object is stored in the first logical storage area, and, when no other object is stored, performs control so as to cancel the setting of the storage function for the first logical storage area.
- The management computer according to claim 5, wherein the CPU refers to the configuration information to determine, after the movement of the data of the first object, whether or not another object is stored in the first logical storage area, and, when no other object is stored and the setting of the storage function for the first logical storage area is configured by an association with a third logical storage area, performs control so as to use the third logical storage area for the setting of the storage function for the second logical storage area.
- The management computer according to claim 2, wherein the resource allocation change includes a change of the computer resources of the host computer allocated to the first object.
- The management computer according to claim 8, wherein:
the storage function set for the first logical storage area is a remote copy function;
the memory further stores performance information of the storage device and the host computer; and
the CPU, when the detected resource allocation change is a change of the computer resources of the host computer allocated to the first object, calculates, based on the performance information, the change caused by the resource allocation change in the value of the Recovery Point Objective for data recovery at the time of a failure relating to the first logical storage area, and outputs information on the change in the value.
- The management computer according to claim 8, further connected to a host management computer that manages the host computer, wherein the CPU, instead of detecting that the allocation of resources to the first object has been changed by the host computer, detects that the host management computer has instructed the host computer to perform the resource allocation change, and thereby:
acquires, by referring to the configuration information and the function setting information, the setting information of the storage function for the first logical storage area that was allocated to the first object before the resource allocation change;
determines whether or not the provision of the storage function for the first object has been affected by the resource allocation change; and
outputs a result of the determination.
- A management method by a management computer connected to a host computer and a storage device, the method comprising:
detecting that the allocation of resources to a first object has been changed by the host computer;
acquiring, by referring to configuration information indicating, in association with each other, a plurality of logical storage areas provided by the storage device and an object stored in one logical storage area among the plurality of logical storage areas and executed by the host computer, and function setting information indicating storage functions set for the plurality of logical storage areas, setting information of a storage function for a first logical storage area that was allocated to the first object before the resource allocation change;
determining whether or not the provision of the storage function for the first object has been affected by the resource allocation change; and
outputting a result of the determination.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1513849.8A GB2524932A (en) | 2013-06-14 | 2013-06-14 | Storage management calculator, and storage management method |
JP2015522363A JP5953433B2 (ja) | 2013-06-14 | 2013-06-14 | ストレージ管理計算機及びストレージ管理方法 |
PCT/JP2013/066419 WO2014199506A1 (ja) | 2013-06-14 | 2013-06-14 | ストレージ管理計算機及びストレージ管理方法 |
DE112013006476.6T DE112013006476T5 (de) | 2013-06-14 | 2013-06-14 | Speicher-Management-Rechner und Speicher-Management-Verfahren |
CN201380070306.8A CN104919429B (zh) | 2013-06-14 | 2013-06-14 | 存储管理计算机及存储管理方法 |
US14/766,212 US9807170B2 (en) | 2013-06-14 | 2013-06-14 | Storage management calculator, and storage management method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/066419 WO2014199506A1 (ja) | 2013-06-14 | 2013-06-14 | ストレージ管理計算機及びストレージ管理方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014199506A1 true WO2014199506A1 (ja) | 2014-12-18 |
Family
ID=52021836
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/066419 WO2014199506A1 (ja) | 2013-06-14 | 2013-06-14 | ストレージ管理計算機及びストレージ管理方法 |
Country Status (6)
Country | Link |
---|---|
US (1) | US9807170B2 (ja) |
JP (1) | JP5953433B2 (ja) |
CN (1) | CN104919429B (ja) |
DE (1) | DE112013006476T5 (ja) |
GB (1) | GB2524932A (ja) |
WO (1) | WO2014199506A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016162916A1 (ja) * | 2015-04-06 | 2016-10-13 | 株式会社日立製作所 | 管理計算機およびリソース管理方法 |
WO2017163322A1 (ja) * | 2016-03-23 | 2017-09-28 | 株式会社日立製作所 | 管理計算機、および計算機システムの管理方法 |
WO2018051467A1 (ja) * | 2016-09-15 | 2018-03-22 | 株式会社日立製作所 | ストレージ管理サーバ、ストレージ管理サーバの制御方法及び計算機システム |
CN109144403A (zh) * | 2017-06-19 | 2019-01-04 | 阿里巴巴集团控股有限公司 | 一种用于云盘模式切换的方法与设备 |
JP2021503650A (ja) * | 2017-11-17 | 2021-02-12 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | 階層型ストレージ・データ移動に基づいてクラウド・コンピューティング・システムにおいてクラウド資源を割り当てるための方法、システム、コンピュータ・プログラムおよび記録媒体 |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015104811A1 (ja) * | 2014-01-09 | 2015-07-16 | 株式会社日立製作所 | 計算機システム及び計算機システムの管理方法 |
CN105611979B (zh) * | 2014-03-18 | 2018-08-21 | 株式会社东芝 | 具备试验区域的层级化存储系统、存储控制器及介质 |
JP6439299B2 (ja) * | 2014-07-09 | 2018-12-19 | 株式会社バッファロー | 情報処理システム、ネットワークストレージデバイス及びプログラム |
US9575856B2 (en) * | 2014-08-29 | 2017-02-21 | Vmware, Inc. | Preventing migration of a virtual machine from affecting disaster recovery of replica |
US10944764B2 (en) * | 2015-02-13 | 2021-03-09 | Fisher-Rosemount Systems, Inc. | Security event detection through virtual machine introspection |
US9926913B2 (en) | 2015-05-05 | 2018-03-27 | General Electric Company | System and method for remotely resetting a faulted wind turbine |
US11240305B2 (en) * | 2016-07-28 | 2022-02-01 | At&T Intellectual Property I, L.P. | Task allocation among devices in a distributed data storage system |
US10359967B2 (en) * | 2016-08-10 | 2019-07-23 | Hitachi, Ltd. | Computer system |
JP6836386B2 (ja) * | 2016-12-21 | 2021-03-03 | ファナック株式会社 | パスワード不要のユーザ認証装置 |
US10496531B1 (en) | 2017-04-27 | 2019-12-03 | EMC IP Holding Company LLC | Optimizing virtual storage groups by determining and optimizing associations between virtual devices and physical devices |
CN109492425B (zh) * | 2018-09-30 | 2021-12-28 | 南京中铁信息工程有限公司 | 一种在分布式文件系统上的worm一写多读技术应用方法 |
JP7141905B2 (ja) * | 2018-10-12 | 2022-09-26 | 株式会社日立産機システム | コントロール装置及びコントロール方法 |
US11989415B2 (en) * | 2022-08-10 | 2024-05-21 | Hewlett Packard Enterprise Development Lp | Enabling or disabling data reduction based on measure of data overwrites |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013038510A1 (ja) * | 2011-09-13 | 2013-03-21 | 株式会社日立製作所 | 仮想ボリュームに割り当てられた要求性能に基づく制御を行うストレージシステムの管理システム及び管理方法 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7843906B1 (en) * | 2004-02-13 | 2010-11-30 | Habanero Holdings, Inc. | Storage gateway initiator for fabric-backplane enterprise servers |
JP4699808B2 (ja) * | 2005-06-02 | 2011-06-15 | 株式会社日立製作所 | ストレージシステム及び構成変更方法 |
JP4842593B2 (ja) * | 2005-09-05 | 2011-12-21 | 株式会社日立製作所 | ストレージ仮想化装置のデバイス制御引継ぎ方法 |
JP5037881B2 (ja) * | 2006-04-18 | 2012-10-03 | 株式会社日立製作所 | ストレージシステム及びその制御方法 |
JP5042660B2 (ja) * | 2007-02-15 | 2012-10-03 | 株式会社日立製作所 | ストレージシステム |
US7801994B2 (en) | 2007-11-29 | 2010-09-21 | Hitachi, Ltd. | Method and apparatus for locating candidate data centers for application migration |
JP4598817B2 (ja) | 2007-12-18 | 2010-12-15 | 株式会社日立製作所 | 計算機システム及びデータ消失回避方法 |
JP5111204B2 (ja) * | 2008-03-31 | 2013-01-09 | 株式会社日立製作所 | ストレージシステム及びストレージシステムの管理方法 |
JP5107833B2 (ja) * | 2008-08-29 | 2012-12-26 | 株式会社日立製作所 | ストレージシステム及びストレージシステムの制御方法 |
US8799609B1 (en) * | 2009-06-30 | 2014-08-05 | Emc Corporation | Error handling |
JP2011070345A (ja) * | 2009-09-25 | 2011-04-07 | Hitachi Ltd | 計算機システム、計算機システムの管理装置、計算機システムの管理方法 |
US8381217B1 (en) * | 2010-04-30 | 2013-02-19 | Netapp, Inc. | System and method for preventing resource over-commitment due to remote management in a clustered network storage system |
JP5421201B2 (ja) * | 2010-07-20 | 2014-02-19 | 株式会社日立製作所 | 計算機システムを管理する管理システム及び管理方法 |
US8543701B2 (en) * | 2011-05-23 | 2013-09-24 | Hitachi, Ltd. | Computer system and its control method |
US20150363422A1 (en) * | 2013-01-10 | 2015-12-17 | Hitachi, Ltd. | Resource management system and resource management method |
-
2013
- 2013-06-14 CN CN201380070306.8A patent/CN104919429B/zh active Active
- 2013-06-14 WO PCT/JP2013/066419 patent/WO2014199506A1/ja active Application Filing
- 2013-06-14 JP JP2015522363A patent/JP5953433B2/ja not_active Expired - Fee Related
- 2013-06-14 US US14/766,212 patent/US9807170B2/en active Active
- 2013-06-14 GB GB1513849.8A patent/GB2524932A/en not_active Withdrawn
- 2013-06-14 DE DE112013006476.6T patent/DE112013006476T5/de not_active Withdrawn
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013038510A1 (ja) * | 2011-09-13 | 2013-03-21 | 株式会社日立製作所 | 仮想ボリュームに割り当てられた要求性能に基づく制御を行うストレージシステムの管理システム及び管理方法 |
Non-Patent Citations (1)
Title |
---|
YUKINORI SAKASHITA: "Proposal of Automatic Optimization of Data Location for Virtual Environment", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN , IPSJ JOURNAL, vol. 54, no. 3, 15 March 2013 (2013-03-15), pages 1131 - 1140 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016162916A1 (ja) * | 2015-04-06 | 2016-10-13 | 株式会社日立製作所 | 管理計算機およびリソース管理方法 |
JPWO2016162916A1 (ja) * | 2015-04-06 | 2017-12-07 | 株式会社日立製作所 | 管理計算機およびリソース管理方法 |
WO2017163322A1 (ja) * | 2016-03-23 | 2017-09-28 | 株式会社日立製作所 | 管理計算機、および計算機システムの管理方法 |
WO2018051467A1 (ja) * | 2016-09-15 | 2018-03-22 | 株式会社日立製作所 | ストレージ管理サーバ、ストレージ管理サーバの制御方法及び計算機システム |
JPWO2018051467A1 (ja) * | 2016-09-15 | 2018-11-22 | 株式会社日立製作所 | ストレージ管理サーバ、ストレージ管理サーバの制御方法及び計算機システム |
CN109144403A (zh) * | 2017-06-19 | 2019-01-04 | 阿里巴巴集团控股有限公司 | 一种用于云盘模式切换的方法与设备 |
CN109144403B (zh) * | 2017-06-19 | 2021-10-01 | 阿里巴巴集团控股有限公司 | 一种用于云盘模式切换的方法与设备 |
JP2021503650A (ja) * | 2017-11-17 | 2021-02-12 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | 階層型ストレージ・データ移動に基づいてクラウド・コンピューティング・システムにおいてクラウド資源を割り当てるための方法、システム、コンピュータ・プログラムおよび記録媒体 |
JP7160449B2 (ja) | 2017-11-17 | 2022-10-25 | インターナショナル・ビジネス・マシーンズ・コーポレーション | 階層型ストレージ・データ移動に基づいてクラウド・コンピューティング・システムにおいてクラウド資源を割り当てるための方法、システム、コンピュータ・プログラムおよび記録媒体 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2014199506A1 (ja) | 2017-02-23 |
CN104919429B (zh) | 2018-01-09 |
GB2524932A (en) | 2015-10-07 |
US9807170B2 (en) | 2017-10-31 |
JP5953433B2 (ja) | 2016-07-20 |
DE112013006476T5 (de) | 2015-11-12 |
US20150373119A1 (en) | 2015-12-24 |
CN104919429A (zh) | 2015-09-16 |
GB201513849D0 (en) | 2015-09-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5953433B2 (ja) | ストレージ管理計算機及びストレージ管理方法 | |
US9423966B2 (en) | Computer system, storage management computer, and storage management method | |
US8346934B2 (en) | Method for executing migration between virtual servers and server system used for the same | |
US9201779B2 (en) | Management system and management method | |
US8984221B2 (en) | Method for assigning storage area and computer system using the same | |
US9285993B2 (en) | Error handling methods for virtualized computer systems employing space-optimized block devices | |
US9003150B2 (en) | Tiered storage system configured to implement data relocation without degrading response performance and method | |
JP6151795B2 (ja) | 管理計算機および計算機システムの管理方法 | |
WO2011074284A1 (ja) | 仮想計算機の移動方法、仮想計算機システム及びプログラムを格納した記憶媒体 | |
WO2014073045A1 (ja) | 計算機システム、ストレージ管理計算機及びストレージ管理方法 | |
WO2016121005A1 (ja) | 管理計算機および計算機システムの管理方法 | |
US20140181804A1 (en) | Method and apparatus for offloading storage workload | |
US20170235677A1 (en) | Computer system and storage device | |
WO2014174570A1 (ja) | ストレージ管理計算機、ストレージ管理方法、およびストレージシステム | |
US20150153959A1 (en) | Storage system and the storage system management method | |
WO2013061376A1 (en) | Storage system and data processing method in storage system | |
US20170371587A1 (en) | Computer system, and data migration method in computer system | |
US10210035B2 (en) | Computer system and memory dump method | |
US9130880B2 (en) | Management system and information acquisition method | |
US10992751B1 (en) | Selective storage of a dataset on a data storage device that is directly attached to a network switch |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13886755 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 1513849 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20130614 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1513849.8 Country of ref document: GB |
|
ENP | Entry into the national phase |
Ref document number: 2015522363 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14766212 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 112013006476 Country of ref document: DE Ref document number: 1120130064766 Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13886755 Country of ref document: EP Kind code of ref document: A1 |
|
ENPC | Correction to former announcement of entry into national phase, pct application did not enter into the national phase |
Ref country code: GB |