WO2015068299A1 - Management computer and computer system management method - Google Patents
Management computer and computer system management method
- Publication number
- WO2015068299A1 (PCT/JP2013/080394; application JP2013080394W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- configuration change
- computer
- alternative
- management
- plan
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1471—Saving, restoring, recovering or retrying involving logging of persistent data for recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1415—Saving, restoring, recovering or retrying at system level
- G06F11/142—Reconfiguring to eliminate the error
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2023—Failover techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2094—Redundant storage or storage space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3409—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0813—Configuration setting characterised by the conditions triggering a change of settings
- H04L41/0816—Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/805—Real-time
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/85—Active fault masking without idle spares
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/147—Network analysis or design for predicting network behaviour
Definitions
- the present invention relates to a management computer and a management method for a computer system.
- In Patent Document 2, the status of a virtual logical volume (hereinafter referred to as VVOL) configured with a plurality of virtual areas is managed, and a VVOL in an inappropriate status is detected.
- VVOL virtual logical volume
- a VVOL is associated with a pool for providing a storage area.
- In Patent Document 2, an attempt is made to improve the situation by moving a VVOL in an inappropriate status from its current pool to another pool.
- a configuration change plan may be drafted and executed to improve the storage system status.
- The I/O (Input/Output) path from the application to the physical disk of the storage apparatus may change, or the state of the computer resources involved in I/O processing may change. Therefore, even if a configuration change plan generated based on the state before such a change is carried out as scheduled, the originally expected effect may not be obtained.
- the prepared configuration change plan may not achieve the intended purpose.
- The present invention has been made in view of the above problems. One of its objects is to provide a management computer, and a management method for a computer system, capable of correcting a configuration change that is set in advance so as to be executed according to predetermined conditions, in consideration of changes in the state of the computer and the storage apparatus.
- A management computer according to one aspect is a management computer connected to a computer and a storage apparatus. It has a memory that stores first configuration information indicating a plurality of logical storage areas provided by the storage apparatus, second configuration information indicating operation requirements of a predetermined object that is stored in a first logical storage area among the plurality of logical storage areas and is executed by the computer, and configuration change plan information indicating a first configuration change plan scheduled to be executed by the computer or the storage apparatus; and a microprocessor connected to the memory. The microprocessor detects a second configuration change that is set in advance in the computer or the storage apparatus so as to be executed in accordance with a predetermined condition, and corrects the content of the second configuration change in consideration of the first configuration change plan.
- FIG. 1 is an explanatory diagram showing an outline of the present embodiment.
- FIG. 2 is a configuration diagram of the computer system.
- FIG. 3 is an explanatory diagram showing an outline of the storage structure.
- FIG. 4 is an example of a table for managing volumes.
- FIG. 5 is an example of a table for managing a pool.
- FIG. 6 is an example of a table for managing drives.
- FIG. 7 is an example of a table for managing communication ports.
- FIG. 8 is an example of a table for managing the microprocessor.
- FIG. 9 is an example of a table for managing configuration changes that are automatically performed.
- FIG. 10 is an example of a table for managing virtual machines.
- FIG. 11 is an example of a table for managing the configuration change plan.
- FIG. 12 is an example of a table for managing the configuration changing unit.
- FIG. 13 is a flowchart showing a control process for correcting the contents of the automatic configuration change.
- FIG. 14 is a flowchart showing processing for
- In the following description, various types of information may be described using the expression "aaa table", but such information may also be expressed using a data structure other than a table. To show that the information does not depend on the data structure, an "aaa table" can also be called "aaa information".
- When processing is described simply with the computer as the subject, it indicates that the control device included in the computer is executing the processing.
- CPU Central Processing Unit
- When processing is described simply with the storage apparatus as the subject, it indicates that the controller included in the storage apparatus is executing the processing.
- the control device and the controller may be the processor itself, or may include a hardware circuit that performs a part or all of the processing performed by the control device or the controller.
- the computer program may be installed in each computer or storage system from the program source.
- the program source may be, for example, a program distribution server or a storage medium.
- The created configuration change plan cannot be dynamically modified according to the actual situation. This is because a revised plan cannot be implemented without again obtaining approval of the revised content from the person in charge.
- A configuration change plan that has been approved in advance may be wasted. If the system configuration at the time the configuration change plan was created differs from the system configuration after a dynamic configuration change has been carried out, the expected effect may not be obtained even if the approved configuration change plan is executed as planned.
- the configuration change plan can be executed after adjusting to the current system configuration.
- The configuration change plan cannot be executed without approval from the responsible administrator of the system management department.
- FIG. 1 shows an outline of the first embodiment.
- the computer system includes, for example, at least one host computer 101 and at least one storage device 102, and the host computer 101 and the storage device 102 are connected to at least one management computer 201 so as to be capable of bidirectional communication.
- the management computer 201 has a storage resource 211 composed of, for example, a main storage device and an auxiliary storage device.
- The storage resource 211 of the management computer 201 stores, for example, service level information and an operation schedule of the VM (Virtual Machine) 111, information on configuration change plans scheduled to be executed by the host computer 101 or the storage apparatus 102, and configuration information covering the path from the VM 111 to the storage areas of the storage apparatus 102.
- VM Virtual Machine
- the automatic configuration change control program 810 of the management computer 201 is a control program for correcting the content of the configuration change (second configuration change) that is automatically executed according to the current state of the computer system.
- the automatic configuration change control program 810 detects a sign that an automatic configuration change will be performed from the host computer 101 and the storage apparatus 102 that are the management targets (S1). A method for detecting the sign will be described later.
- The automatic configuration change control program 810 acquires from the storage resource 211 the service level information and operation schedule set for the VM 111, the configuration change plan information to be executed, and the configuration information from the VM 111 to the storage areas of the storage apparatus 102. Based on the information acquired from the storage resource 211, the automatic configuration change control program 810 calculates the effect that performing the automatic configuration change would have on the configuration change plans scheduled for execution (S2).
- the configuration change plan to be executed may be simply referred to as a configuration change plan.
- If the automatic configuration change control program 810 determines, as a result of the calculation, that the configuration change plan can no longer satisfy its expected effect value, it corrects the content of the automatic configuration change in consideration of the configuration change plan. For example, the automatic configuration change control program 810 generates an alternative to the automatic configuration change using the service level information and operation schedule set for the VM 111, the configuration change plan information, and the configuration information from the VM 111 to the storage areas of the storage apparatus 102. The alternative is generated so as to satisfy the expected effect value of the configuration change plan while maintaining the service level (S3). When the automatic configuration change control program 810 generates an alternative, it can also instruct the host computer 101 and the storage apparatus 102 to execute the alternative (S3). As described later, when no alternative that satisfies the expected effect value of the configuration change plan can be generated, the automatic configuration change control program 810 can cancel the execution of the configuration change plan.
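The flow of steps S1 to S3 can be sketched roughly as follows. This is a minimal illustration only: the function names, the dict-based effect model, and the pool-redirection strategy are all assumptions, not the interfaces of the embodiment.

```python
# Hypothetical sketch of the automatic configuration change control flow
# (S1-S3). All names and the toy effect model are illustrative assumptions.

def estimate_effect(plan, auto_change):
    # Toy model: an automatic change that adds load to the same pool a
    # scheduled plan targets reduces that plan's effect by the added load.
    if auto_change["target_pool"] == plan["target_pool"]:
        return plan["expected_effect"] - auto_change["added_load"]
    return plan["expected_effect"]

def control_automatic_config_change(auto_change, plans):
    # S2: find scheduled plans whose expected effect value the automatic
    # configuration change would break.
    affected = [p for p in plans
                if estimate_effect(p, auto_change) < p["expected_effect"]]
    if not affected:
        return auto_change  # no correction needed; execute as detected
    # S3: generate an alternative, here by redirecting the automatic change
    # to a pool that no scheduled plan depends on, so the plans keep their
    # expected effect while the service level is maintained.
    for pool in auto_change["candidate_pools"]:
        if all(pool != p["target_pool"] for p in plans):
            return dict(auto_change, target_pool=pool)
    return None  # no viable alternative: plan execution may be canceled
```

Returning `None` corresponds to the case, described later, where no alternative satisfying the expected effect value can be generated.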
- FIG. 2 shows a configuration example of a computer system.
- the storage apparatus 102 is connected to the management computer 201 and the host computer 101 via a first communication network 231 such as a LAN (Local Area Network).
- the storage device 102 is connected to the host computer 101 via a second communication network 232 such as a SAN (Storage Area Network).
- the second communication network 232 can include one or more switches 233. Note that the first communication network 231 and the second communication network 232 may be integrally formed.
- the storage apparatus 102 includes, for example, a plurality of physical storage device groups 309 and a controller 251 connected to the physical storage device group 309.
- the physical storage device group 309 is composed of one or more physical storage devices.
- SSD Solid State Drive
- SAS Serial Attached SCSI
- SATA Serial ATA (Advanced Technology Attachment)
- MRAM Magnetic Random Access Memory
- ReRAM Resistive Random-Access Memory
- FeRAM Ferroelectric Random-Access Memory
- the storage device 102 can be provided with a plurality of physical storage device groups having different performances.
- the physical storage device group may be provided from outside the storage apparatus 102.
- The storage apparatus 102 can be connected to a physical storage device group possessed by another storage apparatus and use it as if it were its own storage device group.
- The controller 251 includes a management interface (hereinafter referred to as MI/F) 241, a communication interface (hereinafter referred to as CI/F) 242, a device interface (hereinafter referred to as DI/F) 245, a memory 243, and a microprocessor 244 connected to them.
- MI / F management interface
- CI / F communication interface
- DI / F device interface
- the microprocessor may be abbreviated as a processor.
- the MI / F 241 is a communication interface device for communicating with the first protocol, and is, for example, a NIC (Network Interface Card).
- the C-I / F 242 is a communication interface device for communicating with the second protocol.
- the DI / F 245 is a communication interface device for communicating with the physical storage device group 309 using the third protocol.
- the DI / F 245 may be prepared for each type of physical storage device.
- the controller 251 accesses the physical storage device via the DI / F 245.
- the memory 243 stores a computer program executed by the processor 244 and various information.
- the memory 243 has a cache memory area.
- In the cache memory area, write-target data received from the host computer 101 and read-target data read from a real data storage area (hereinafter referred to as a page) on the physical storage device are temporarily stored.
- the write target data in the cache memory area is stored in the physical storage device allocated to the write destination virtual area. Read target data in the cache memory area is provided to the host computer 101.
- the host computer 101 includes, for example, an MI / F 224, a CI / F 226, a storage resource 221, a processor 222 connected thereto, and an I / O device 223.
- the MI / F 224 is, for example, a NIC.
- the CI / F 226 is, for example, an HBA (Host Bus Adapter).
- the storage resource 221 is, for example, a memory.
- the storage resource 221 may include an auxiliary storage device such as an HDD (Hard Disk Drive).
- the storage resource 221 stores, for example, an application program such as a business program, an OS (Operating System), and the like.
- the processor 222 executes application programs and OS stored in the storage resource 221.
- The I/O device 223 includes an input unit (for example, a keyboard, switch, pointing device, microphone, or camera) that receives input from the user, and an output unit (for example, a display device or speaker) that presents various information to the user.
- an input unit for example, a keyboard, a switch, a pointing device, a microphone, and a camera
- an output unit for example, a display device and a speaker
- the management computer 201 includes, for example, an MI / F 214, a storage resource 211, a processor 212 connected to them, and an I / O device 213.
- the MI / F 214 is, for example, a NIC.
- the I / O device 213 is the same as the I / O device 223.
- the storage resource 211 is a memory, for example, and may include an auxiliary storage device such as an HDD.
- the storage resource 211 stores computer programs and various information.
- the computer program is executed by the processor 212.
- The storage resource 211 stores, as information, a volume management table 801, a pool management table 802, a drive management table 803, a port management table 804, a CPU management table 805, an automatic configuration change management table 806, a VM management table 807, a configuration change plan management table 808, and a configuration change means table 809.
- the storage resource 211 stores an automatic configuration change control program 810 and a VM management program 811 as computer programs.
- the above is the configuration example of the hardware of the computer system according to the present embodiment.
- the communication interface devices used in the above-described MI / F, CI / F, etc. are not limited to HBAs and NICs.
- the communication interface device differs depending on, for example, the type of network to which the I / F is connected and the type of apparatus having the I / F.
- FIG. 3 shows an outline of the storage structure of the computer system.
- the storage system 103 is composed of one or more storage devices 102. At least one of the storage apparatuses 102 is a storage apparatus to which the thin provisioning technology is applied.
- the thin provisioning technology is a technology for defining a virtual storage capacity and providing it to the host computer 101 regardless of the actual physical storage capacity.
- the storage system 103 has a plurality of logical volumes having different characteristics, which are configured from real areas of the physical storage device group 309. Hereinafter, the volume may be referred to as VOL.
- the storage system 103 of this embodiment can use three types of volumes: LDEV, ExVOL, and VVOL.
- the LDEV 107 is a volume composed of a drive 109 indicating a physical storage device.
- the ExVOL 105 is a volume configured from the VDEV 108.
- the VDEV 108 is an intermediate volume connected to a volume of an external storage device.
- the VVOL 106 is a virtual volume configured based on the thin provisioning technology, and is generated using a real area (a real storage area, also referred to as a page) of a plurality of volumes registered in the pool 110.
- a real area a real storage area, also referred to as a page
- the drive 109 can be composed of, for example, SSD, SAS-HDD, SATA-HDD, MRAM, phase change memory, ReRAM, FeRAM, and the like.
- the storage system 103 has one or more pools 110 (only one is shown in FIG. 3). Each pool 110 has a plurality of pool volumes having different performances.
- the pool volume includes a real volume (LDEV) provided inside the storage apparatus 102 and an external connection volume (ExVOL) connected to the real volume provided outside the storage apparatus 102. These pool volumes are divided into a plurality of pages. Basically, a pool volume belongs to only one pool and does not belong to a plurality of pools.
- the storage system 103 has a plurality of VVOLs 106 (only one is shown in FIG. 3).
- the VVOL 106 is a virtual logical volume that conforms to the thin provisioning technology, and includes a plurality of virtual areas that are virtual storage areas.
- The virtual area is identified by, for example, an address such as an LBA (Logical Block Addressing).
- When the host computer 101 issues a write to the VVOL 106, the storage system 103 determines whether a page, which is a real area, is allocated to the designated virtual area.
- When the storage system 103 determines that a page is allocated to the designated virtual area, it writes the write-target data to that page. When it determines that no page is allocated to the designated virtual area, it selects an unused page from the pool 110 associated with the write-target VVOL 106, allocates the selected unused page to the designated virtual area, and writes the write-target data to the allocated page.
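The thin provisioning write path just described, checking the virtual area, allocating an unused page from the pool on a miss, and then writing, can be sketched as follows. The page granularity, dict-based mapping, and class names are simplifying assumptions, not the embodiment's structures.

```python
# Rough sketch of the thin provisioning write path: pages are allocated
# from the pool only when a virtual area is first written.

class Pool:
    def __init__(self, num_pages):
        self.free_pages = list(range(num_pages))  # unused real pages

class VVOL:
    def __init__(self, pool):
        self.pool = pool      # pool associated with this VVOL
        self.mapping = {}     # virtual area (LBA) -> allocated real page ID
        self.pages = {}       # real page ID -> stored data

    def write(self, virtual_lba, data):
        # Is a real page already allocated to the designated virtual area?
        page = self.mapping.get(virtual_lba)
        if page is None:
            if not self.pool.free_pages:
                raise RuntimeError("pool exhausted")
            # Select an unused page from the associated pool and allocate
            # it to the designated virtual area.
            page = self.pool.free_pages.pop(0)
            self.mapping[virtual_lba] = page
        self.pages[page] = data  # write the write-target data to the page
```

This also illustrates why the used capacity of a VVOL (the total amount of allocated pages) can be smaller than its defined virtual capacity.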
- the host computer 101 is an example of an access source to the storage system 103.
- the host computer 101 has a hypervisor 112 that logically generates and executes a VM (Virtual Machine) 111.
- The hypervisor 112 can control a plurality of VMs 111 at a time. Each of the plurality of VMs 111 can execute an application as if it were a stand-alone physical computer.
- the hypervisor 112 can perform VM migration in which a VM 111 operating on a certain host computer is moved to another host computer.
- VVOL 106 is provided to one or more VMs 111.
- the connection between the host computer 101 and the VVOL 106 in FIG. 3 does not mean a physical connection, but indicates that the VVOL 106 is provided to the host computer 101 and recognized by the host computer 101.
- the connection between the pool 110 and the VVOL 106 does not mean a physical connection, and indicates that the VVOL 106 is associated with the pool 110.
- The host computer 101 accesses the provided VVOL 106 in response to requests from the VM 111. Specifically, the host computer 101 transmits an access command having access destination information to the storage system 103.
- the access destination information is information representing an access destination, and includes, for example, an ID (identifier) of the VVOL 106 such as LUN (Logical Unit Number) and an ID of a virtual area such as LBA.
- the processor 244 is associated with one or more volumes.
- the processor 244 performs various processes related to the control target volume, such as a volume I / O process and a process related to page allocation of the VVOL 106.
- One processor 244 can also control a plurality of volumes.
- the LDEV 107A and the LDEV 107B are both real volumes and constitute a remote copy pair.
- One LDEV 107A and the other LDEV 107B are provided in different storage apparatuses.
- The LDEV 107A is a primary volume (PVOL), and the LDEV 107B is a secondary volume (SVOL).
- PVOL primary volume
- SVOL secondary volume
- In the remote copy pair, when data is written to the LDEV 107A (PVOL), the write data is copied to the LDEV 107B (SVOL) either synchronously or asynchronously.
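The synchronous behavior of such a remote copy pair, where write data for the PVOL is mirrored to the SVOL before the write completes, might be sketched like this; the class names and the in-memory volumes are illustrative assumptions only.

```python
# Toy sketch of synchronous remote copy: a write to the primary volume
# (PVOL) is copied to the secondary volume (SVOL) before acknowledgment.

class Volume:
    def __init__(self):
        self.blocks = {}  # LBA -> data

class RemoteCopyPair:
    def __init__(self, pvol, svol):
        self.pvol, self.svol = pvol, svol

    def write(self, lba, data):
        self.pvol.blocks[lba] = data
        # Synchronous mode: copy to the secondary before acknowledging,
        # so the SVOL never lags behind an acknowledged write.
        self.svol.blocks[lba] = data
        return "ack"
```

In asynchronous mode the copy to the SVOL would instead be queued and applied after the acknowledgment, trading consistency lag for lower write latency.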
- FIG. 4 shows the volume management table 801.
- the volume management table 801 manages information related to volumes that the storage system 103 has.
- the volumes to be managed in the volume management table 801 may be all the volumes that the storage system 103 has, or only a part thereof.
- When the target volume is the VVOL 106, the host computer 101 to which the VVOL 106 provides a virtual storage area, and the pool 110 from which pages are allocated to the VVOL 106, can be identified.
- When the target volume is the LDEV 107, the host computer 101 to which the LDEV 107 provides a storage area, and the drive 109 that allocates pages to the LDEV 107, can be identified.
- When the target volume is the VDEV 108, the storage apparatus 102 that provides a storage area to the VDEV 108, and the drive 109 or the pool 110 that provides the storage area or allocates pages to the VDEV 108, can be identified.
- When the target volume is the ExVOL 105, the host computer 101 to which the ExVOL 105 provides a storage area, and the VDEV 108 from which pages are allocated to the ExVOL 105, can be identified.
- the management computer 201 collects information from the storage apparatus 102 and updates the volume management table 801 periodically or triggered by an information collection request input by the user via the I / O device 213.
- The volume management table 801 manages a volume ID 301, storage ID 302, volume type 303, storage capacity 304, used capacity 305, target port ID 306, initiator ID 307, initiator port ID 308, source storage ID 309, and source resource ID 310 in association with each other.
- Volume ID 301 is information for identifying a volume.
- the storage ID 302 is information for identifying the storage apparatus 102 having a volume.
- the volume type 303 indicates whether the volume type is VVOL, LDEV, VDEV, or ExVOL.
- the storage capacity 304 indicates the storage capacity of the volume.
- the used capacity 305 indicates the total amount of pages allocated from the pool 110 to the VVOL 106.
- For a volume whose source resource ID 310 is other than a pool, the used capacity 305 is described as N/A (not applicable).
- the target port ID 306 is information for identifying the target port associated with the volume among the communication ports of the storage apparatus 102.
- the initiator ID 307 is identification information of the host computer 101 or the storage apparatus 102 that is the volume provision destination. In the example of FIG. 4, when the volume type 303 is VVOL, LDEV, or ExVOL, the initiator ID 307 stores information for identifying the host computer 101. When the volume type 303 is VDEV, the initiator ID 307 stores information for identifying the storage apparatus 102.
- the initiator port ID 308 is information for identifying the initiator port of the host computer or the initiator port of the storage apparatus that is the volume providing destination.
- the source storage ID 309 is information for identifying a storage apparatus that provides a volume.
- the source resource ID 310 is information for identifying an element (pool 110, device 109) that provides a storage area of a volume.
- the volume type 303 is VVOL
- information for identifying the pool 110 is stored in the source resource ID 310.
- the volume type 303 is LDEV
- information for identifying the drive 109 is stored in the source resource ID 310.
- When the volume type 303 is VDEV, the source resource ID 310 stores either the identification information of the pool 110 or that of the drive 109 associated with the VDEV.
- the source resource ID 310 stores information for identifying an external volume connected to the ExVOL.
- An external volume is a volume connected to ExVOL among logical volumes provided in an external storage device.
- When not applicable, the target port ID 306, initiator ID 307, and initiator port ID 308 are described as N/A (not applicable).
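As an illustration only, one row of the volume management table 801 could be modeled as below; the field names follow columns 301 to 310, N/A is represented as `None`, and the types and capacity unit are assumptions.

```python
# Illustrative model of one row of the volume management table (FIG. 4).
from dataclasses import dataclass
from typing import Optional

@dataclass
class VolumeRow:
    volume_id: str                    # 301: identifies the volume
    storage_id: str                   # 302: storage apparatus having the volume
    volume_type: str                  # 303: "VVOL" | "LDEV" | "VDEV" | "ExVOL"
    storage_capacity_gb: int          # 304: storage capacity of the volume
    used_capacity_gb: Optional[int]   # 305: pages allocated from the pool (VVOL only)
    target_port_id: Optional[str]     # 306: target port associated with the volume
    initiator_id: Optional[str]       # 307: host (VVOL/LDEV/ExVOL) or storage (VDEV)
    initiator_port_id: Optional[str]  # 308: initiator port of the provision destination
    source_storage_id: str            # 309: storage apparatus providing the volume
    source_resource_id: str           # 310: pool (VVOL), drive (LDEV), etc.
```

A VVOL row would carry a pool ID in `source_resource_id`, while an LDEV row would carry a drive ID, mirroring the distinction described above.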
- FIG. 5 shows the pool management table 802.
- the pool management table 802 stores information on the pool 110.
- From the pool management table 802, the volumes constituting each pool 110 and the correspondence between each page in the pool and the virtual areas of the VVOL 106 can be identified.
- the management computer 201 collects information from the storage apparatus 102 and updates the pool management table 802 periodically or triggered by acceptance of an information collection request from the I / O device 213 by the user.
- The pool management table 802 has the following information, for example.
- the storage ID 401 is information for identifying the storage apparatus 102 having the pool 110.
- the pool ID 402 is information for identifying the pool 110.
- the page ID 403 is information for identifying pages belonging to the pool 110.
- the volume ID 404 is information for identifying a volume having a page.
- the volume LBA 405 is information indicating the position of the page in the volume (for example, the top LBA of the page and the LBA at the end of the page).
- the VVOL ID 406 is information for identifying a VVOL having a virtual area to which a page is allocated. “N / A (Not / Assigned)” indicates that the page is not assigned to any virtual area.
- the VVOL LBA 407 is information indicating the position of the virtual area to which the page is allocated (for example, the top LBA of the virtual area and the end LBA of the virtual area).
- FIG. 6 shows the drive management table 803.
- The drive management table 803 stores information on the drives 109. From the drive management table 803, the operation rate and I/O performance that serve as criteria for determining whether a drive is highly loaded, as well as history information of the operation rate, can be obtained.
- the management computer 201 collects information from the storage apparatus 102 and updates the drive management table 803 periodically or triggered by acceptance of an information collection request from the I / O device 213 by the user.
- the drive management table 803 manages, for example, a storage ID 501, a drive ID 502, an operation rate 503 for high load determination criteria, a read speed 504, a write speed 505, a measurement time 506, and an operation rate 507.
- the storage ID 501 is information for identifying the storage apparatus 102 having the drive 109.
- the drive ID 502 is information for identifying the drive 109.
- the operation rate 503 for the high load criterion indicates an operation rate index at which the performance of the drive 109 is degraded.
- the read speed 504 is the read speed (MB / s) of the target drive.
- the write speed 505 is the write speed (MB / s) of the target drive.
- The measurement time 506 is the time when the read speed 504, the write speed 505, and the operation rate 507 were measured. The same applies to the tables below; the measurement time can include the date.
- the operation rate 507 is a measurement value of the operation rate of the target drive.
- the high load determination reference operating rate 503 is determined according to the specifications of the storage apparatus 102.
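The high-load determination implied by the drive management table amounts to a simple threshold check: a drive is treated as highly loaded when its measured operation rate (507) reaches the high-load criterion operation rate (503). Treating the criterion as an inclusive lower bound, and the dict-based row, are assumptions.

```python
# Sketch of the drive high-load check based on the drive management table:
# compare the measured operation rate (507) against the per-drive
# high-load criterion rate (503), which depends on the drive's specs.

def is_drive_high_load(drive_row):
    return drive_row["operation_rate"] >= drive_row["high_load_rate"]
```

A fast SSD and a slower SATA HDD would carry different `high_load_rate` values, since the criterion is determined by the specifications of the storage apparatus and drive.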
- FIG. 7 shows the port management table 804.
- the port management table 804 stores information for managing the C-IF 242 of the storage apparatus 102.
- the read data transfer amount and write data transfer amount, the read data transfer amount history information, and the write data transfer amount history information can be known.
- the management computer 201 collects information from the storage apparatus 102 and updates the port management table 804 periodically or when it accepts an information collection request issued by the user via the I/O device 213.
- the port management table 804 manages, for example, a storage ID 601, a port ID 602, a read data transfer amount 603 for the high load determination criterion, a write data transfer amount 604 for the high load determination criterion, a measurement time 605, a read data transfer amount 606, and a write data transfer amount 607 in association with each other.
- the storage ID 601 is information for identifying the storage apparatus 102 having the C-IF 242.
- the port ID 602 is information for identifying the C-IF 242.
- the read data transfer amount 603 for the high load criterion indicates an index of the read data transfer amount when the performance of the C-IF 242 is degraded.
- the high load determination reference write data transfer amount 604 indicates an index of the write data transfer amount when the performance of the C-IF 242 is degraded.
- the measurement time 605 is the time when the performance (read data transfer amount 606, write data transfer amount 607) of the target port is measured.
- the read data transfer amount 606 is an amount of data read from the target port per unit time, and is a read speed measurement value.
- the write data transfer amount 607 is the amount of data written to the target port per unit time, and is a measured value of the write speed.
- the high load determination reference data transfer amounts 603 and 604 are determined according to the specifications of the storage apparatus 102.
- FIG. 8 shows the CPU management table 805.
- the CPU management table 805 manages information about the processor 244 of the storage apparatus 102. Based on the CPU management table 805, the operating rate that is a reference for determining whether or not the processor 244 has a high load and the history information of the operating rate are known.
- the management computer 201 collects information from the storage apparatus 102 and updates the CPU management table 805 periodically or when it accepts an information collection request issued by the user via the I/O device 213.
- the CPU management table 805 manages, for example, a storage ID 701, a CPU ID 702, an operation rate 703 for high load determination criteria, a measurement time 704, and an operation rate 705 in association with each other.
- the storage ID 701 is information for identifying the storage apparatus 102 having the processor 244.
- the CPU ID 702 is information for identifying the target processor 244.
- the operation rate 703 for the high load determination criterion is information indicating an index of the operation rate at which the performance of the processor 244 is deteriorated.
- the measurement time 704 is information indicating the time when the performance of the target processor 244 is measured.
- the operation rate 705 indicates a measurement value of the operation rate of the target processor 244.
- the operation rate 703 for the high load determination standard is determined according to the specifications of the storage apparatus 102.
- FIG. 9 shows a table 806 for managing configuration changes that are automatically performed.
- the automatic configuration change management table 806 manages setting information for changing the configuration according to an arbitrary condition.
- the management computer 201 collects information from the computer system periodically or when it accepts an information collection request issued by the user via the I/O device 213, and updates the automatic configuration change management table 806 with the collected information.
- the management computer 201 collects information on the configuration changes performed in the storage apparatus 102 from the storage apparatus 102. Further, the management computer 201 collects information on the configuration changes executed by the host computer 101 from the host computer 101. When a host management computer that manages the host computer 101 is included in the computer system, the management computer 201 may collect information on the configuration changes executed by the host computer 101 from the host management computer (not shown).
- when a system administrator or the like sets an automatic configuration change via the management computer 201, the management computer 201 may collect the input information for that setting and store it in the automatic configuration change management table 806.
- the automatic configuration change management table 806 manages, for example, a resource owner ID 901, a resource ID 902, a condition 903, and a result 904 in association with each other.
- the resource owner ID 901 is information indicating the subject that owns the resource whose configuration is to be changed. Examples of the subject include the storage apparatus 102, the host computer 101, and the switch 233.
- the resource ID 902 is information for identifying the resource that is the target of the configuration change. Examples of configuration change target resources include logical volumes, virtual machines, and communication ports.
- the condition 903 is an example of “predetermined condition”, and is information indicating a condition for automatically changing the configuration.
- the conditions include, for example, an I / O error, a communication error between host computers, and a response time exceeding a predetermined response time.
- the result 904 is information indicating the result of the configuration change.
- the configuration change example 905 shown in FIG. 9 assumes the following configuration: the volume "VOL50" of the storage device "storage1" and the volume "VOL10" of the storage device "storage2" form a remote copy pair, and the host computer can connect to both the volume "VOL50" and the volume "VOL10".
- a path setting that allows the host computer to connect to multiple volumes is called multipath setting.
- this configuration change instruction may be issued by the host computer itself, or by a host management computer that manages the host computer.
- as another example, assume that the FC switches 233 have a redundant configuration. When the condition 903 is satisfied, the host computer changes the I/O path from one FC switch of the redundant configuration to the other FC switch.
- a configuration change example 906 shown in FIG. 9 will be described.
- this configuration change 906 is based on the following configuration: one host computer "host10" and the other host computer "host20" refer to the same data, and the host computers "host10" and "host20" communicate with each other regularly. When an error set as the condition 903 occurs in this regular communication between the host computers, the virtual machine "VM1" provided on the one host computer "host10" is moved to the other host computer "host20". Specifically, the computer resources used by the virtual machine "VM1" are changed from those of the one host computer "host10" to those of the other host computer "host20".
- a configuration change example 907 shown in FIG. 9 will be described.
- in this configuration change 907, when the response time of the virtual machine "VM2" operating on the host computer "host20" exceeds the predetermined threshold "10 ms", the data used by the virtual machine "VM2" is moved to the volume "volume10" of the storage device "storage2".
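- the rows of the automatic configuration change management table 806 (condition 903 and result 904) can be sketched as condition-to-action rules evaluated against observed events; the rule representation, event fields, and result strings below are assumptions made purely for illustration of the examples 906 and 907:

```python
# Illustrative sketch: evaluate rows of the automatic configuration
# change management table 806 (condition 903 -> result 904).
rules = [
    {"owner": "host10", "resource": "VM1",
     "condition": lambda ev: ev.get("type") == "heartbeat_error",
     "result": "move VM1 to host20"},                        # cf. example 906
    {"owner": "host20", "resource": "VM2",
     "condition": lambda ev: ev.get("response_time_ms", 0) > 10,
     "result": "migrate VM2 data to volume10 of storage2"},  # cf. example 907
]

def triggered_changes(event):
    """Return the result 904 of every rule whose condition 903 matches."""
    return [r["result"] for r in rules if r["condition"](event)]
```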
- FIG. 10 shows a table 807 for managing the virtual machine (VM) 111.
- the VM management table 807 stores information on the VM 111. From the VM management table 807, the service level defined in the VM 111, the schedule for operating the VM, the volume in which the data is stored, and the I / O performance information can be known.
- the management computer 201 collects information from the host computer and updates the VM management table 807 periodically or when it accepts an information collection request issued by the user via the I/O device 213.
- the VM management table 807 manages, for example, a VM ID 1001, a host ID 1002, a service level 1003, an operation schedule 1004, a storage ID 1005, a volume ID 1006, a measurement time 1007, an IOPS (Input Output Per Second) 1008, and a response time 1009 in association with each other.
- VM ID 1001 is information for identifying a VM.
- the host ID 1002 is information for identifying the host computer 101 having the VM.
- the service level 1003 is information indicating a service level defined for the VM.
- the operation schedule 1004 is information indicating the time during which the VM is operating.
- the storage ID 1005 is information for identifying the storage apparatus 102 having a volume for storing VM data.
- the volume ID 1006 is information for identifying a volume storing VM data.
- the measurement time 1007 is information indicating the time when the performance of the target VM is measured.
- the IOPS 1008 is information indicating the measured value of the IOPS of the target VM.
- the response time 1009 is information indicating a measured value of the response time of the target VM.
- the service level 1003 uses downtime and response time as indices.
- this embodiment exemplifies a case in which a VM exists on the host computer 101.
- however, the present invention is not limited to this, and a configuration in which no hypervisor exists on the host computer 101 may be used.
- in that case, the VM ID 1001 is left blank.
- in this embodiment, the operation state of the VM is expressed by the operation schedule 1004. Instead, the result of confirming the operation state of the VM 111 may be held together with the measurement time 1007, and that confirmation result may be used to determine the operating state of the VM 111.
- FIG. 11 shows a table 808 for managing a configuration change plan created and registered by a system administrator or the like.
- the configuration change plan management table 808 stores information about configuration changes that are to be implemented via the management computer 201. From the configuration change plan management table 808, information on the configuration change plan to be executed and the expected effect value of the configuration change plan can be known.
- the management computer 201 collects input information of the setting and stores it in the configuration change plan management table 808.
- the information stored in the configuration change plan management table 808 is not limited to information directly input to the management computer 201 by a system administrator or the like.
- a value calculated by the management computer 201 based on input information such as a system administrator may be stored in the configuration change plan management table 808.
- the configuration change plan management table 808 manages, for example, a configuration change plan ID 1101, a task ID 1102, a task type 1103, a task parameter 1104, an execution start time 1105, and an expected effect value 1106.
- of the items 1101 to 1106, the items 1102 to 1106 are information indicating the details of the configuration change plan.
- the configuration change plan ID 1101 is information for identifying the configuration change plan.
- the task ID 1102 is information for identifying a single configuration change process constituting the configuration change plan. A single configuration change process is called a task.
- the task type 1103 is information indicating the type of task.
- the task parameter 1104 is information defining task parameters.
- the execution start time 1105 is information indicating the execution start time of the task (that is, the execution start time of the configuration change plan).
- the expected effect value 1106 is information indicating an operation state expected to be obtained by executing the configuration change plan.
- the configuration change plan with the configuration change plan ID 1101 of "1" changes the processor 244 responsible for processing the volume "volume 1" of the storage device "storage 1" from "CPU1" to "CPU2".
- the expected effect value 1106 when the configuration change plan is implemented is that the operating rates of the processors “CPU1” and “CPU2” are “20% or more and 30% or less”, respectively.
- the configuration change plan with the configuration change plan ID “2” is composed of a plurality of tasks “task 1” and “task 2”.
- the expected effect value 1106 is set so that the average response time of the virtual machine “VM50” becomes “15 ms” or less by executing both tasks.
- in FIG. 11, a configuration change plan executed by the storage device has been described. However, the plan is not limited to this; it may be a configuration change plan executed by the host computer, or a configuration change plan executed by both the storage device and the host computer.
- FIG. 12 shows a configuration change means table 809 that manages means for changing the configuration.
- the configuration change means table 809 stores the configuration change means that can be implemented in the host computer 101 and the storage system 103. From the configuration change means table 809, the types of configuration change that can be implemented in the host computer 101, the types of configuration change that can be implemented in the storage system 103, and the characteristics of each configuration change means can be known.
- the management computer 201 collects information from the computer system and updates the configuration change means table 809 periodically or when it accepts an information collection request issued by the user via the I/O device 213. For example, the management computer 201 collects information on the configuration change means related to the storage apparatus 102 from the storage apparatus 102, and collects information on the configuration change means related to the host computer 101 from the host computer 101 or the like.
- the configuration change means table 809 manages the configuration change means 1201, the execution subject 1202, and the characteristic 1203 in association with each other.
- the configuration change means 1201 is information indicating the type of configuration change means.
- the execution subject 1202 is information indicating the subject that executes the configuration change means.
- the characteristic 1203 is information indicating the characteristic of the configuration change means.
- in FIG. 12, configuration change means having downtime as the characteristic 1203 are shown.
- for example, the downtime of the configuration change means called volume migration, which moves a volume, is set to "10.0 ms". This indicates that a downtime of 10.0 ms occurs when volume migration is performed.
- however, the present invention is not limited to this.
- for example, the configuration change required time, i.e., the time required from the start to the end of the configuration change, or a formula for calculating that required time, may be set in the characteristic 1203.
- another characteristic may be set as the characteristic 1203 instead of, or together with, such a formula.
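- a minimal sketch of consulting the characteristic 1203 when screening configuration change means against a downtime service level (cf. step S201 described later) could look like the following; the table contents and the 0.0 ms value for "VM data movement" are illustrative assumptions:

```python
# Sketch of the configuration change means table 809: each means 1201
# has an execution subject 1202 and a characteristic 1203 (downtime).
# FIG. 12 gives "10.0 ms" for volume migration; other values assumed.
means_table = {
    "volume migration": {"subject": "storage", "downtime_ms": 10.0},
    "VM data movement": {"subject": "host", "downtime_ms": 0.0},
}

def satisfies_downtime(means_name, allowed_downtime_ms):
    """Keep only means whose downtime characteristic stays within the
    downtime allowed by the service level."""
    return means_table[means_name]["downtime_ms"] <= allowed_downtime_ms
```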
- FIGS. 13 and 14 are flowcharts showing processing for controlling (correcting) automatic configuration changes. This process is realized by the processor 212 executing the automatic configuration change control program 810.
- FIG. 13 shows the entire automatic configuration change control process.
- FIG. 14 shows details of a part of the processing S103 in FIG.
- the subject of the operation will be described as the automatic configuration change control program 810.
- the automatic configuration change control process will be described with reference to FIG.
- first, the automatic configuration change control program 810 detects a sign of an automatic configuration change (S100). Detecting a sign means obtaining information indicating when and how the configuration will be changed. For example, information indicating that the I/O path to "Volume 1" will be switched to "Volume 2" after 1 second is acquired. In other words, detecting a sign of an automatic configuration change means determining whether the automatic configuration change defined in the automatic configuration change management table 806 of FIG. 9 will be performed. Detecting a sign of an automatic configuration change may also be paraphrased as predicting the execution of an automatic configuration change. The following three methods are examples of methods for detecting such a sign.
- the first method is a method of receiving a configuration change schedule transmitted from the storage system 103 and the host computer 101 (or a host management computer that manages the host computer; the same applies hereinafter).
- the second method is a determination method based on failure predictor information transmitted from the storage system 103 and the host computer 101. For example, when failure predictor information is received, the automatic configuration change control program 810 determines whether the predicted failure satisfies the content set in the condition 903 of the automatic configuration change management table 806. If the automatic configuration change control program 810 determines that the predicted failure satisfies the condition 903, it considers that the configuration will be changed with the contents set in the result 904 of the automatic configuration change management table 806.
- the third method is a method in which performance information after a predetermined operation period has elapsed is calculated as predicted performance information from the performance history information held by the management computer 201, and whether an automatic configuration change will be performed is predicted based on the calculated predicted performance information.
- the automatic configuration change control program 810 determines whether the calculated predicted performance information satisfies the contents set in the condition 903 of the automatic configuration change management table 806. If the automatic configuration change control program 810 determines that the predicted performance information satisfies the condition 903, the automatic configuration change control program 810 considers that the configuration is changed with the contents set in the result 904 of the automatic configuration change management table 806.
- the automatic configuration change control program 810 may use the least square method or other algorithms in order to calculate the predicted performance information.
- with the least squares method, for example, a straight line or a curve indicating the change of the response time over time is calculated from the change over time of the response time 1009 in the VM management table 807.
- the automatic configuration change control program 810 may calculate the tendency of the measurement value of the performance information instead of the predicted performance information.
- the tendency in this case is, for example, the slope of a straight line indicating the time change of the response time.
- the predetermined operation period may be designated by the user, or a predetermined value stored in advance in the storage resource 211 may be used.
- the predicted performance information may represent the performance after the predetermined operation period has elapsed from the measurement time of the performance information, or the performance after the predetermined operation period has elapsed from the time when the predicted performance information was calculated.
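- the third sign-detection method can be sketched as follows: fit a straight line to the response time history by ordinary least squares and extrapolate the value after the predetermined operation period. The data points and the choice of a purely linear fit are assumptions for illustration:

```python
# Sketch: least-squares line fit over response time history, used to
# predict the response time after a predetermined operation period.
def fit_line(times, values):
    """Ordinary least squares for value = slope * time + intercept."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    slope = (sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
             / sum((t - mean_t) ** 2 for t in times))
    return slope, mean_v - slope * mean_t

def predict_response_time(history, horizon):
    """history: list of (time, response_time_ms); horizon: offset past
    the last measurement time."""
    times, values = zip(*history)
    slope, intercept = fit_line(times, values)
    return slope * (times[-1] + horizon) + intercept

history = [(0, 10.0), (1, 11.0), (2, 12.0)]  # (time, response_time_ms)
```

The predicted value would then be compared with the condition 903 (e.g., a response time threshold) to decide whether the automatic configuration change is expected to fire.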
- the method of detecting a sign of automatic configuration change is not limited to the three methods described above, and other methods may be used.
- the automatic configuration change control program 810 calculates a predicted value of the configuration and performance when the automatic configuration change is performed (S101).
- as a method for calculating the predicted value of performance, the method described in step S100 may be used.
- as another method for calculating the predicted value of performance, there is a method based on the relationship between the I/O amount from the access source and the operation rate of the component. For example, the relationship between the operation rate of an arbitrary processor 244 and the total IOPS of the volume group whose I/O is processed by that processor 244 is quantified and held. This makes it possible to predict what the operation rate of the processor 244 will become as a result of a change in IOPS caused by, for example, an I/O path change accompanying a VM migration.
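- the quantified relationship mentioned above can be sketched as a simple model fitted from (total IOPS, operation rate) samples; assuming a linear relationship through the origin is purely an illustrative choice, not something the embodiment prescribes:

```python
# Sketch: quantify the relationship between a processor 244's operation
# rate and the total IOPS of the volumes it serves, then predict the
# rate after an IOPS change. A proportional model is assumed here.
def operation_rate_model(samples):
    """samples: list of (total_iops, operation_rate). Returns a
    rate-per-IOPS coefficient fitted through the origin."""
    num = sum(iops * rate for iops, rate in samples)
    den = sum(iops ** 2 for iops, _ in samples)
    return num / den

def predict_operation_rate(coeff, total_iops):
    """Predicted operation rate (%) for a hypothetical total IOPS."""
    return coeff * total_iops
```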
- next, the automatic configuration change control program 810 determines whether the expected effect value of the configuration change plan to be executed can still be satisfied when the automatic configuration change is performed (S102). For example, the automatic configuration change control program 810 compares the predicted performance value calculated in step S101 with the contents set in the expected effect value 1106 of the configuration change plan management table 808, and determines whether the expected effect value planned by the configuration change plan can be obtained even if the automatic configuration change is performed.
- when a failure triggers the automatic configuration change, the configuration change plan concerning a configuration that uses the failed part is invalidated.
- for example, when the processor 244 has failed, the configuration change plan "change the volume assigned to the processor 244" cannot be executed. Therefore, a plan including such an infeasible configuration change is invalidated.
- alternatively, a configuration change plan including a configuration change related to the failed part that causes the automatic configuration change may simply be excluded from the determination targets in step S102.
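- the comparison in step S102 can be sketched as a range check of predicted metrics against the expected effect value 1106; the metric names and the range encoding below are assumptions for illustration (cf. the plan with ID "1", whose expected effect is "20% or more and 30% or less" for "CPU1" and "CPU2"):

```python
# Sketch of step S102: does the predicted performance after the
# automatic configuration change still satisfy the expected effect
# value 1106 of a configuration change plan?
def meets_expected_effect(predicted, expected):
    """expected maps a metric name to an (allowed_min, allowed_max)
    range; predicted maps the same names to predicted values."""
    return all(lo <= predicted[metric] <= hi
               for metric, (lo, hi) in expected.items())

# Expected effect of the plan with ID "1": both CPU rates in 20-30 %.
expected_1106 = {"CPU1_rate": (20.0, 30.0), "CPU2_rate": (20.0, 30.0)}
```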
- when the automatic configuration change control program 810 determines that there is no possibility of impairing the expected effect value planned in the configuration change plan even if the automatic configuration change is performed as originally planned (S102: YES), this process ends normally. This is because there is no need to correct the contents of the automatic configuration change.
- when the automatic configuration change control program 810 determines that the expected effect value planned for the configuration change plan cannot be obtained if the automatic configuration change is performed as originally scheduled (S102: NO), the automatic configuration change control program 810 corrects the automatic configuration change.
- the automatic configuration change control program 810 generates an automatic configuration change alternative that can simultaneously achieve the service level of the VM 111 or the host computer 101 and the expected effect value of the configuration change plan (S103). Details of step S103 will be described later with reference to FIG.
- the automatic configuration change control program 810 determines whether an alternative has been generated as a result of executing Step S103 (S104).
- when an alternative for the automatic configuration change cannot be generated (S104: NO), the automatic configuration change control program 810 presents the influence that the execution of the automatic configuration change has on the configuration change plan to the system administrator or the like (S105), and this process ends normally.
- the automatic configuration change control program 810 outputs information on the influence that the execution of the automatic configuration change has on the configuration change plan. For example, the system administrator may be notified via the I/O device 213 of the management computer 201, or by other means such as e-mail.
- in step S105, after the influence of the automatic configuration change on the configuration change plan is presented, the implementation of the affected configuration change plan may be stopped.
- when the automatic configuration change control program 810 can generate an alternative for the automatic configuration change (S104: YES), it executes the alternative (S106) and ends this process normally.
- when a plurality of alternatives are generated, any one of them may be selected at random, or a predetermined evaluation may be performed on the alternatives and one may be chosen based on the evaluation result. For example, with I/O performance as the evaluation axis, the alternative with the highest predicted I/O performance value is selected. Note that one alternative may be selected by a method other than those described above.
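- the evaluation-based selection among alternatives can be sketched as follows; representing each alternative as a record with a predicted I/O performance value is an assumption made here for illustration:

```python
# Sketch: pick one of several generated alternatives by evaluating
# each along a single axis (here, predicted I/O performance).
def choose_alternative(alternatives):
    """alternatives: list of dicts carrying a 'predicted_iops'
    evaluation value; returns the highest-performing one."""
    return max(alternatives, key=lambda a: a["predicted_iops"])
```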
- FIG. 14 is a flowchart showing details of step S103 in FIG. In this process, an automatic configuration change alternative that can achieve both the service level of the VM 111 or the host computer 101 and the expected effect of the configuration change plan to be executed is generated.
- first, the automatic configuration change control program 810 refers to the service level 1003 of the VM management table 807 and determines whether content defining a downtime is set as the service level (S200). If the automatic configuration change control program 810 determines that no downtime is set as the service level (S200: NO), it skips step S201 and proceeds to step S202 described later.
- when the automatic configuration change control program 810 determines that a downtime is set in the service level of the element (host computer or VM) affected by the automatic configuration change (S200: YES), it selects a configuration change means that satisfies the downtime service level (S201).
- for example, the automatic configuration change control program 810 selects a configuration change means that satisfies the service level of the VM and the host computer using the resource targeted by the automatic configuration change from among the configuration change means 1201 registered in the configuration change means table 809.
- for example, the VM and the host computer that use the resource targeted by the automatic configuration change are those that use "volume 50" of "storage 1".
- a VM and a host computer that use resources subject to automatic configuration change can be identified by referring to the VM management table 807.
- the configuration change means may be selected by ignoring the service level for downtime.
- some configuration change means require a certain time from the start to the end of the configuration change operation, for example, "VM data movement" in FIG. 12.
- the timing at which downtime occurs during implementation of a configuration change means varies depending on the type of configuration change means.
- even if the VM is not operating when execution of the configuration change means starts, the VM may be operating at the timing when the downtime caused by the execution occurs. In this case, as a result, the downtime defined in the service level 1003 may be violated.
- therefore, the downtime occurrence timing of each configuration change means may be managed in the characteristic 1203 of the configuration change means table 809.
- when selecting a configuration change means, the automatic configuration change control program 810 may refer to that downtime information and the contents of the operation schedule 1004 of the VM management table 807 to determine the influence of the downtime on the service level.
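- the schedule check mentioned above can be sketched as follows (a minimal illustration; representing the operation schedule 1004 and the downtime window as hour ranges of a single day is an assumption made here):

```python
# Sketch: does the downtime window of a configuration change means
# overlap the VM's operation schedule 1004?
def vm_operating(schedule, hour):
    """schedule: (start_hour, end_hour) during which the VM operates."""
    start, end = schedule
    return start <= hour < end

def downtime_violates_schedule(schedule, downtime_start, downtime_end):
    """True when any part of the downtime window overlaps the schedule,
    i.e., the downtime would hit the VM while it is operating."""
    start, end = schedule
    return downtime_start < end and downtime_end > start
```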
- next, the automatic configuration change control program 810 determines, for each configuration change plan to be executed (S202), whether the selected configuration change means is data migration, resource allocation change, or neither (S203, S204).
- when the determination result in step S204 is "data migration", the automatic configuration change control program 810 calculates the performance after data migration for each candidate migration destination resource (S205), generates an alternative (S207), and ends this process normally.
- the alternative generated in step S207 uses a resource that satisfies both the service level of the I / O performance and the expected effect value of the configuration change plan as the destination.
- the method for calculating the performance after movement is the same as the method described in step S101 in FIG.
- in order to select a resource that satisfies both the service level of the I/O performance and the expected effect value of the configuration change plan, the automatic configuration change control program 810 first selects resources that satisfy the constraint conditions of the target configuration change means.
- the automatic configuration change control program 810 calculates performance for each resource that satisfies the constraint conditions, and selects a resource that satisfies both the service level and the expected effect value of the configuration change plan.
- Constraint conditions for configuration change means differ for each configuration change means.
- for example, the constraint conditions include that the allocation destination processor 244 exists in the same storage apparatus, and that the number of volumes allocated to the allocation destination processor does not exceed a predetermined upper limit. Other constraint conditions may be used.
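- a minimal sketch of such a constraint check follows; the record layout and the upper limit `MAX_VOLUMES_PER_CPU` are hypothetical values introduced only for illustration:

```python
# Sketch: constraint check for the configuration change means that
# reassigns a volume to another processor 244. The destination must be
# in the same storage apparatus and stay under a volume-count limit.
MAX_VOLUMES_PER_CPU = 4  # illustrative upper limit, not from the source

def satisfies_constraints(volume, dest_cpu):
    same_storage = volume["storage_id"] == dest_cpu["storage_id"]
    has_capacity = len(dest_cpu["volumes"]) < MAX_VOLUMES_PER_CPU
    return same_storage and has_capacity
```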
- when the determination result in step S204 is "resource allocation change", the automatic configuration change control program 810 calculates the performance after the resource allocation change for each candidate allocation destination resource (S206), generates an alternative (S208), and ends this process normally.
- the alternative generated in step S208 selects a resource that satisfies both the service level of the I / O performance and the expected effect value of the configuration change plan as the allocation change destination.
- the performance calculation method after the resource allocation change is the same as the method shown in step S101 of FIG.
- the automatic configuration change control program 810 selects a resource that satisfies both the service level of the I / O performance and the expected effect value of the configuration change plan, satisfying the constraints of the target configuration change means, Calculate the performance for each selected resource.
- the automatic configuration change control program 810 selects resources that simultaneously satisfy the service level and the expected effect value of the configuration change plan. As described above, the constraint condition of the configuration changing unit is different for each configuration changing unit.
- in the above description, an alternative that satisfies the service level of I/O performance and the expected effect of the configuration change plan is generated from a single configuration change.
- however, a desired alternative may not be obtainable unless a plurality of configuration changes are combined.
- for example, a case is conceivable in which a VM provided on a volume of one storage apparatus is moved to a volume of the other storage apparatus. If implemented, this case could satisfy both the service level of the I/O performance and the expected effect value of the configuration change plan. However, this case violates the constraint conditions of the configuration change means and cannot be implemented with the current configuration.
- therefore, in steps S207 and S208, the automatic configuration change control program 810 performs the process of "selecting a resource that satisfies the service level of I/O performance and the expected effect of the configuration change plan" while omitting the process of "selecting a resource that satisfies the constraint conditions of the configuration change means".
- that is, the automatic configuration change control program 810 calculates the performance for all resources, selects every resource that simultaneously satisfies the service level and the expected effect value of the configuration change plan, and generates a first configuration change that uses the selected resources as the data migration destination or allocation change destination.
- the automatic configuration change control program 810 uses the configuration change unit selected in step S201 for the resource that does not satisfy the constraint condition of the target configuration change unit among the resources to be subjected to the first configuration change. It is determined whether or not the configuration that satisfies the condition can be changed.
- if so, that configuration change is set as the second configuration change. The automatic configuration change control program 810 then generates a configuration change plan having the first configuration change and the second configuration change as its tasks.
- by performing the above processing, the automatic configuration change control program 810 can compose an alternative plan for the automatic configuration change from a plurality of configuration changes.
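The two-step composition described above (a first configuration change that picks destination resources meeting both the service level and the expected effect value, plus a second configuration change that repairs any constraint violation) can be sketched as follows. This is an illustrative sketch only; `Resource`, `Plan`, `generate_alternative`, and the threshold comparison are assumed names and simplifications, not the actual implementation of program 810.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    predicted_io_response_ms: float   # predicted I/O performance at this destination
    meets_constraints: bool           # constraint condition of the changing means

@dataclass
class Plan:
    tasks: list = field(default_factory=list)

def generate_alternative(resources, service_level_ms, expected_effect_ms,
                         can_fix_constraint):
    """Compose an alternative plan from first (data-move) and, where needed,
    second (constraint-repairing) configuration changes."""
    plan = Plan()
    for r in resources:
        # First configuration change: the resource must satisfy both the
        # service level and the expected effect value of the plan.
        if r.predicted_io_response_ms <= min(service_level_ms, expected_effect_ms):
            if not r.meets_constraints:
                # Second configuration change: make the resource satisfy the
                # constraint condition of the selected configuration changing means.
                if not can_fix_constraint(r):
                    continue  # cannot be repaired -> unusable as a destination
                plan.tasks.append(("second_change", r.name))
            plan.tasks.append(("first_change", r.name))
    return plan if plan.tasks else None
```

Applied alone, the sketch shows why a single resource can require two tasks: a constraint-violating but otherwise suitable destination enters the plan only together with the change that repairs its constraint.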
- when an alternative plan for the automatic configuration change is executed, the automatic configuration change setting that was set for a resource before the execution may be canceled. Therefore, the alternative plan may be generated so as to include the resetting of any automatic configuration change setting that is canceled as a result of its execution. As a result, after the alternative plan is implemented for a certain resource, the automatic configuration change can be reset for that resource, so the reliability of the system can be maintained and usability is further improved.
- the automatic configuration change control program 810 determines, for an arbitrary resource, whether the automatic configuration change can be reset after the execution of the alternative plan.
- the automatic configuration change control program 810 preferentially selects a configuration change plan that includes the resetting of the automatic configuration change for a resource determined to allow such resetting, and decides the preferentially selected plan as the alternative plan. The conditions under which the automatic configuration change can be reset differ depending on the type of configuration change.
- a remote copy pair is constructed with the change destination volume “volume 10” and an arbitrary volume. Furthermore, in the configuration change example 905, a multipath is set so that the host computer can access each volume constituting the remote copy pair.
- therefore, an alternative plan that can construct a remote copy pair with an arbitrary volume and can set multipaths on the host computer can be determined to be an alternative plan that allows the automatic configuration change to be reset.
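The determination above, using the remote copy example, can be sketched as a simple predicate. All names here are hypothetical illustrations; as noted, the real conditions depend on the type of configuration change.

```python
# Hypothetical predicate: the alternative plan permits resetting the automatic
# configuration change only if a remote copy pair can be constructed with some
# volume other than the change destination AND the host computer can set
# multipaths to the volumes constituting the pair.
def can_reset_auto_change(dest_volume, candidate_volumes, host_multipath_ok):
    pairable = any(v != dest_volume for v in candidate_volumes)
    return pairable and host_multipath_ok
```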
- a step of presenting the effect of executing the alternative may be included in the flowchart of FIG.
- one effect of the alternative plan is that an automatic configuration change that has been set is canceled as a result of implementing the alternative plan.
- as the method for presenting the influence of the alternative plan, the method for presenting the influence of the automatic configuration change described above can be used.
- even in a situation where the automatic configuration change, which is performed according to the current state of the computer system, and the configuration change plan, which is planned in advance by the system administrator to improve the operation status or the like, are performed independently of each other, and where the configuration change plan cannot be executed without obtaining approval from the manager, an alternative plan for the automatic configuration change can be generated and executed so that both the expected effect of the automatic configuration change and the expected effect value of the configuration change plan are satisfied. Therefore, the reliability and management efficiency of the computer system can be improved.
- the configuration change to be achieved can be implemented as an alternative to the predefined automatic configuration change.
- a predetermined condition such as a cluster configuration
- for this reason, the configuration change plan created in advance by the system administrator with the approval of the manager can be executed. Consequently, the operational efficiency of system management can be improved.
- 101: Host computer, 102: Storage device, 103: Storage system, 201: Management computer, 233: Switch
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Quality & Reliability (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer Hardware Design (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Hardware Redundancy (AREA)
Abstract
Description
Hereinafter, a volume may be referred to as a VOL.
When the storage system 103 determines that no page is allocated to the designated virtual area, it selects an unused page from the pool 110 associated with the write-target VVOL 106. The storage system 103 allocates the selected unused page to the designated virtual area and writes the write-target data to the allocated page.
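The write path just described (on first write to a virtual area, take an unused page from the pool associated with the write-target VVOL, allocate it, and write the data to it) can be sketched as follows. `Pool`, `VVOL`, and the page representation are illustrative assumptions, not the actual implementation of the storage system 103.

```python
class Pool:
    def __init__(self, n_pages):
        self.free = list(range(n_pages))  # ids of unused pages

class VVOL:
    def __init__(self, pool):
        self.pool = pool
        self.page_map = {}   # virtual area -> allocated page id
        self.pages = {}      # page id -> stored data

    def write(self, virtual_area, data):
        if virtual_area not in self.page_map:   # no page allocated yet
            page = self.pool.free.pop(0)        # select an unused page from the pool
            self.page_map[virtual_area] = page  # allocate it to the virtual area
        # write the write-target data to the allocated page
        self.pages[self.page_map[virtual_area]] = data
        return self.page_map[virtual_area]
```

A second write to the same virtual area reuses the already-allocated page, so pool pages are consumed only on first writes.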
Claims (14)
- A management computer connected to a computer and a storage apparatus, comprising:
a memory that stores first configuration information indicating a plurality of logical storage areas provided by the storage apparatus, second configuration information indicating operation requirements of a predetermined object that is stored in a first logical storage area among the plurality of logical storage areas and executed by the computer, and configuration change plan information indicating a plan of a first configuration change scheduled to be executed by the computer or the storage apparatus; and
a microprocessor connected to the memory,
wherein the microprocessor:
determines whether a second configuration change, which is set in advance to be performed in the computer or the storage apparatus according to a predetermined condition, will be performed;
when determining that the second configuration change will be performed, predicts a performance index value for a predetermined performance index of the computer or the storage apparatus in a case where the second configuration change is performed;
determines, based on the predicted performance index value, whether an expected effect value set in advance for the configuration change plan is satisfied; and
when determining that the expected effect value is not satisfied, generates an alternative plan that satisfies both the operation requirements of the predetermined object and the expected effect value.
- The management computer according to claim 1, wherein the microprocessor executes the alternative plan when the alternative plan has been successfully generated.
- The management computer according to claim 2, wherein, when the setting of the second configuration change that is canceled as a result of execution of the alternative plan can be included in the alternative plan, the microprocessor generates the alternative plan so as to include resetting of the second configuration change.
- The management computer according to claim 3, wherein, when the alternative plan cannot be generated, the microprocessor outputs a notification that the expected effect value of the configuration change plan can no longer be achieved.
- The management computer according to claim 4, wherein the microprocessor cancels the configuration change plan when the alternative plan cannot be generated.
- The management computer according to claim 5, wherein, as the performance index values, the microprocessor calculates I/O performance and resource utilization of the computer or the storage apparatus in a case where the second configuration change is performed.
- The management computer according to any one of claims 1 to 6, wherein the alternative plan includes a plurality of configuration changes.
- The management computer according to any one of claims 1 to 6, wherein the predetermined object is a virtual machine.
- The management computer according to any one of claims 1 to 6, wherein the second configuration change is a cluster configuration switchover process executed upon occurrence of a failure.
- The management computer according to any one of claims 1 to 6, wherein the second configuration change is a data migration process executed in response to a change in the state of the logical storage area.
- The management computer according to any one of claims 1 to 6, wherein the second configuration change is a resource allocation change process executed in response to a change in the state of the logical storage area.
- A method of managing, by using a management computer, a computer system including a computer and a storage apparatus, wherein the management computer:
stores operation requirements of a predetermined object that is stored in a predetermined logical storage area among a plurality of logical storage areas provided by the storage apparatus and executed by the computer, and configuration change plan information indicating a plan of a first configuration change scheduled to be executed by the computer or the storage apparatus;
determines whether a second configuration change, which is set in advance to be performed in the computer or the storage apparatus according to a predetermined condition, will be performed;
when determining that the second configuration change will be performed, predicts a performance index value for a predetermined performance index of the computer or the storage apparatus in a case where the second configuration change is performed;
determines, based on the predicted performance index value, whether an expected effect value set in advance for the configuration change plan is satisfied; and
when determining that the expected effect value is not satisfied, generates an alternative plan that satisfies both the operation requirements of the predetermined object and the expected effect value.
- The method of managing a computer system according to claim 12, wherein the management computer executes the alternative plan when the alternative plan has been successfully generated, and outputs, when the alternative plan cannot be generated, a notification that the expected effect value of the configuration change plan can no longer be achieved.
- The method of managing a computer system according to claim 13, wherein the management computer cancels the configuration change plan when the alternative plan cannot be generated.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/080394 WO2015068299A1 (ja) | 2013-11-11 | 2013-11-11 | 管理計算機および計算機システムの管理方法 |
US14/768,795 US9639435B2 (en) | 2013-11-11 | 2013-11-11 | Management computer and management method of computer system |
JP2015546261A JP6151795B2 (ja) | 2013-11-11 | 2013-11-11 | 管理計算機および計算機システムの管理方法 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/080394 WO2015068299A1 (ja) | 2013-11-11 | 2013-11-11 | 管理計算機および計算機システムの管理方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015068299A1 true WO2015068299A1 (ja) | 2015-05-14 |
Family
ID=53041100
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/080394 WO2015068299A1 (ja) | 2013-11-11 | 2013-11-11 | 管理計算機および計算機システムの管理方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US9639435B2 (ja) |
JP (1) | JP6151795B2 (ja) |
WO (1) | WO2015068299A1 (ja) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017022233A1 (ja) * | 2015-08-06 | 2017-02-09 | 日本電気株式会社 | 情報処理装置、リクエスト処理遅延制御方法及び記憶媒体 |
CN106844010A (zh) * | 2017-01-20 | 2017-06-13 | 深信服科技股份有限公司 | 动态内存大页调度处理方法及装置 |
JP2019074798A (ja) * | 2017-10-12 | 2019-05-16 | 株式会社日立製作所 | リソース管理装置、リソース管理方法、及びリソース管理プログラム |
JP2022045666A (ja) * | 2020-09-09 | 2022-03-22 | 株式会社日立製作所 | リソース割当制御装置、計算機システム、及びリソース割当制御方法 |
JP7132386B1 (ja) | 2021-03-31 | 2022-09-06 | 株式会社日立製作所 | ストレージシステム及びストレージシステムの負荷分散方法 |
JP7518364B2 (ja) | 2020-08-27 | 2024-07-18 | 富士通株式会社 | 情報処理装置およびパス制御方法 |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015104811A1 (ja) * | 2014-01-09 | 2015-07-16 | 株式会社日立製作所 | 計算機システム及び計算機システムの管理方法 |
US9052938B1 (en) | 2014-04-15 | 2015-06-09 | Splunk Inc. | Correlation and associated display of virtual machine data and storage performance data |
US10097431B1 (en) | 2014-06-06 | 2018-10-09 | Amazon Technologies, Inc. | Routing to tenant services utilizing a service directory |
US10250455B1 (en) * | 2014-06-06 | 2019-04-02 | Amazon Technologies, Inc. | Deployment and management of tenant services |
JP6443170B2 (ja) * | 2015-03-26 | 2018-12-26 | 富士通株式会社 | 階層ストレージ装置,階層ストレージ制御装置,階層ストレージ制御プログラム及び階層ストレージ制御方法 |
US11153223B2 (en) * | 2016-04-07 | 2021-10-19 | International Business Machines Corporation | Specifying a disaggregated compute system |
JP6791834B2 (ja) * | 2017-11-30 | 2020-11-25 | 株式会社日立製作所 | 記憶システム及び制御ソフトウェア配置方法 |
US11314584B1 (en) * | 2020-11-25 | 2022-04-26 | International Business Machines Corporation | Data quality-based confidence computations for KPIs derived from time-series data |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009217373A (ja) * | 2008-03-07 | 2009-09-24 | Ns Solutions Corp | 情報処理装置、情報処理方法及びプログラム |
JP2010191524A (ja) * | 2009-02-16 | 2010-09-02 | Hitachi Ltd | 管理計算機及び処理管理方法 |
WO2013084332A1 (ja) * | 2011-12-08 | 2013-06-13 | 株式会社日立製作所 | 仮想計算機の制御方法及び仮想計算機システム |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7050863B2 (en) * | 2002-09-11 | 2006-05-23 | Fisher-Rosemount Systems, Inc. | Integrated model predictive control and optimization within a process control system |
JP4646309B2 (ja) | 2005-09-26 | 2011-03-09 | 新日本空調株式会社 | デシカント式換気装置 |
US8160056B2 (en) * | 2006-09-08 | 2012-04-17 | At&T Intellectual Property Ii, Lp | Systems, devices, and methods for network routing |
US8117495B2 (en) * | 2007-11-26 | 2012-02-14 | Stratus Technologies Bermuda Ltd | Systems and methods of high availability cluster environment failover protection |
US8543778B2 (en) | 2010-01-28 | 2013-09-24 | Hitachi, Ltd. | Management system and methods of storage system comprising pool configured of actual area groups of different performances |
US8874954B1 (en) * | 2012-10-19 | 2014-10-28 | Symantec Corporation | Compatibility of high availability clusters supporting application failover with shared storage in a virtualization environment without sacrificing on virtualization features |
US9348627B1 (en) * | 2012-12-20 | 2016-05-24 | Emc Corporation | Distributed dynamic federation between multi-connected virtual platform clusters |
- 2013
- 2013-11-11 WO PCT/JP2013/080394 patent/WO2015068299A1/ja active Application Filing
- 2013-11-11 US US14/768,795 patent/US9639435B2/en not_active Expired - Fee Related
- 2013-11-11 JP JP2015546261A patent/JP6151795B2/ja not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009217373A (ja) * | 2008-03-07 | 2009-09-24 | Ns Solutions Corp | 情報処理装置、情報処理方法及びプログラム |
JP2010191524A (ja) * | 2009-02-16 | 2010-09-02 | Hitachi Ltd | 管理計算機及び処理管理方法 |
WO2013084332A1 (ja) * | 2011-12-08 | 2013-06-13 | 株式会社日立製作所 | 仮想計算機の制御方法及び仮想計算機システム |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017022233A1 (ja) * | 2015-08-06 | 2017-02-09 | 日本電気株式会社 | 情報処理装置、リクエスト処理遅延制御方法及び記憶媒体 |
CN106844010A (zh) * | 2017-01-20 | 2017-06-13 | 深信服科技股份有限公司 | 动态内存大页调度处理方法及装置 |
JP2019074798A (ja) * | 2017-10-12 | 2019-05-16 | 株式会社日立製作所 | リソース管理装置、リソース管理方法、及びリソース管理プログラム |
JP7518364B2 (ja) | 2020-08-27 | 2024-07-18 | 富士通株式会社 | 情報処理装置およびパス制御方法 |
JP2022045666A (ja) * | 2020-09-09 | 2022-03-22 | 株式会社日立製作所 | リソース割当制御装置、計算機システム、及びリソース割当制御方法 |
JP7191906B2 (ja) | 2020-09-09 | 2022-12-19 | 株式会社日立製作所 | リソース割当制御装置、計算機システム、及びリソース割当制御方法 |
JP7132386B1 (ja) | 2021-03-31 | 2022-09-06 | 株式会社日立製作所 | ストレージシステム及びストレージシステムの負荷分散方法 |
JP2022157664A (ja) * | 2021-03-31 | 2022-10-14 | 株式会社日立製作所 | ストレージシステム及びストレージシステムの負荷分散方法 |
Also Published As
Publication number | Publication date |
---|---|
US9639435B2 (en) | 2017-05-02 |
JP6151795B2 (ja) | 2017-06-21 |
JPWO2015068299A1 (ja) | 2017-03-09 |
US20150378848A1 (en) | 2015-12-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6151795B2 (ja) | 管理計算機および計算機システムの管理方法 | |
JP6051228B2 (ja) | 計算機システム、ストレージ管理計算機及びストレージ管理方法 | |
JP5953433B2 (ja) | ストレージ管理計算機及びストレージ管理方法 | |
JP5756240B2 (ja) | 管理システム及び管理方法 | |
US8639899B2 (en) | Storage apparatus and control method for redundant data management within tiers | |
US10359938B2 (en) | Management computer and computer system management method | |
US8443241B2 (en) | Runtime dynamic performance skew elimination | |
JP5658197B2 (ja) | 計算機システム、仮想化機構、及び計算機システムの制御方法 | |
US20120005435A1 (en) | Management system and methods of storage system comprising pool configured of actual area groups of different performances | |
US10108517B1 (en) | Techniques for data storage systems using virtualized environments | |
US10846231B2 (en) | Storage apparatus, recording medium, and storage control method | |
JP2015520876A (ja) | 情報記憶システム及び情報記憶システムの制御方法 | |
US20120297156A1 (en) | Storage system and controlling method of the same | |
US9760292B2 (en) | Storage system and storage control method | |
WO2016103471A1 (ja) | 計算機システムおよび管理プログラム | |
US8572347B2 (en) | Storage apparatus and method of controlling storage apparatus | |
US8627126B2 (en) | Optimized power savings in a storage virtualization system | |
JP5597266B2 (ja) | ストレージシステム | |
US20240311002A1 (en) | Scaling management apparatus and scaling management method for storage system including storage nodes | |
WO2017163322A1 (ja) | 管理計算機、および計算機システムの管理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13897116 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 14768795 Country of ref document: US |
ENP | Entry into the national phase |
Ref document number: 2015546261 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 13897116 Country of ref document: EP Kind code of ref document: A1 |