WO2015049789A1 - Resource management system and resource management method - Google Patents
Resource management system and resource management method
- Publication number
- WO2015049789A1 (PCT/JP2013/077063; JP2013077063W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- resource
- container
- application
- resources
- predetermined
- Prior art date
Classifications
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F9/5011—Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
Definitions
- the present invention relates to a resource management system and a resource management method.
- Conventionally, a system that ran on fixed resources is decoupled from those resources by virtualization technology and migrated between multiple physical computer resources according to the operating status of business applications. Load imbalance can thereby be leveled dynamically.
- Patent Document 1 discloses a technique for quickly eliminating a load imbalance between groups in a virtual machine system in which a plurality of tenants use a plurality of physical machine resources.
- In Patent Document 1, a plurality of virtual machines are managed across a plurality of resource groups and are migrated between physical computer resources according to load, thereby distributing the load.
- the conventional technology uniformly evaluates the performance load of the virtual server, and does not consider the performance characteristics of the application provided on the virtual server.
- Business applications have resource requirements, but these requirements vary with the type of application and the scale of its configuration.
- A method of defining, within a tenant, dedicated resource areas sized according to the performance characteristics of each application can be considered. For example, a resource area for application A and a resource area for application B are prepared in the tenant, and a user generates a virtual server using resources in the dedicated area for the desired application. Although each dedicated resource area can be operated efficiently with this method, no improvement in the utilization rate of the resources as a whole can be expected.
- the present invention has been made in view of the above problems, and an object of the present invention is to provide a resource management system and a resource management method capable of improving resource utilization efficiency in response to a change in situation.
- A resource management system includes a plurality of physical computers that provide resources, a plurality of virtual computers that execute at least one application program, and an integrated resource management unit capable of communicating with the plurality of physical computers and the plurality of virtual computers.
- The integrated resource management unit prepares in advance a plurality of containers, each of which manages a part of the associated resources and provides it as a virtual resource, and manages resource partitions that define the usage range of the provided virtual resources. Each container is associated with one of the application programs. When a resource configuration change request is received for an application program operating in a resource partition, resources managed by another container are migrated into the container associated with that application program; the amount of resources managed by the other container may be smaller than the virtual resources the other container provides.
- A container is associated with one of the application programs, and resources managed by another container are migrated into the container associated with the application program. Therefore, according to the present invention, resources can be migrated in response to a change in circumstances, and resource utilization efficiency can be increased.
- Brief description of the drawings: a block diagram of the computer system that manages resources; an explanatory diagram showing an overview of the resource management system and method; and a flowchart showing the process of changing the configuration of a container in response to a request.
- In this embodiment, resources are allocated in response to changes in demand on the computer system, beyond the resource area defined for each tenant or the resource area defined for each application. By adapting the allocation dynamically, the resource utilization efficiency of the entire data center is improved.
- performance characteristics (resource requirements) required by a plurality of applications are acquired from an application management unit for managing each application.
- the resource demand for the entire resource is calculated from a unified viewpoint in consideration of the performance characteristics required by each application.
- the arrangement of the entire resource is planned based on the resource demand considering the performance characteristics required by the application.
- the resource pool is reconfigured beyond the resource area dedicated to the application.
- According to the present embodiment, it is possible to optimize the utilization rate of the resources as a whole in a data center where demand changes moment to moment, without conflicting with the per-tenant resource management and application management systems.
- the application management know-how cultivated in the conventional operation can be continuously utilized while ensuring security for each tenant.
- each instance is configured to satisfy the resource requirements required by the application, and the utilization rate of the entire resource is increased.
- FIG. 1 shows the configuration of a computer system in this embodiment.
- the computer system can include, for example, a host 10 and a host 20 that operate information processing services, a client computer 70 that is used by a user who requests the service, and a management computer 200 that manages the computer system.
- the host 10 is configured by a general computer architecture including, for example, a physical CPU, a memory (main storage device), an input / output device, a network interface, and an internal bus that interconnects them.
- An OS (Operating System) 13 that directly controls physical processing resources is running on the host 10.
- the application administrator or user can directly control the logical configuration of the host 10 by using the OS 13 on the host 10 as the guest OS 13.
- Since the guest OS 13 can control the configuration of all physical processing resources of the host 10, the host 10 is referred to as a bare metal host 10 in this embodiment.
- a logical server configuration controlled by the guest OS 13 is referred to as an instance 11.
- the user can enjoy a desired information processing service by operating one or more applications 12 in the instance 11.
- the application may be abbreviated as “App”.
- the host 20 also has a configuration similar to that of the host 10, but includes virtualization software called a hypervisor 24 or a virtual machine monitor. With the virtualization software, physical resources are logically divided into one or more guest OS areas and provided to an application administrator or user. These guest OS areas are generally called virtual machines 21 or logical partitions, and the host 20 is called a virtual machine host 20.
- These guest OS areas may be, for example, virtual machines that share physical components such as CPU cores by time scheduling, or logical partitions that are fixedly associated with physical components. Here, for simplicity, they are not distinguished and are simply called virtual machines.
- a guest OS 23 is running in the virtual machine, and an application 22 for use by the user is running.
- a virtual machine corresponds to one logical server configuration. Similar to the bare metal host 10, the virtual machine is referred to as an instance 21.
- the computer system according to the present embodiment uses a physical server and storage having the same configuration in two ways: a bare metal host 10 and a virtual machine host 20. However, unless otherwise specified in the following description, the instance 11 on the bare metal host 10 and the instance 21 on the virtual machine host 20 are not distinguished from each other and are simply described as the instance 11.
- In a system that dynamically creates an instance in response to a user's creation request, generally called a cloud, a virtual machine is often used.
- the instance configuration can be separated from the physical resource configuration of the host.
- the server configuration for operating the application can be generated flexibly and immediately, and the configuration can be easily changed after generation.
- However, since a virtual machine shares the physical resources of the virtual machine host 20, it is inevitably affected by the performance of other virtual machines on the same virtual machine host 20. Such an environment is not suitable for applications with strict performance requirements, such as databases that require stable disk I/O (Input/Output) performance.
- a bare metal host 10 corresponding to a physical resource is prepared.
- a server environment having stable performance is provided by utilizing the bare metal host 10.
- resource allocation suitable for the performance requirements and life cycle of the application can be realized.
- the storage apparatus 100 has a function of providing storage resources to the instances 11 and 21.
- the storage apparatus 100 is a computer system specialized for data input / output processing, and includes a CPU, a memory, and the like.
- the CPU of the storage apparatus 100 operates a control program for controlling the configuration of storage resources.
- A physical storage medium such as an HDD (hard disk drive) is provided in the form of logical areas called volumes 101.
- The storage medium is not limited to HDDs; for example, flash memory devices, MRAM (Magnetoresistive Random Access Memory), PCM (Phase-Change Memory), ReRAM (Resistive Random Access Memory), FeRAM (Ferroelectric Random Access Memory), and the like may be used.
- the volume 101 is recognized as a storage resource (logical device) by the OS 13. Data required for the OS 13 and the application 12 is read from and written to the volume 101.
- the hypervisor 24 further divides the volume 101 into areas called virtual disks 102.
- Virtual disks are often files on a file system.
- the OS 23 on the virtual machine 21 recognizes the data area in the virtual disk 102 as the virtual volume 103.
- the hosts 10 and 20 and the storage apparatus 100 are connected by a SAN (Storage Area Network).
- An example of a SAN is FC_SAN.
- the FC_SAN includes one or a plurality of fiber channel switches (FC SW) 50, a fiber channel cable 51, and an HBA (host bus adapter) 52 that connects each data input / output device.
- SAN implementations are not limited to Fibre Channel; other types of devices and protocols that achieve the same purpose of large-capacity data communication, such as iSCSI (Internet Small Computer System Interface), FCoE (Fibre Channel over Ethernet (registered trademark)), and InfiniBand, may be used.
- the hosts 10 and 20 that provide the service, the client computer 70 that requests the service, and the management computer 200 that manages the computer system are connected to each other via a communication network.
- the communication network is physically connected by an Ethernet (registered trademark) switch 60 and a cable, and bi-directionally communicates application data and control information using a protocol such as TCP / IP.
- these communication networks are roughly divided into a service network 300 and management networks 301 and 302.
- In the service network 300, mainly traffic in which the service client 71 communicates with the applications 12 and 22 flows.
- the service network 300 transmits and receives data necessary for the information processing service.
- In the management networks, mainly traffic in which the management client 72 and the management programs 201, 203, 204, 205 communicate with the control components in the devices 10, 20, 50, 100 flows.
- the management network transmits and receives control data for managing the configuration of each information processing apparatus 10, 20, 50, 100.
- These communication networks may be physically separated, or may be configured by logical network division such as layer 3 switch setting or layer 2 switch setting (VLAN, virtual local area network).
- the management network 302 has a firewall 303 for mainly controlling communication between the application management server 201 and each of the hosts 10 and 20.
- the firewall 303 has a function of permitting or blocking a connection from a specific application management server 201 to a specific host, for example, in response to a request from the network management unit 204.
- the firewall 303 performs communication connection or blocking using an IP address, a host name, or the like.
- the client computer 70 is physically configured based on a general computer architecture such as a CPU, a memory, and a non-volatile storage device, similarly to the hosts 10 and 20.
- the client computer 70 includes a service client 71 and a management client 72.
- the service client 71 is software that receives provision of an information processing service from the applications 12 and 22 on the hosts 10 and 20.
- the management client 72 is software that connects to the management computer 200 and manages the configuration of the computer system.
- Each software 71 and 72 on the client computer 70 is not necessarily a dedicated program, and may be a general-purpose program such as a Web browser as long as it has a function to achieve the same purpose.
- the service client 71 may be prepared for each application, or may be configured such that a plurality of applications can be managed by one service client 71.
- the management client 72 may be prepared for each device to be managed, or may be configured to manage a plurality of target devices.
- the management computer 200 changes the configuration of the computer system in response to a user request transmitted from the client computer 70.
- the management computer 200 has a general computer architecture like other computers, and operates a management program necessary for realizing each function.
- the management programs in this embodiment are, for example, the application management server 201, the integrated resource management unit 202, the server management unit 203, the network management unit 204, and the storage management unit 205.
- Each management program may be subdivided for each function, or a plurality of programs may be combined into one.
- a configuration in which a plurality of management computers 200 cooperate to operate these management programs may be employed.
- At least a part of the management program may be distributed and arranged in a part of the management target.
- As an arrangement destination for example, there is an agent program on the host 10 or the FC SW 50.
- a plurality of management programs are prepared according to the scope and responsibilities of the devices managed by each administrator, and access control for each function of the management program is provided by account authentication.
- Application management server 201 provides a function of managing applications 12 and 22 on instances 11 and 21.
- The applications 12 and 22 have unique data structures, functions, and processing flows suited to their own processing. Therefore, a dedicated application management server 201 may be required to manage the applications 12 and 22.
- the application administrator who manages the application is familiar with the original management method and the operation of the application management server.
- By linking application management with resource configuration management, great convenience is obtained in that application administrators can make use of their know-how.
- the application management server may have a function of managing a part of the resource configuration. Therefore, in this embodiment, the application management server 201 and the integrated resource management unit 202 are configured to be able to change the configuration of the management target by mutually transmitting and receiving control information.
- For this cooperation, a plug-in is provided in the application management server 201, or an API (Application Programming Interface) is provided in the integrated resource management unit 202.
- the application management server 201 can designate a specific physical device as a management target and have the authority to change the configuration of the specific physical device exclusively with other application management servers.
- the hypervisor 24 can be considered as one application in that it is deployed on a physical host. Accordingly, a hypervisor management server for managing the hypervisor 24 may be provided.
- the resource configuration of each device 10, 20, 50, 60, 100 managed by the management computer 200 is managed by the integrated resource management unit 202 and each device management unit.
- the device management units are the server management unit 203, the network management unit 204, and the storage management unit 205.
- Each device management unit manages the configuration of the physical server device such as the hosts 10 and 20, the network switch such as the FC SW 50 and the Ethernet SW 60, and the storage device 100 via the management network 302.
- Each device management unit has functions for changing the configuration of logical resources operating on the managed devices, for acquiring and accumulating resource operation information, and for setting attribute values.
- Each device management unit may be a dedicated management program provided by a vendor who develops or manufactures a device.
- Each device management unit and the integrated resource management unit 202 transmit and receive control information using a management interface such as an API, for example.
- the integrated resource management unit 202 controls each device management unit in response to a request from the management client 72.
- the integrated resource management unit 202 creates an instance and changes a device configuration necessary for configuration change through each device management unit. A specific configuration of the integrated resource management unit 202 will be described later.
- a tenant is an example of a “resource partition”.
- Each tenant is associated with each piece of configuration information such as users and user groups belonging to the tenant, definition of administrator authority for managing the tenant, and resources available in the tenant.
- By using tenants, which are resource partitions, hardware and physical data centers can be shared among multiple user groups while maintaining independence of security and processing performance, and overall resource utilization can be increased.
- a tenant is a group of users who share resources, for example.
- an application management server 201 is prepared for each tenant. This is because the information processing system managed by the application management server 201 holds a lot of confidential information such as customer data and business data.
- the application management server 201 can identify physical resources managed by the application management server 201 itself, and can use these physical resources exclusively from other types of application management servers.
- One piece of configuration management information possessed by a tenant is the amount of resources.
- the usage fee charged to each user is a pay-per-use charge calculated based on the amount of resources used by the user.
- the physical resource amount held by the computer system does not necessarily match the amount that the user can contract (virtual resource amount).
- the total amount of resources that each user can contract can be larger than the actual physical resource amount.
- However, the frequency and quantity of physical resource purchases cannot be determined without regard to user resource demand. Furthermore, demand trends vary with the group to which a user belongs and the application used, and the amount of resource supply cannot be predicted while ignoring them.
- the simplest and most general method is a method of determining a certain amount of physical resources to be used for each user group to be contracted, that is, each tenant.
- Such a range of resources available to a tenant is referred to as a resource pool.
- User 301 creates instance 11 (or 21) in order to use a desired application.
- the physical resources necessary for operating the instance are managed in advance in the resource pool 303.
- the resource pool 303 is set by the device administrator using each device management unit on the management computer 200.
- the device managers are, for example, a server manager 305a, a network manager 305b, and a storage manager 305c.
- the device management units are, for example, a server management unit 203, a network management unit 204, and a storage management unit 205.
- In the resource pool 303, a combination of the resources necessary for configuring an instance, such as the host 10 (or 20), the isolated network 304, and the volume 101, is registered in advance by each device management unit.
- Various management operations such as instance creation and resource configuration change by the user 301 are performed by communicating with the management client 72, and are completed by the self-service of the user 301.
- the user 301 may query the application management servers 201a and 201b in advance for requirements required by the application to be installed, and specify the requirements during various management operations.
- the management client 72 has a function of requesting each configuration management component for at least operations related to configuration management of instances.
- the application management servers 201a and 201b set in advance what is managed as a dedicated area for each application from among the resources registered in the resource pool 303. As a result, the application management servers 201a and 201b accumulate the detailed configuration and operation information of physical resources, and can manage applications more efficiently.
- the application management servers 201a and 201b are referred to as application management servers 201.
- When the instance 11 (or 21) is prepared in the tenant 300, the user 301 notifies the application management server 201 of the desired instance and constructs the desired application inside that instance.
- The user 301 may collaborate with the application administrator 306 to examine details of the resource configuration, or may ask the application administrator 306 to design it. Further, the application management server 201 may create a new instance for constructing an application on a physical device that it can manage, or may migrate the instance to a physical device dedicated to the application management server 201.
- a dedicated application management server 201 is provided for each tenant 300, and an instance 11 (or 21) is created using physical resources deployed in the resource pool 303.
- Each application management server 201 occupies a specific resource (host, network, volume, etc.).
- The application management server 201 manages its target applications exclusively of other application management servers.
- Load can be leveled by migrating applications between virtual machine hosts according to the performance load of the instances 21, or by adding or deleting resources in the resource pool 303.
- a resource pool to be allocated to a tenant is dynamically configured based on application requirements and operation information.
- FIG. 3 is an explanatory diagram showing an overview of the resource management system and the resource management method according to this embodiment.
- One of the concepts characteristic of this embodiment is a container 310 that manages a part of associated resources and provides it as a virtual resource.
- the container 310 is a component corresponding to a physical resource (physical device) registered with the application management server 201 in the comparative example of FIG.
- the container 310 has a role of virtualizing an actual physical hardware configuration.
- When the alphabetic suffixes attached to reference numerals are omitted, the tenants 300a and 300b may be referred to as the tenant 300, the resource pools 303a and 303b as the resource pool 303, and the application management servers 201a and 201b as the application management server 201.
- the container 310 is used as a resource pool for each application.
- Each tenant manages resource information such as the contract resource amount for each tenant.
- the integrated resource management unit 202 configures all physical resources in advance as containers 310 and registers them as management targets in each application management server 201. As will be described later, it is not necessary to match the virtual resource configuration provided by the container 310 with the physical resource configuration registered in the container. Therefore, the resource amount of the container 310 can be defined exceeding the total amount of physical resources.
- Examples of physical resources as “resources” associated with the container 310 include a server (calculation resource), a network (communication resource), and a storage resource.
- the container 310 is configured by a combination of these physical resources, and can be registered as a management target of the application management server 201.
- a server cluster configured by combining a plurality of servers may be used.
- a logical partition (LPAR) host is a server resource that logically divides each component such as a CPU and a network adapter.
- the server cluster for example, there are a redundant virtual machine host 20, a redundant bare metal host 10, or a server group having a SMP (symmetric multiprocessing) configuration by combining buses of a plurality of servers.
- the network resource 311b is a range (isolated network) that can be communicated between hosts, such as a virtual LAN, subnet, or virtual private network controlled by layer 2 or layer 3.
- the network resource 311b may be an appliance that provides a network function, such as a firewall, a load balancer, and a VPN (Virtual Private Network) gateway.
- the storage resource 311c is, for example, a volume 101 provided by the storage apparatus 100, or a directory on the network file system.
- the above-mentioned resources 311a, 311b, and 311c may be specific devices provided in the data center, or may be server instances or object storage procured from an external IaaS cloud.
- the configuration of the container 310 is managed by the integrated resource management unit 202.
- the application management server 201 procures necessary containers 310 by requesting resources from the integrated resource management unit 202.
- the application management server 201 can occupy a desired container 310 and use the container 310 as a resource area dedicated to the application.
- the integrated resource management unit 202 controls configuration management such as which resources 311 are physically allocated to which containers 310.
- the integrated resource management unit 202 can change the physical configuration of the container 310 without being involved in the application management server 201. Therefore, according to the present embodiment, resources can be interchanged between a plurality of applications or a plurality of tenants without interfering with the application management system, and the resource utilization rate can be leveled.
- resources managed by the same application management server 201a can be adjusted between containers provided to different tenants.
- the resource of the first container 310 (1) managed by the management server 201a of application A is used for the instance 11a via the resource pool 303a of the first tenant 300a.
- the resources of the second container 310 (2) managed by the same application management server 201a are used for the two instances 11b via the resource pool 303b of the second tenant 300b.
- the user 301b using the instance 11b requests resource addition to the second container 310 (2).
- This resource addition request is sent from the client computer 70 to the management computer 200 via the communication network 301.
- resources are migrated from the first container 310 (1) to the second container 310 (2) managed by the same application management server 201a.
- Hereinafter, the first container 310 (1) whose resources are reduced may be referred to as the migration source container, and the second container 310 (2) to which resources are added as the migration destination container.
- The first container 310 (1) is selected because it satisfies predetermined migration source selection conditions such as the following (a selection sketch appears after this list):
- the resource utilization rate is below a reference value (or is the lowest);
- the container has the functions indispensable to the application that uses the migration destination container (it satisfies the application constraint conditions);
- no performance degradation such as a bottleneck has occurred.
- Conditions other than these may be added to the migration source selection conditions, and some of the above conditions may be excluded from the migration source selection conditions.
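- As an illustration only (the patent supplies no code), the migration source selection described above can be sketched in Python as follows; the field names (`utilization`, `supported_apps`, `has_bottleneck`) and the threshold value are assumptions introduced for this example, not terms from the specification.

```python
from dataclasses import dataclass

@dataclass
class Container:
    container_id: str
    utilization: float    # corrected resource utilization rate, 0.0-1.0
    supported_apps: set   # application constraint conditions (cf. 215h)
    has_bottleneck: bool  # derived from resource operation information

def select_migration_source(containers, app_type, threshold=0.3):
    """Return the container best satisfying the migration source selection
    conditions listed above, or None if no candidate qualifies."""
    candidates = [
        c for c in containers
        if c.utilization <= threshold     # utilization below the reference value
        and app_type in c.supported_apps  # satisfies application constraints
        and not c.has_bottleneck          # no performance degradation
    ]
    # Among the candidates, prefer the one with the lowest utilization.
    return min(candidates, key=lambda c: c.utilization, default=None)
```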
- the processing capacity of the second container 310 (2) increases and the load is reduced.
- FIG. 3 also shows an example (312b) of adjusting resources between different applications.
- the third container 310 (3) managed by the management server 201b of the application B is used for the instance 11a via the resource pool 303a of the first tenant 300a.
- the application management server 201b manages another container 310, but the container 310 is not used.
- Suppose that the user 301a desires to improve the response performance of the instance 11a that uses the third container 310 (3).
- the user 301a requests the management computer 200 to add resources to the third container 310 (3).
- a part of the resources virtually allocated to the first container 310 (1) moves to the third container 310 (3).
- the first container 310 (1) is selected as a resource migration source container in order to satisfy a predetermined migration source selection condition.
- the resource presented to the application management server 201 can be virtualized by the container 310 created and managed by the integrated resource management unit 202. Therefore, even when there is a change in the physical device configuration allocated to the container 310, such as addition or deletion of a storage medium, the application management server 201 has no knowledge of the change in the physical device configuration.
- the application management server 201 can manage applications regardless of changes in the physical device configuration.
- FIG. 4 shows a management component group that is provided in the management computer 200 and implements functions characteristic of the present embodiment.
- the integrated resource management unit 202 provides a function of allocating resources between containers according to the resource usage rate.
- the user request management unit 210 is a function for managing user requests.
- the user request management unit 210 receives a configuration change request such as a user instance creation request from the management client 72.
- the user request management unit 210 returns the result of the configuration change by the integrated resource management unit 202 to the management client 72.
- the user request management unit 210 controls the execution order, progress status, and the like.
- the integrated resource management unit 202 uses the instance management table 211 and the tenant management table 212 to hold configuration information provided to the user and the application management server 201.
- the virtual resource configuration provided to the tenant 300 or the application management server 201 is managed for each container 310 and held in the container management table 215.
- the resource configuration management unit 213 is a function for managing the resource configuration, and operates in cooperation with the application management server 201.
- the resource configuration management unit 213 controls the association between the container 310 and the physical device (resource) while referring to the operation information held in the resource operation information database 216.
- the resource configuration management unit 213 processes the operation information of each resource according to a management method set in advance, and stores the processing result in the performance evaluation table 214. Specifically, the resource configuration management unit 213 evaluates or processes resource operation information based on resource requirements (performance requirements) requested by the application, and stores the results in the performance evaluation table 214.
- the integrated resource management unit 202 transmits a control command for the configuration of each device 10, 20, 50, 100 to be managed to each device management unit 203, 204, 205.
- the device management units 203, 204, and 205 obtain the utilization rate and actual performance from each managed device and store them in the resource operation information database 216.
- the resource operation information database 216 has a function of providing a history of operation information for each component using the device identifier and time stored in the container management table 215 as keys.
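- A minimal sketch of such a history lookup, assuming a hypothetical SQLite schema (the table and column names are invented for illustration; the patent does not specify a storage format):

```python
import sqlite3

conn = sqlite3.connect("resource_operation.db")
conn.execute("""CREATE TABLE IF NOT EXISTS operation_history (
    device_id TEXT, ts INTEGER, metric TEXT, value REAL)""")

def history(conn, device_id, metric, since_ts):
    """Time-ordered operation history for one component, keyed by the
    device identifier and time as described above."""
    return conn.execute(
        "SELECT ts, value FROM operation_history "
        "WHERE device_id = ? AND metric = ? AND ts >= ? ORDER BY ts",
        (device_id, metric, since_ts)).fetchall()
```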
- the application management server 201 includes, for example, a resource management unit 206, a resource requirement definition table 207, and an application operation information database 208.
- the resource management unit 206 requests the integrated resource management unit 202 to change the configuration in order to manage the configuration of the target resource used by the application.
- the application management server 201 defines requirements suitable for the application, and holds these requirements in the resource requirement definition table 207. Since resource requirements differ depending on the type of application, the format of the resource requirement definition table 207 for managing resource requirements is not constant and may vary from application to application.
- the application operation information database 208 holds operation information such as application setting values and performance values.
- the application operation information includes, for example, a calculation resource, a connection established with the service client 71, a use status of the connection, and the like.
- the calculation resource is, for example, a process, thread, memory space, etc. used by the application.
- the connection usage status is, for example, response time, number of transactions, and the like.
- the operation information has a data structure such as a unique name and attribute based on the design of each application.
- FIG. 5 shows an instance management table 211 for storing setting values and attributes managed for each instance.
- Information described in the instance management table 211 is provided to the user via the client computer 70.
- the identification information of the physical device assigned to the instance is provided to the user in an abstracted manner. The user does not necessarily need to know which physical resource is actually allocated to the instance 11 used by the user.
- The instance management table 211 stores, for example, an instance identifier 211a, an instance owner 211b, a resource pool 211c to which the instance belongs, a container 211d, a resource configuration 211e, a network policy 211f, a grade 211g, consumption points 211h, an expiration date 211j, and, if necessary, an application management server identification name 211k, in association with one another.
- each instance 11 represents a logical computer having a guest OS.
- a virtual resource configuration 211e is defined in the table 211.
- the instance identifier 211a is information for uniquely identifying the instance 11.
- the owner 211b is information that uniquely identifies a user who uses the instance 11.
- the resource pool 211c is information for uniquely identifying the resource pool 303 to which the container 310 used by the instance 11 belongs.
- the container 211d is information that uniquely identifies the container 310 that provides resources to the instance 11.
- the resource configuration 211e is composed of components such as a CPU and a memory as in a general computer architecture.
- the contents of the resource configuration 211e can be individually set according to a user request.
- template data of the resource configuration 211e may be prepared.
- the total of the resource configurations 211e of the instances having the same container identifier 211d is set so as not to exceed the resource capacity provided by the container 310.
- the resource configuration 211e does not necessarily match the resource configuration of the physical device associated with the container 310.
- the network policy 211f sets a protocol to be used and communication permission or disapproval regarding communication between instances and communication with other hosts on the public communication network.
- the grade 211g sets a service level for the resource performance assigned to the instance.
- the instance 11 can be associated with contract information based on, for example, a contract between a data center operator and a user.
- the contract information includes, for example, a consumption point 211h for determining a usage fee, a usage time limit 211j indicating the remaining period of a valid contract, and the like.
- Information related to licenses, user application identification information, and the like may be managed in the table 211 in accordance with the subject and form of the contract.
- The user can install and operate a desired application on the instance by notifying the application management server 211k or the application administrator of the identifier 211d of the container used by the user's instance.
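- As a hedged sketch of the constraint noted above (the total of the resource configurations 211e of instances sharing one container must not exceed the container's capacity), the following Python fragment uses invented field names; it is illustrative, not the patent's implementation:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    instance_id: str   # 211a
    owner: str         # 211b
    container_id: str  # 211d
    cpu_cores: int     # part of the resource configuration 211e
    memory_gb: int     # part of the resource configuration 211e

def fits_container(instances, container_id, cap_cpu, cap_mem, new_cpu, new_mem):
    """True if adding (new_cpu, new_mem) keeps the sum of the resource
    configurations of instances in this container within its capacity."""
    shared = [i for i in instances if i.container_id == container_id]
    return (sum(i.cpu_cores for i in shared) + new_cpu <= cap_cpu and
            sum(i.memory_gb for i in shared) + new_mem <= cap_mem)
```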
- FIG. 6 shows a tenant management table 212 for managing tenants.
- the tenant management table 212 stores setting values and attributes managed for each tenant.
- For example, a tenant ID 212a, a user group ID 212b, a management role definition 212c, a resource pool ID 212d, an instance management table ID 212e, and remaining available points 212f are stored in association with one another.
- the tenant ID 212a is information for uniquely identifying the tenant 300.
- the user group ID 212b is information that uniquely identifies a user group.
- the management role definition 212c is information for identifying information that defines a user role and management authority.
- the resource pool ID 212d is information for uniquely identifying the resource pool 303 provided in the tenant.
- the instance management table ID 212e is information for uniquely identifying the instance management table 211 for managing instances operating in the tenant.
- the remaining available point 212f is information representing the usage fee of the available contract resource.
- FIG. 7 shows a container management table 215 for managing containers.
- the container management table 215 stores setting values and attributes managed for each container.
- the container management table 215 manages the correspondence between container information and physical devices (resources) notified to the application management server 201.
- the container ID 215a and the access authority 215b necessary for the configuration change are delivered to the application management server 201.
- Information 215c to 215k, which relates to the physical device configuration, is not disclosed to the application management server 201.
- Container ID 215a is information for uniquely identifying a container.
- the access authority 215b is information for uniquely identifying access authority information for accessing a container and managing a logical resource configuration.
- the application management server 215c is information that uniquely identifies the application management server 201 that manages applications that use containers.
- the container management table 215 holds, for example, a combination of the server 215d, the network 215e, and the storage 215g as the contents of the container 310.
- The total amount of resources constituting the container 310 is held in the virtual resource capacity 215j and the real resource capacity 215k, both of which are amounts managed by the integrated resource management unit 202. The real resource capacity 215k equals the total resource capacity physically possessed by the devices managed as one container, whereas the virtual resource capacity 215j is the virtual capacity provided to the application management server 201. Owing to the functions of the integrated resource management unit 202, the two need not match.
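- A small illustrative example of this separation (the values are invented): because the virtual capacity 215j presented to the application management server need not be backed one-to-one by the physical capacity 215k, a container can be overcommitted.

```python
from dataclasses import dataclass

@dataclass
class ContainerCapacity:
    container_id: str
    virtual_gb: float  # virtual resource capacity 215j
    real_gb: float     # real resource capacity 215k

    @property
    def overcommit_ratio(self):
        return self.virtual_gb / self.real_gb

c = ContainerCapacity("CNT-A021", virtual_gb=2048.0, real_gb=1024.0)
print(c.overcommit_ratio)  # 2.0: virtual capacity exceeds the physical total
```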
- the instance type 215f indicates the type of instance running on the container.
- The instance type 215f makes it possible to identify the configuration management method and the responsible device management unit.
- the instance type 215f for example, “VM” is set if the container is the virtual machine host 20, and “PM” is set if the container is the bare metal host 10.
- the application management server 201 cannot use the container 310 specified by the container ID 215a as a management target unless the access authority information specified by the access authority 215b is used. However, only a specific application may be available depending on the container configuration. Information for identifying an application that can use a container having a specific configuration is set in the application constraint 215h.
- Cases in which only a specific application can use a container include, for example:
- operation is guaranteed only on hardware certified by the application developer;
- a specific hardware function is required to operate the application;
- operation is guaranteed only with a specific firmware version.
- the application constraint field 215h is set manually or automatically by the administrator or the application management server 201, and the value is updated when the configuration of the container specified by the container ID 215a is changed.
- In this embodiment, by introducing the characteristic configuration called the container 310, the physical device configuration can be separated from the logical device configuration recognized by the application management server 201 or the tenant 300.
- Physical devices accommodated in a container can be interchanged, or entire containers can be exchanged, between different types of applications or between different tenants.
- the resource configuration allocated to the business system (application) can be dynamically changed according to the usage status without affecting the existing application management system or tenant management system.
- the resource utilization of the entire data center can be increased by performing resource allocation to containers based on different types of application requirements (conditions required by applications for resources). It is therefore important to evaluate and adjust different types of application requirements from a unified perspective.
- FIG. 8 shows the concept of a method for evaluating performance in consideration of application requirements.
- Although FIG. 8 is drawn as a two-dimensional plot for ease of explanation, the same applies when evaluating in any positive number N of dimensions according to the number of performance indexes considered.
- FIG. 8A shows the distribution of each of the containers 310a to 310d in a certain performance index (for example, CPU usage rate and disk IOPS).
- the range of the target performance value defined by the resource manager is indicated by a dotted line 321.
- One idea is to define a service level using a target performance value range 321 based on the performance of a physical device (resource) itself and present it to the user as a grade.
- the usage status of each container is evaluated based on the distance from the origin. For example, it can be analyzed that the resource usage rate of the container 310c is relatively low and the resource usage rate of the container 310b is high. Regarding the containers 310a and 310b, it is determined that the usage tendency is biased.
- the type of required resource differs depending on the type of application running on the container. Therefore, a method for uniformly evaluating the performance index without considering the type of application is not appropriate. That is, if the resource requirements defined by the application management server 201 and know-how for application management can be taken into consideration, the resource utilization rate can be determined fairly based on the demand trend of resources in the container.
- Fig. 8 (b) shows a performance evaluation method that takes into account the usage characteristics of different resources for each application.
- the resource usage trend of application A is shown as a performance characteristic curve 322.
- When the performance characteristic curve 322 of application A is superimposed on the performance values of the containers 310c and 310d in which application A operates, it can be seen that the container 310c has a resource utilization suited to application A, whereas the container 310d does not.
- The reason the container 310d does not match the performance trend of application A is estimated to be that a bottleneck exists in another resource element associated with the container 310d (for example, network performance), that too much CPU resource is allocated to the container 310d, or both.
- the same analysis is possible for application B.
- When the performance characteristic curve 323, which indicates the resource usage tendency of application B, is superimposed on the performance values of the containers 310a and 310b in which application B operates, it can be seen that the container 310a is not suited to the performance trend of application B.
- Improvement measures can therefore be taken, such as exchanging the containers 310a and 310d between application A and application B, or changing the resource configuration of the container 310a to approach that of the container 310d.
- the resource utilization rate of the entire data center can be maximized in consideration of application characteristics.
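- The following Python sketch illustrates one way to realize the two evaluations of FIG. 8: a distance-from-origin utilization score (FIG. 8(a)) and a fit between a container's performance vector and an application's characteristic profile (FIG. 8(b)). Cosine similarity is used here as a stand-in for the performance characteristic curve comparison; it is an assumption for illustration, not a formula given in the specification.

```python
import math

def utilization_score(measured, weights):
    """Distance from the origin of the weight-corrected performance vector;
    works for any number N of performance indexes."""
    return math.sqrt(sum((m * w) ** 2 for m, w in zip(measured, weights)))

def fit_to_app(measured, profile):
    """Cosine similarity between a container's performance vector and an
    application's demand profile: near 1.0 means the usage tendency matches."""
    dot = sum(m * p for m, p in zip(measured, profile))
    norm = (math.sqrt(sum(m * m for m in measured)) *
            math.sqrt(sum(p * p for p in profile)))
    return dot / norm if norm else 0.0

profile_a = [0.8, 0.6]                    # e.g. CPU-leaning demand of application A
print(fit_to_app([0.7, 0.5], profile_a))  # ~1.00: suited to application A
print(fit_to_app([0.9, 0.1], profile_a))  # ~0.86: biased usage, poorer fit
```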
- the first technical feature is the concept of “container” that separates the resource configuration targeted by the application management server 201 and the configuration of the physical device.
- a second technical feature is an expression format for uniformly comparing resource requirements of heterogeneous applications. This will be described later.
- a third technical feature is a configuration management method for accommodating physical devices (resources) among a plurality of containers.
- information related to the resource configuration required by the application management server 201 is held in the resource requirement definition table 207.
- Information regarding these resource requirements is not constant for each application, and the resource requirement expression format held by the application management server 201 is also different for each application.
- FIG. 9 shows an example of the resource requirement definition table 207 for defining resource requirements.
- the resource requirement definition table 207 shown in FIG. 9A manages the target 207a, resource item 207b, request value 207c, operation status 207d, and evaluation value 207e in association with each other.
- the target 207a is information for uniquely identifying the container 310 to be used by the application.
- the resource item 207b is information indicating a resource item associated with the target container 207a.
- the request value 207c is a condition that the application requests to the resource.
- the operating status 207d is an actual measurement value of the performance index indicated by the resource item 207b.
- the evaluation value 207e is information that determines whether or not the value of the operation status 207d satisfies the request value 207c.
- FIG. 9B shows a resource requirement definition table 207 (1) in another format.
- the target 207f and the evaluation value 207g are managed in association with each other, and other items (resource items, request values, operating status) are not managed.
- In the format of FIG. 9A, a performance requirement (207b) is defined for a specific component (resource), whereas in the format of FIG. 9B, performance requirements are defined using performance values that are not directly tied to specific resources, such as the number of processing threads.
- FIG. 10 shows a performance evaluation table 214 specific to the present embodiment, which is used by the resource configuration management unit 213 to calculate operation information for the resources as a whole.
- In the performance evaluation table 214, for each container ID 214a, the performance indexes 214b that need to be evaluated and their measured values 214c are held, based on the resource requirements determined for each type of application.
- a correction coefficient 214d is provided in order to compare and evaluate resource requirements by a plurality of different types of applications in common.
- the actual measurement value 214c of each performance index 214b is corrected by the correction coefficient 214d, and the correction result is recorded in the correction value 214e.
- The correction value 214e is calculated as a simple product of the measured value 214c and the correction coefficient 214d, and is a dimensionless value.
- The performance indexes 214b include not only indexes indicating how heavily a resource is utilized but also indexes indicating degradation of resource performance, for example indexes in which the magnitude of performance degradation appears as a numerical value, such as disk I/O response time; such indexes are used to determine overall performance bottlenecks.
- Which component (resource) is selected as the performance index 214b may be automatically set by the application management server 201 or may be set by an administrator.
- The correction coefficient 214d is an important value for fairly comparing the performance of the container 310 across a plurality of different applications, and serves as a weight that quantitatively expresses how important each performance index is.
- By means of the correction coefficient 214d, the per-application resource performance characteristics described above can be expressed.
- For example, the CPU usage rate (the ratio of CPU time to total processing time) in the container with ID CNT-A021 is 45%, while that in the container with ID CNT-A025 is 69%.
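- A worked example (the correction coefficients below are invented for illustration) showing how the correction can reverse a naive ranking of the two measured values above:

```python
measured = {"CNT-A021": 45.0, "CNT-A025": 69.0}  # measured values 214c (%)
coeff    = {"CNT-A021": 1.4,  "CNT-A025": 0.7}   # hypothetical coefficients 214d

# Correction value 214e = measured value 214c x correction coefficient 214d.
corrected = {c: measured[c] * coeff[c] for c in measured}
print(corrected)  # {'CNT-A021': 63.0, 'CNT-A025': 48.3} -- the ranking reverses
```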
- the correction coefficient 214d may be set by the hand of each application administrator, or may be automatically set by the application management server 201.
- the resource configuration management unit 213 may calculate the correction coefficient by statistically processing the operation information.
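- as a rough sketch of this correction step, with illustrative index names, measurements, and coefficients that are not taken from the embodiment:

```python
# A minimal sketch of how the performance evaluation table 214 derives
# correction values 214e: the measured value 214c of each performance
# index 214b is multiplied by the correction coefficient 214d, yielding
# a dimensionless value comparable across applications.
measurements = {               # 214b -> 214c for one container
    "cpu_usage": 0.45,         # ratio of CPU time to total processing time
    "disk_io_response": 12.0,  # ms; an index of performance degradation
}
coefficients = {               # 214d: per-application importance weights
    "cpu_usage": 2.0,
    "disk_io_response": 0.25,
}
correction_values = {          # 214e = 214c * 214d (dimensionless)
    index: measurements[index] * coefficients[index]
    for index in measurements
}
print(correction_values)  # {'cpu_usage': 0.9, 'disk_io_response': 3.0}
```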
- the above magnification is calculated using, for example, a formula based on the range of values that each performance value can take.
- the application magnification represents the importance and popularity of the application, and is evaluated based on the number of applications installed (number of instances).
- the evaluation value 207g is calculated from a corresponding equation.
- to determine the coefficients, simultaneous equations are established whose order equals the number of records held in the resource requirement definition table 207 for the same application. If the performance indexes 214b are mutually independent, the simultaneous equations have a solution, which determines the correction coefficients 214d. When one or more of the performance indexes 214b are mutually dependent, an approximate solution obtained by, for example, the least squares method may be used as the correction coefficient 214d.
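- a sketch of this determination, assuming each record contributes one linear equation relating its measured index values to its evaluation value 207g; the matrix entries below are illustrative only:

```python
# Determining the correction coefficients 214d from the simultaneous
# equations described above. Each record of the resource requirement
# definition table 207 is assumed to contribute one linear equation:
# (measured index values) . (coefficients) = (evaluation value 207g).
import numpy as np

# One row per record, one column per performance index 214b.
A = np.array([
    [0.45, 12.0],   # record 1: cpu_usage, disk_io_response
    [0.69,  8.0],   # record 2
    [0.30, 15.0],   # record 3
])
b = np.array([2.1, 2.2, 2.1])  # evaluation values 207g per record

# If the indexes are independent and the system is consistent, this
# recovers an exact solution; otherwise lstsq returns the
# least-squares approximation mentioned in the text.
coeffs, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(coeffs)  # correction coefficients 214d
```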
- FIG. 11 is a flowchart of processing for correcting the operation information of each container based on the resource requirement requested by the application, calculating the operation information for the entire resource, and determining a container that accommodates the resource.
- when the resource configuration management unit 213 receives, for example, a resource addition request from the application management server 201 (S10), the subsequent processing is triggered by that reception.
- the resource addition request includes the type of the application 12 (or 22). It may also include the amount of resources to be added.
- the predetermined trigger for starting the series of processes that changes the resource allocation by changing the configuration of the container 310 is not limited to the reception of the resource addition request described above. For example, this processing may be started when the system administrator explicitly instructs it.
- the integrated resource management unit 202 may periodically execute this process.
- when the user request management unit 210 receives a request for an instance configuration change, the corresponding application management server 211k may be identified from the instance management table 211 and this process executed.
- a configuration may also be employed in which the type and amount of additionally required resources are specified automatically by calculation based on the history of resource utilization.
- in step S11, the integrated resource management unit 202 determines whether the resource requirements defined in the application management server 201 have changed. Specifically, the resource configuration management unit 213 refers to the resource requirement definition table 207 via the resource management unit 206 on the application management server 201 and determines whether it has been updated. If it has been updated (S11: YES), the resource configuration management unit 213 recalculates the correction coefficients 214d in step S12 and updates the performance evaluation table 214 to the latest state.
- in step S13, the resource configuration management unit 213 acquires the actual measurement values 214c of the operation information from the resource operation information database 216, calculates the correction values 214e, and stores them in the performance evaluation table 214.
- in step S14, the resource configuration management unit 213 sequentially searches the records in the container management table 215 and starts the process of detecting containers with a low resource utilization rate.
- a container with a low resource utilization rate is identified by calculating, for each container, the sum of the correction values 214e associated with a positive influence determination (+) in the performance evaluation table 214, and selecting a container whose total is small.
- in step S15, the resource configuration management unit 213 refers to the application restriction 215h in the container management table 215 and determines whether the application requesting the additional resources can operate on the processing target container. If it determines that the processing target container violates the application restriction 215h (S15: YES), it returns to step S14 and selects, as the new processing target, another container with a low sum of the correction values 214e.
- in step S16, the resource configuration management unit 213 refers to the influence determination 214f of the performance evaluation table 214 and determines whether there is performance degradation greater than a predetermined reference value.
- as a method for determining the presence or absence of performance degradation, in addition to a threshold-based method, there is a method of comparing the magnitude of the negative performance influence degree 214f between a candidate container that has already been selected and the processing target container.
- if the negative performance influence degree 214f of the processing target container is larger than the corresponding value 214f of the candidate container, it may be determined that performance degradation exists.
- if no such degradation is found, the resource configuration management unit 213 selects the processing target container as a candidate container for reviewing the resource configuration in step S17.
- the number of containers selected as candidates is not limited to one; a plurality of containers may be selected. Candidate containers may also be selected in consideration of the container type 215f shown in the figure.
- in step S18, the resource configuration management unit 213 determines whether any unprocessed containers remain among those described in the container management table 215. If one exists (S18: YES), it returns to step S14 and repeats the above steps.
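- the selection loop of steps S14 through S18 can be summarized in the following sketch; the container objects and their fields are hypothetical stand-ins for the tables described above:

```python
# A minimal sketch of the candidate-selection loop (steps S14-S18).
# Containers are scanned in ascending order of the sum of their
# positive-influence correction values 214e; containers that violate
# the application restriction 215h or whose negative performance
# influence exceeds the reference value are skipped.
def select_candidates(containers, app, degradation_limit):
    candidates = []
    # S14: prefer containers with a small sum of positive (+)
    # correction values, i.e. a low resource utilization rate.
    for c in sorted(containers, key=lambda c: sum(c.positive_corrections)):
        # S15: skip if the requesting application may not run here
        # (restrictions assumed to be a set of disallowed applications).
        if app in c.application_restrictions:
            continue
        # S16: skip if negative performance influence exceeds the limit.
        if c.negative_influence > degradation_limit:
            continue
        # S17: select as a candidate for resource-configuration review.
        candidates.append(c)
    # S18: the loop above visits every container exactly once.
    return candidates
```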
- in step S19, the resource configuration management unit 213 changes the resource configuration of the candidate container once one or more candidate containers have been detected.
- a change in the resource configuration of a candidate container is typically a resource reduction.
- the processing for reducing the resource configuration of the container in step S19 includes, for example, deleting one or more nodes (physical servers) from a server cluster, removing volumes from storage resources, and lowering the communication priority set for network resources.
- alternatively, a process of consolidating instances into another container managed by the same application management server 201 may be executed.
- it is also possible to create a new container with a smaller resource capacity than the candidate container, and have the newly created small container take over the instances running on the candidate container together with the candidate container's identification information.
- in step S19, the identification information and instances of the candidate container are not deleted; instead, the record in the container management table 215 is updated to appropriate values by the resource configuration management unit 213. More specifically, when a node is deleted from the server cluster of the migration source container and a virtual server is running on that node, the virtual server is migrated to another node in the same server cluster, so it remains in the migration source container. At this time, the instance management table 211 and the virtual resource capacity 215j of the container management table 215 are not changed. As a result, the user of the migration source container and the application management server 201 can continue operating without noticing the physical configuration change made in step S19. On the other hand, the identifier of the deleted node is removed from the server field 215d of the record that manages the migration source container in the container management table 215, and the node's resources are subtracted from the actual resource capacity field 215k.
- in step S19, the surplus resources generated by reducing the candidate container are moved to the container for which the resource addition was requested in step S10.
- at this time, the resource configuration management unit 213 updates the record of the container management table 215 to appropriate values.
- the capacity of the added resources is added to the virtual resource capacity 215j.
- in the migration destination container, a process of adding the resources that constitute the container is performed. Thereafter, a process of leveling the load within the container, for example rearranging virtual machines within the virtual machine host cluster, may be performed.
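- the bookkeeping of step S19 and the subsequent resource move can be sketched as follows; the field and attribute names are illustrative renderings of the table fields described above:

```python
# A sketch of the step S19 bookkeeping: a node removed from the
# migration source container is subtracted from its actual resource
# capacity (field 215k) while the virtual resource capacity (215j)
# stays unchanged, and the surplus is added to the destination.
def move_node(src, dst, node):
    src.servers.remove(node.identifier)    # server field 215d
    src.actual_capacity -= node.capacity   # actual resource capacity 215k
    # src.virtual_capacity is deliberately untouched (215j), so the
    # user and the application management server notice no change.
    dst.servers.append(node.identifier)
    dst.actual_capacity += node.capacity
    dst.virtual_capacity += node.capacity  # capacity requested in S10
```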
- when the application management server 201 manages the container 310, access to a physical device is required.
- the application management server 201 accesses the physical device using the access information.
- as the access information, for example, a management IP address, an authentication account, and a device ID can be used.
- the application management server 201 can change the resource configuration of the container 310 safely and reliably by using appropriate access information.
- the configuration change here refers to, for example, a change in power supply state, a change in BIOS (Basic Input/Output System) settings such as boot options, a change in network adapter settings such as the physical address, or switching of input/output devices.
- the OS on the physical device is used directly. Therefore, by reconfiguring or reinstalling the OS, any number of new server environments can be created, and access information can be set for each of them.
- in step S19 of the process of FIG. 11 described above, when the container 310 consists of a single physical device (for example, when the bare metal host 10 is used), the same physical device must be registered again in a container of another application management server. For this reason, the access information must be transferred from the migration source application management server to the migration destination application management server.
- migration of access information does not mean that the access information used at the migration source can be used as-is at the migration destination; it means that access information is set in the migration destination application management server so that the migration destination application management server can use the migration destination container safely.
- otherwise, the migration source application management server that originally used the physical device could still access the migrated physical device, and unauthorized access, system failures, data loss, and the like could occur.
- for this reason, appropriate migration of the access information is indispensable.
- FIG. 12 is a flowchart showing processing for migrating access information.
- the access information may be referred to as access authority information.
- in step S30, the resource configuration management unit 213 invalidates the access authority that is set in the migration target physical device and used by the migration source application management server 201.
- the access authority is invalidated rather than deleted outright so that a valid access authority is not lost even if an unexpected failure occurs during the subsequent processing.
- in step S31, the resource configuration management unit 213 requests the resource management unit 206 of the migration source application management server 201 to delete the access authority for the migration target physical device.
- the firewall 303 on the management network may block the access path from the migration source application management server 201 to the migration target physical device.
- in step S32, the resource configuration management unit 213 creates a new access authority on the migration target physical device through the server management unit 203 and validates it.
- in step S33, the resource configuration management unit 213 notifies the migration destination application management server 201 of the new access authority generated in step S32 and configures it to use the new access authority from then on.
- at this time, it suffices that the firewall 303 on the management network permits the access path between the migration destination application management server 201 and the migration target physical device.
- in step S34, the resource configuration management unit 213 confirms that the access authority has been transferred appropriately.
- as a means of confirmation, for example, a connection test is performed from both the migration source and the migration destination application management servers 201 to each migration target physical device.
- if the confirmation fails, the resource configuration management unit 213 proceeds to step S36, restores the state to that at the start of this flowchart, and ends with an error.
- in step S35, the resource configuration management unit 213 deletes the old access authority invalidated in step S30 from the migration target physical device, and this process ends normally. Thereby, the migration source application management server 201 permanently loses access to the migrated physical device, and safety is maintained.
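- the sequence of FIG. 12 can be summarized in the following sketch; the device and server objects and their methods are hypothetical, and only the ordering of steps S30 through S36 follows the text:

```python
# A minimal sketch of the access-information migration (FIG. 12).
def migrate_access_authority(device, src_mgr, dst_mgr):
    old = src_mgr.access_authority(device)
    device.invalidate(old)              # S30: disable, don't delete yet
    src_mgr.delete_authority(device)    # S31: remove from source server
    new = device.create_authority()     # S32: create and validate new one
    dst_mgr.set_authority(device, new)  # S33: hand over to destination
    # S34: verify the transfer with connection tests from both sides.
    if src_mgr.can_connect(device) or not dst_mgr.can_connect(device):
        device.restore(old)             # S36: roll back and end with error
        raise RuntimeError("access authority migration failed")
    device.delete(old)                  # S35: old authority removed for good
```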
- the resources allocated in advance to the application program may be set to be less than the usage amount specified in the resource addition request.
- additional resources can be allocated at any time according to the subsequent usage status, so the computer system can be operated efficiently with few resources.
- the expression “the resource may be set to be smaller than the specified usage amount” may be rephrased as “the resource is set to be lower than the specified usage amount”.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Stored Programmes (AREA)
- Computer And Data Communications (AREA)
Claims (13)
- 1. A resource management system for managing resources, comprising:
a plurality of physical computers that provide the resources;
a plurality of virtual computers that execute at least one application program; and
an integrated resource management unit communicably connected to the plurality of physical computers and the plurality of virtual computers,
wherein the integrated resource management unit prepares in advance a plurality of containers, each of which manages a part of the resources associated with it and provides that part as a virtual resource, and manages resource partitions that define the usage range of the provided virtual resources,
each container is associated with one of the application programs, and
upon receiving a resource configuration change request for an application program operating in a resource partition, the integrated resource management unit migrates resources managed by another container to the container associated with that application program, wherein the amount of resources managed by the other container may become smaller than the virtual resources provided by the other container.
- 2. The resource management system according to claim 1, wherein the integrated resource management unit moves the affiliation of a first predetermined resource, which is a part of the resources associated with a first container provided to a first resource partition among the plurality of resource partitions, to a second resource partition by associating the first predetermined resource with a second container provided to the second resource partition among the plurality of resource partitions.
- 3. The resource management system according to claim 2, wherein the integrated resource management unit manages the utilization rates of the plurality of resources by the application programs, and selects the resource with the lowest utilization rate among the plurality of resources as the first predetermined resource.
- 4. The resource management system according to claim 3, wherein the integrated resource management unit manages performance degradation of the plurality of resources, and when the performance degradation of the resource with the lowest utilization rate is smaller than a predetermined value, selects, as the first predetermined resource, a resource whose utilization rate is the lowest and whose performance degradation is smaller than the predetermined value.
- 5. The resource management system according to claim 4, wherein the integrated resource management unit determines whether the first predetermined resource satisfies a predetermined constraint for use by the second application program, and selects, as the first predetermined resource, a virtual resource whose utilization rate is the lowest, whose performance degradation is smaller than the predetermined value, and which satisfies the predetermined constraint.
- 6. The resource management system according to claim 5, wherein the integrated resource management unit:
is communicably connected to a first application management unit for managing a first application program that uses the first container and to a second application management unit for managing a second application program that uses the second container; and
when moving the first predetermined resource from the first container to the second container,
invalidates the old access authority information that is set in the first predetermined resource and that the first application management unit uses to access the first predetermined resource,
deletes the old access authority information from the first application management unit,
generates new access authority information and sets the new access authority information in the first predetermined resource,
sets the new access authority information in the second application management unit,
confirms that the access authority for accessing the first predetermined resource has been transferred from the first application management unit to the second application management unit, and
deletes the old access authority information from the first application management unit.
- 7. The resource management system according to claim 1, wherein the integrated resource management unit calculates a performance evaluation value for evaluating the performance of the plurality of containers, independently of the type of the application program that uses each container, by performing a predetermined calculation using actual measurement values of predetermined performance indexes set in advance for each of the resources associated with the plurality of containers, and presents the calculated performance evaluation value.
- 8. The resource management system according to claim 4, wherein the integrated resource management unit calculates a performance evaluation value for evaluating the performance of the plurality of containers, independently of the type of the application program that uses each container, by performing a predetermined calculation using actual measurement values of predetermined performance indexes set in advance for each of the resources associated with the plurality of containers, and notifies the device that issues the predetermined request of the calculated performance evaluation value, and
the predetermined request requests addition of the resource determined from the performance evaluation value to be a bottleneck.
- 9. The resource management system according to claim 3, wherein the integrated resource management unit:
calculates a performance evaluation value for evaluating the performance of the plurality of containers, independently of the type of the application program that uses each container, by performing a predetermined calculation using actual measurement values of predetermined performance indexes set in advance for each of the resources associated with the plurality of containers;
manages performance degradation of the plurality of containers based on the calculated performance evaluation values; and
when the performance degradation of the container with the lowest utilization rate is smaller than a predetermined value, selects, as the first predetermined resource, a resource whose utilization rate is the lowest and whose performance degradation is smaller than the predetermined value.
- 10. A method of managing resources of an information processing system using a management computer, wherein
the information processing system comprises a plurality of physical computers that provide the resources, a plurality of virtual computers that execute at least one application program, and a management computer communicably connected to the plurality of physical computers and the plurality of virtual computers, and
the management computer prepares in advance a plurality of containers, each of which manages a part of the resources associated with it and provides that part as a virtual resource, and manages resource partitions that define the usage range of the provided virtual resources,
each container is associated with one of the application programs, and
upon receiving a resource configuration change request for an application program operating in a resource partition, the management computer migrates resources managed by another container to the container associated with that application program, wherein the amount of resources managed by the other container may become smaller than the virtual resources provided by the other container.
- 11. The resource management method according to claim 10, wherein the management computer moves the affiliation of a first predetermined resource, which is a part of the resources associated with a first container provided to a first resource partition among the plurality of resource partitions, to the second resource partition by associating the first predetermined resource with a second container provided to a second resource partition among the plurality of resource partitions.
- 12. A computer program for causing a computer to function as a management computer for managing resources of an information processing system, wherein
the information processing system comprises a plurality of physical computers that provide predetermined resources, a plurality of virtual computers that execute at least one application program, and a management computer communicably connected to the plurality of physical computers and the plurality of virtual computers, and
the computer program causes the computer to prepare in advance a plurality of containers, each of which manages a part of the resources associated with it and provides that part as a virtual resource, and to manage resource partitions that define the usage range of the provided virtual resources,
each container is associated with one of the application programs, and
upon receiving a resource configuration change request for an application program operating in a resource partition, the computer migrates resources managed by another container to the container associated with that application program, wherein the amount of resources managed by the other container may become smaller than the virtual resources provided by the other container.
- 13. The computer program according to claim 12, wherein the computer program causes the computer to move the affiliation of a first predetermined resource, which is a part of the resources associated with a first container provided to a first resource partition among the plurality of resource partitions, to the second virtual resource by associating the first predetermined resource with a second container provided to a second resource partition among the plurality of resource partitions.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/770,081 US9495195B2 (en) | 2013-10-04 | 2013-10-04 | Resource migration between virtual containers based on utilization rate and performance degradation |
JP2015540343A JP5976230B2 (ja) | 2013-10-04 | 2013-10-04 | リソース管理システムおよびリソース管理方法 |
PCT/JP2013/077063 WO2015049789A1 (ja) | 2013-10-04 | 2013-10-04 | リソース管理システムおよびリソース管理方法 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/077063 WO2015049789A1 (ja) | 2013-10-04 | 2013-10-04 | リソース管理システムおよびリソース管理方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015049789A1 true WO2015049789A1 (ja) | 2015-04-09 |
Family
ID=52778397
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/077063 WO2015049789A1 (ja) | 2013-10-04 | 2013-10-04 | リソース管理システムおよびリソース管理方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US9495195B2 (ja) |
JP (1) | JP5976230B2 (ja) |
WO (1) | WO2015049789A1 (ja) |
Families Citing this family (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6033985B2 (ja) * | 2014-03-07 | 2016-11-30 | 株式会社日立製作所 | 性能評価方法及び情報処理装置 |
US9703611B1 (en) * | 2014-03-21 | 2017-07-11 | Amazon Technologies, Inc. | Isolating resources for utilization by tenants executing in multi-tenant software containers |
JP6326913B2 (ja) * | 2014-03-31 | 2018-05-23 | 富士通株式会社 | 制御プログラムおよび制御方法 |
US9843478B2 (en) * | 2014-06-12 | 2017-12-12 | Dell Products L.P. | Template builder for end-to-end provisioning and lifecycle management of it infrastructure and services |
US9563475B2 (en) | 2014-09-30 | 2017-02-07 | International Business Machines Corporation | Merging connection pools to form a logical pool of connections during a preset period of time thereby more efficiently utilizing connections in connection pools |
JP6540356B2 (ja) * | 2015-08-10 | 2019-07-10 | 富士通株式会社 | システム複製制御装置およびシステムの複製制御方法 |
US9846602B2 (en) * | 2016-02-12 | 2017-12-19 | International Business Machines Corporation | Migration of a logical partition or virtual machine with inactive input/output hosting server |
US10034407B2 (en) * | 2016-07-22 | 2018-07-24 | Intel Corporation | Storage sled for a data center |
US10348813B2 (en) * | 2016-10-28 | 2019-07-09 | International Business Machines Corporation | Provisioning a bare-metal server |
US10684933B2 (en) * | 2016-11-28 | 2020-06-16 | Sap Se | Smart self-healing service for data analytics systems |
CN106878058B (zh) * | 2017-01-03 | 2020-11-06 | 新华三技术有限公司 | 一种服务节点复用方法及装置 |
US10409702B2 (en) * | 2017-03-20 | 2019-09-10 | Netapp, Inc. | Methods and systems for managing networked storage system resources |
US10884816B2 (en) | 2017-03-28 | 2021-01-05 | International Business Machines Corporation | Managing system resources in containers and virtual machines in a coexisting environment |
US10387212B2 (en) * | 2017-06-15 | 2019-08-20 | Microsoft Technology Licensing, Llc | Attribute collection and tenant selection for on-boarding to a workload |
CN109254843A (zh) * | 2017-07-14 | 2019-01-22 | 华为技术有限公司 | 分配资源的方法和装置 |
KR102052652B1 (ko) * | 2017-12-05 | 2019-12-06 | 광주과학기술원 | 클라우드 서비스 시스템 |
US11513864B2 (en) * | 2018-03-22 | 2022-11-29 | Amazon Technologies, Inc. | Adoption of existing virtual computing resources into logical containers for management operations |
US11086685B1 (en) | 2018-04-25 | 2021-08-10 | Amazon Technologies, Inc. | Deployment of virtual computing resources with repeatable configuration as a resource set |
JP6957431B2 (ja) * | 2018-09-27 | 2021-11-02 | 株式会社日立製作所 | Hci環境でのvm/コンテナおよびボリューム配置決定方法及びストレージシステム |
US11086686B2 (en) * | 2018-09-28 | 2021-08-10 | International Business Machines Corporation | Dynamic logical partition provisioning |
US11500874B2 (en) * | 2019-01-23 | 2022-11-15 | Servicenow, Inc. | Systems and methods for linking metric data to resources |
US11416264B2 (en) * | 2019-08-27 | 2022-08-16 | Sap Se | Software component configuration alignment |
US11356317B2 (en) * | 2019-12-24 | 2022-06-07 | Vmware, Inc. | Alarm prioritization in a 5G telco network |
US20230057210A1 (en) | 2020-02-26 | 2023-02-23 | Rakuten Symphony Singapore Pte. Ltd. | Network service construction system and network service construction method |
WO2021171211A1 (ja) * | 2020-02-26 | 2021-09-02 | ラクテン・シンフォニー・シンガポール・プライベート・リミテッド | リソースプール管理システム、リソースプール管理方法及びプログラム |
JP7389351B2 (ja) * | 2020-03-23 | 2023-11-30 | 富士通株式会社 | 移動対象コンテナ決定方法および移動対象コンテナ決定プログラム |
US11875046B2 (en) * | 2021-02-05 | 2024-01-16 | Samsung Electronics Co., Ltd. | Systems and methods for storage device resource management |
Family Cites Families (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7140020B2 (en) * | 2000-01-28 | 2006-11-21 | Hewlett-Packard Development Company, L.P. | Dynamic management of virtual partition computer workloads through service level optimization |
US7748005B2 (en) * | 2000-01-28 | 2010-06-29 | Hewlett-Packard Development Company, L.P. | System and method for allocating a plurality of resources between a plurality of computing domains |
US7694303B2 (en) * | 2001-09-25 | 2010-04-06 | Sun Microsystems, Inc. | Method for dynamic optimization of multiplexed resource partitions |
US7299468B2 (en) * | 2003-04-29 | 2007-11-20 | International Business Machines Corporation | Management of virtual machines to utilize shared resources |
US8560671B1 (en) * | 2003-10-23 | 2013-10-15 | Netapp, Inc. | Systems and methods for path-based management of virtual servers in storage network environments |
US7752623B1 (en) * | 2004-09-16 | 2010-07-06 | Hewlett-Packard Development Company, L.P. | System and method for allocating resources by examining a system characteristic |
US7765552B2 (en) * | 2004-09-17 | 2010-07-27 | Hewlett-Packard Development Company, L.P. | System and method for allocating computing resources for a grid virtual system |
US7752624B2 (en) * | 2004-12-21 | 2010-07-06 | Hewlett-Packard Development Company, L.P. | System and method for associating workload management definitions with computing containers |
US7458066B2 (en) * | 2005-02-28 | 2008-11-25 | Hewlett-Packard Development Company, L.P. | Computer system and method for transferring executables between partitions |
US7730486B2 (en) * | 2005-02-28 | 2010-06-01 | Hewlett-Packard Development Company, L.P. | System and method for migrating virtual machines on cluster systems |
US8020164B2 (en) * | 2005-12-22 | 2011-09-13 | International Business Machines Corporation | System for determining and reporting benefits of borrowed computing resources in a partitioned environment |
US8146091B2 (en) * | 2008-05-01 | 2012-03-27 | International Business Machines Corporation | Expansion and contraction of logical partitions on virtualized hardware |
US8856783B2 (en) * | 2010-10-12 | 2014-10-07 | Citrix Systems, Inc. | Allocating virtual machines according to user-specific virtual machine metrics |
US8707300B2 (en) * | 2010-07-26 | 2014-04-22 | Microsoft Corporation | Workload interference estimation and performance optimization |
US8667496B2 (en) * | 2011-01-04 | 2014-03-04 | Host Dynamics Ltd. | Methods and systems of managing resources allocated to guest virtual machines |
US8601483B2 (en) * | 2011-03-22 | 2013-12-03 | International Business Machines Corporation | Forecasting based service for virtual machine reassignment in computing environment |
JP5691062B2 (ja) | 2011-04-04 | 2015-04-01 | 株式会社日立製作所 | 仮想計算機の制御方法及び管理計算機 |
US8694995B2 (en) * | 2011-12-14 | 2014-04-08 | International Business Machines Corporation | Application initiated negotiations for resources meeting a performance parameter in a virtualized computing environment |
TW201407476A (zh) * | 2012-08-06 | 2014-02-16 | Hon Hai Prec Ind Co Ltd | 虛擬機資源配置系統及方法 |
US9858095B2 (en) * | 2012-09-17 | 2018-01-02 | International Business Machines Corporation | Dynamic virtual machine resizing in a cloud computing infrastructure |
CN104956325A (zh) * | 2013-01-31 | 2015-09-30 | 惠普发展公司,有限责任合伙企业 | 物理资源分配 |
US9251115B2 (en) * | 2013-03-07 | 2016-02-02 | Citrix Systems, Inc. | Dynamic configuration in cloud computing environments |
US8904389B2 (en) * | 2013-04-30 | 2014-12-02 | Splunk Inc. | Determining performance states of components in a virtual machine environment based on performance states of related subcomponents |
US9348654B2 (en) * | 2013-11-19 | 2016-05-24 | International Business Machines Corporation | Management of virtual machine migration in an operating environment |
US10142192B2 (en) * | 2014-04-09 | 2018-11-27 | International Business Machines Corporation | Management of virtual machine resources in computing environments |
US9280392B1 (en) * | 2014-10-02 | 2016-03-08 | International Business Machines Corporation | Resource substitution and reallocation in a virtual computing environment |
- 2013-10-04 WO PCT/JP2013/077063 patent/WO2015049789A1/ja active Application Filing
- 2013-10-04 JP JP2015540343A patent/JP5976230B2/ja active Active
- 2013-10-04 US US14/770,081 patent/US9495195B2/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010282420A (ja) * | 2009-06-04 | 2010-12-16 | Hitachi Ltd | 管理計算機、リソース管理方法、リソース管理プログラム、記録媒体および情報処理システム |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016167086A1 (ja) * | 2015-04-17 | 2016-10-20 | 日本電信電話株式会社 | サーバ選択装置、サーバ選択方法及びサーバ選択プログラム |
JPWO2016167086A1 (ja) * | 2015-04-17 | 2017-09-14 | 日本電信電話株式会社 | サーバ選択装置、サーバ選択方法及びサーバ選択プログラム |
US10445128B2 (en) | 2015-04-17 | 2019-10-15 | Nippon Telegraph And Telephone Corporation | Server selection device, server selection method, and server selection program |
JP2017037403A (ja) * | 2015-08-07 | 2017-02-16 | 株式会社日立製作所 | 計算機システム及びコンテナ管理方法 |
US11252228B2 (en) | 2015-10-19 | 2022-02-15 | Citrix Systems, Inc. | Multi-tenant multi-session catalogs with machine-level isolation |
JP2017107555A (ja) * | 2015-12-11 | 2017-06-15 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | ソフトウェア・コンテナ中のソフトウェアの識別を決定するための方法、システム、およびプログラム |
JP2017111761A (ja) * | 2015-12-18 | 2017-06-22 | エヌ・ティ・ティ・コミュニケーションズ株式会社 | コンテナ収容装置、コンテナ作成方法、及びプログラム |
JP2019503535A (ja) * | 2016-02-25 | 2019-02-07 | 華為技術有限公司Huawei Technologies Co.,Ltd. | 自動アプリケーション展開のための方法およびクラウド管理ノード |
US10824408B2 (en) | 2016-02-25 | 2020-11-03 | Huawei Technologies, Co., Ltd. | Method for automatic application deployment and cloud management node |
JP7455813B2 (ja) | 2018-08-28 | 2024-03-26 | ノボ・ノルデイスク・エー/エス | コンテナベースの医薬品投与ガイダンスを提供して、糖尿病を治療するためのシステムおよび方法 |
WO2021095943A1 (ko) * | 2019-11-15 | 2021-05-20 | 대구대학교 산학협력단 | 서비스 프로파일을 고려한 컨테이너의 배치 방법 |
CN111200595A (zh) * | 2019-12-20 | 2020-05-26 | 北京淇瑀信息科技有限公司 | 一种访问容器的权限管理方法、装置和电子设备 |
Also Published As
Publication number | Publication date |
---|---|
US9495195B2 (en) | 2016-11-15 |
US20160004551A1 (en) | 2016-01-07 |
JP5976230B2 (ja) | 2016-08-23 |
JPWO2015049789A1 (ja) | 2017-03-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5976230B2 (ja) | リソース管理システムおよびリソース管理方法 | |
JP5478107B2 (ja) | 仮想ストレージ装置を管理する管理サーバ装置及び仮想ストレージ装置の管理方法 | |
CN106168884B (zh) | 访问对象存储系统的计算机系统 | |
US8909767B2 (en) | Cloud federation in a cloud computing environment | |
US8656018B1 (en) | System and method for automated allocation of hosting resources controlled by different hypervisors | |
JP5174747B2 (ja) | 計算機システムおよび管理装置 | |
CN110955487A (zh) | Hci环境下的vm/容器和卷配置决定方法及存储系统 | |
US8051262B2 (en) | Storage system storing golden image of a server or a physical/virtual machine execution environment | |
US20110276963A1 (en) | Virtual Data Storage Devices and Applications Over Wide Area Networks | |
US20120233315A1 (en) | Systems and methods for sizing resources in a cloud-based environment | |
US20110078334A1 (en) | Methods and apparatus for managing virtual ports and logical units on storage systems | |
US8412901B2 (en) | Making automated use of data volume copy service targets | |
JP2006092322A (ja) | ファイルアクセスサービスシステムとスイッチ装置及びクオータ管理方法並びにプログラム | |
WO2014184893A1 (ja) | 計算機システム及びリソース管理方法 | |
CN101656718A (zh) | 网络服务器系统与其虚拟机器的建立与开启的方法 | |
US9535629B1 (en) | Storage provisioning in a data storage environment | |
JP2010257274A (ja) | 仮想化環境におけるストレージ管理システム及びストレージ管理方法 | |
JP6055924B2 (ja) | ストレージシステム及びストレージシステムの制御方法 | |
US9940073B1 (en) | Method and apparatus for automated selection of a storage group for storage tiering | |
KR101563292B1 (ko) | 가상 세션 관리자를 이용한 클라우드 가상화 시스템 및 방법 | |
JP2011170679A (ja) | 仮想計算機システムおよびその資源配分制御方法 | |
US8055867B2 (en) | Methods, apparatuses, and computer program products for protecting pre-staged provisioned data in a storage system | |
US11900160B2 (en) | Methods for managing storage quota assignment in a distributed system and devices thereof | |
JP6244496B2 (ja) | サーバストレージシステムの管理システム及び管理方法 | |
WO2014148142A1 (ja) | クラウド向け計算機システム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13895058 | Country of ref document: EP | Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 14770081 | Country of ref document: US |
ENP | Entry into the national phase | Ref document number: 2015540343 | Country of ref document: JP | Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 13895058 | Country of ref document: EP | Kind code of ref document: A1 |