WO2020116221A1 - ICT resource management device, ICT resource management method, and ICT resource management program - Google Patents
ICT resource management device, ICT resource management method, and ICT resource management program
- Publication number
- WO2020116221A1 (PCT/JP2019/045932)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- configuration information
- virtual
- resource management
- layer
- information
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/18—Network design, e.g. design based on topological or interconnect aspects of utility systems, piping, heating ventilation air conditioning [HVAC] or cabling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/70—Software maintenance or management
- G06F8/76—Adapting program code to run in a different environment; Porting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45562—Creating, deleting, cloning virtual machine instances
Definitions
- the present invention relates to an ICT (Information and Communication Technology) resource management device, an ICT resource management method, and an ICT resource management program.
- Patent Document 1 discloses an inter-provider batch service construction device that, in response to an order request for use of a communication service from a terminal that provides communication to a user, collectively provides one or more communication services disclosed by a communication service API for each wholesale service provider. The device holds a catalog describing the specifications of the communication wholesale services and cooperation rules defining the cooperation of the various communication services, collectively coordinates the communication service APIs corresponding to the plurality of ordered communication services based on the held catalog and cooperation rules to construct a cooperative service, and includes a collective construction function unit for providing the constructed cooperative service to the terminal.
- A distributed system to which virtualization technology is applied includes a physical layer, which is the aggregate of the physical nodes (e.g., computers, routers, sensors) in the distributed system, and a virtual layer, which is the aggregate of the virtual nodes configured and operating on the physical nodes.
- Configuration management is therefore required for both the physical layer and the virtual layer.
- the engineer of the distributed system designs both the physical layer and the virtual layer.
- A configuration change, such as allocating a new application to a virtual node, is made so that the service can be used by devices (e.g., IoT (Internet of Things) devices) connected to the physical nodes.
- Conventionally, an engineer has identified the relevant virtual node by manually collating the physical layer with the virtual layer.
- Because such a method requires manual labor, it obstructs automation of the operation of the distributed system and increases its operating cost.
- An object of the present invention is therefore to automate the operation of, and reduce the operating cost of, a distributed system to which virtualization technology is applied.
- The invention according to claim 1 is an ICT resource management device that manages physical nodes and virtual nodes, which are ICT resources, and comprises: a configuration information management unit that manages physical layer configuration information, which is configuration information regarding the physical nodes on the physical layer, and virtual layer configuration information, which is configuration information regarding the virtual nodes on the virtual layer; a layer mapping unit that performs mapping between the physical layer and the virtual layer; a blueprint creation unit that, in response to a configuration change request from an external device, creates a blueprint, which is the design information of the infrastructure necessary for the configuration change, based on the physical layer configuration information, the virtual layer configuration information, and the mapping information that is the result of the mapping; and an orchestrator unit that performs orchestration for the virtual layer by accessing and executing a program operable through an API based on the blueprint.
- The invention according to claim 4 is an ICT resource management method in an ICT resource management device that manages physical nodes and virtual nodes, which are ICT resources, wherein the ICT resource management device executes: a step of managing physical layer configuration information, which is configuration information regarding the physical nodes on the physical layer, and virtual layer configuration information, which is configuration information regarding the virtual nodes on the virtual layer; a step of performing mapping between the physical layer and the virtual layer; a step of creating a blueprint, which is the design information of the infrastructure necessary for a configuration change, based on the physical layer configuration information, the virtual layer configuration information, and the mapping information that is the result of the mapping; and a step of performing orchestration for the virtual layer by accessing and executing a program operable through an API based on the blueprint.
- According to these inventions, the association between the physical nodes and the virtual nodes becomes clear through the mapping between the physical layer and the virtual layer, and the virtual node targeted by a configuration change can be identified without human intervention. Operation can therefore be automated, making it possible to automate the operation of, and reduce the operating cost of, a distributed system to which virtualization technology is applied.
- The invention according to claim 2 is the ICT resource management device according to claim 1, wherein the blueprint includes a collection of catalogs indicating steps for providing a service, and parameters that are input information for the catalogs.
- According to this invention, a workflow for executing the orchestration can easily be configured.
- The invention according to claim 3 is the ICT resource management device according to claim 1 or 2, further comprising a monitoring unit that monitors the physical nodes and the virtual nodes, wherein the configuration information management unit generates the physical layer configuration information and the virtual layer configuration information from the information collected by the monitoring of the monitoring unit, and the layer mapping unit generates the mapping information from the information collected by the monitoring of the monitoring unit.
- According to this invention, the configuration information and the mapping information can be kept up to date, so a blueprint can be created and orchestration executed without human intervention, which contributes to the automation of operations.
- the invention according to claim 5 is an ICT resource management program for causing a computer to function as the ICT resource management device according to any one of claims 1 to 3.
- the construction of the ICT resource management device can be facilitated.
- The distributed system 100 including the ICT resource management device 1 of the present embodiment is a system to which virtualization technology is applied, and includes the ICT resource management device 1, the service provider terminal 2, the server 3, the edge 4, and the device 5.
- the distributed system 100 can manage a physical layer that is an aggregate of physical nodes and a virtual layer that is an aggregate of virtual nodes configured and operating on the physical nodes.
- the server 3, the edge 4, and the device 5 are physical nodes forming a physical layer.
- a VM (Virtual Machine) 7 arranged on the virtual layer shown in FIG. 1 is a virtual node in which the server 3 or the edge 4 is virtualized.
- the ICT resource management device 1 manages physical nodes and virtual nodes as ICT resources.
- The service provider terminal 2 is a terminal that requests configuration changes such as initial deployment and scale changes.
- the service provider terminal 2 makes the request through an API (Application Programming Interface).
- the API is a Northbound API between the ICT resource management device 1 and the service provider terminal 2.
- the service provider terminal 2 is used by a service provider or the like.
- the server 3 is a computer that executes processes related to service provision.
- the server 3 shown in FIG. 1 is located on the cloud platform A and executes processes for cloud services.
- Edge 4 is a relay device arranged on the NW (network) and corresponds to, for example, a router, a bridge, or a gateway. One or more applications 6 that execute processes related to service provision are arranged at the edge 4. The server 3 and the edge 4 are communicably connected.
- The device 5 is a device with which the end user uses the service and corresponds to, for example, an IoT device.
- the device 5 can use the service by connecting to the edge 4.
- the ICT resource management device 1 can collect information about physical nodes and virtual nodes. Further, the ICT resource management device 1 can perform mapping between the physical layer and the virtual layer by using the collected information (see the broken double-headed arrow in FIG. 1). Further, the ICT resource management device 1 can execute orchestration on the virtual layer. Specifically, it is possible to deploy a service and allocate resources to the VM 7.
- The ICT resource management device 1 includes functional units such as a request acquisition unit 11, a blueprint creation unit 12, a configuration information management unit 13, a layer mapping unit 14, a workflow execution unit 15, an API adapter unit 16, and a monitoring unit 17. Further, the ICT resource management device 1 stores the configuration information DB 21, the mapping information 22, the catalog group 23, and the blueprint 24 in a storage unit.
- the storage unit included in the ICT resource management device 1 may be, for example, inside the ICT resource management device 1 or outside the ICT resource management device 1.
- the request acquisition unit 11 acquires a configuration change request from the service provider terminal 2.
- the request acquired by the request acquisition unit 11 may be called “order information”.
- the configuration change request can be made not only from the service provider terminal 2 but also from the terminal of the maintenance person of the distributed system 100, for example.
- the service provider terminal 2 and the maintenance person's terminal are examples of external devices.
- the blueprint creation unit 12 creates the blueprint 24 corresponding to the order information acquired by the request acquisition unit 11.
- the blueprint 24 is the design information of the infrastructure required for the requested configuration change.
- The infrastructure refers to the components of the service's operating environment: for example, the ICT resources themselves, ICT resource setting information (e.g., VM name, IP address, host name), allocated resources, and the LBs (load balancers), FWs (firewalls), and containers set on the NW.
- the configuration information management unit 13 manages information about ICT resources as configuration information.
- the configuration information management unit 13 can collect information about physical nodes and virtual nodes by accessing the API for collecting resource information, for example.
- the resource information collection API is an API for providing resource information prepared by the orchestration target.
- the API for collecting resource information is a southbound API between the ICT resource management device 1 and the orchestration target.
- the information to be collected can be, for example, a MIB (Management Information Base) based on Simple Network Management Protocol (SNMP), but is not limited to this.
- Orchestration targets include, but are not limited to, physical nodes and virtual nodes.
- the interface provided by the orchestration target can be provided by, for example, a controller (not shown) that controls the orchestration target, or can be provided by each of the physical node and the virtual node.
- The physical layer configuration information 21a is configuration information regarding the physical nodes on the physical layer. As shown in FIG. 3, the physical layer configuration information 21a has, for example, the management items "node ID", "state", "host name", "IP address", "VMID", "use service", and "user", and the value of each management item is stored for each physical node.
- the management item of “node ID” stores the identifier of the target physical node.
- the management item of "state” stores the operating state of the target physical node ("OK" for normal, "NG” for failure).
- the host name of the target physical node is stored in the management item of “host name”.
- the IP address assigned to the target physical node is stored in the management item of “IP address”.
- the “VMID” management item stores the identifier of the VM operating on the target physical node.
- The management item of "use service" stores the identifier of the service available on the target physical node.
- Use services include, for example, cloud services and edge computing services, but are not limited to these. Further, the use services can include a service in which a plurality of physical nodes provide the same service.
- the “user” management item stores the identifier of the user who uses the service indicated by the corresponding “use service”.
- The user may be, for example, a corporation or an individual. Further, for example, when the corresponding physical node is an edge device, the user may be limited to the owner of the edge device.
- The management items of the physical layer configuration information 21a shown in FIG. 3 are examples, and more can be set.
- As management items of the physical layer configuration information 21a, it is possible to set the memory size, CPU frequency, power state, and VM name of the VMs operating on the target physical node.
- The name and ID of the resource pool used by the target physical node can also be set.
- As a management item of the physical layer configuration information 21a, the type, ID, and name of the network in which the target physical node is arranged can be set.
- The ID, type, and name of the folder used by the target physical node can also be set.
- As management items of the physical layer configuration information 21a, the storage capacity, ID, type, and name of the data store used by the target physical node can be set. Further, the ID and name of the data center controlling the target physical node can be set. Further, it is possible to set a user name and a password, which are authentication information of a user who accesses the target physical node.
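The management items above amount to one record per physical node. As a minimal sketch (the field names and the Python record shape are illustrative assumptions, not part of the specification), the physical layer configuration information 21a could be modeled as:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PhysicalNode:
    """One row of the physical layer configuration information 21a (illustrative)."""
    node_id: str                                       # identifier of the physical node
    state: str                                         # "OK" for normal, "NG" for failure
    host_name: str
    ip_address: str
    vm_ids: List[str] = field(default_factory=list)    # "VMID": VMs operating on this node
    services: List[str] = field(default_factory=list)  # "use service" identifiers
    users: List[str] = field(default_factory=list)     # "user" identifiers

# Example record for a server on the physical layer.
node = PhysicalNode("srv-01", "OK", "server3", "192.0.2.10",
                    vm_ids=["vm-101"], services=["cloud-A"], users=["corp-X"])
print(node.state)  # OK
```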
- The virtual layer configuration information 21b is configuration information regarding the virtual nodes on the virtual layer. As shown in FIG. 4, the virtual layer configuration information 21b has management items such as "node ID", "state", "VM name", "IP address", and "physical device ID", and the value of each management item is stored for each virtual node.
- the management item of “node ID” stores the identifier of the target virtual node.
- the management item of "state” stores the operating state of the target operating node ("OK" for normal operation, "NG” for failure).
- the name of the target virtual node is stored in the management item of “VM name”.
- the IP address assigned to the target virtual node is stored in the management item of “IP address”.
- the management item of “physical device ID” stores the identifier of the physical node in which the target virtual node is arranged.
- The management items of the virtual layer configuration information 21b shown in FIG. 4 are examples, and more can be set.
- A VMID, which is ID information of the target virtual node, can be set.
- As management items of the virtual layer configuration information 21b, it is possible to set the memory size and CPU frequency that are resources of the target virtual node.
- The power state of the target virtual node can be set as a management item of the virtual layer configuration information 21b.
- As management items of the virtual layer configuration information 21b, it is possible to set a user name and a password, which are authentication information of a user who accesses the target virtual node.
- A gateway used by the target virtual node, a VXLAN (Virtual eXtensible Local Area Network), and a static route can also be set.
- The host name of the physical node in which the target virtual node is arranged can be set as a management item of the virtual layer configuration information 21b.
- As a management item of the virtual layer configuration information 21b, it is possible to set information regarding the hypervisor that generates the target virtual node.
- Management items of the container application can also be set in the virtual layer configuration information 21b.
- As management items of the virtual layer configuration information 21b, the host name, label, state, and account ID of the container host registered for the container used by the target virtual node can be set.
- As management items of the virtual layer configuration information 21b, it is possible to set the ID, name, state, and scale (the number of servers used) of the service provided by the container used by the target virtual node.
- As management items of the virtual layer configuration information 21b, it is possible to set the volume mount, the ID (only when using Collinser NFS (Network File System)), and the image ID of the storage device provided in the container used by the target virtual node.
- As management items of the virtual layer configuration information 21b, it is possible to set the stack group, health state, stack ID, and used-service ID of the storage device provided in the container used by the target virtual node.
- The layer mapping unit 14 performs mapping between the physical layer and the virtual layer. Specifically, the layer mapping unit 14 uses the configuration information managed by the configuration information management unit 13 to determine which physical node (or which application 6 arranged on a physical node) each virtual node on the virtual layer is associated with.
- the ICT resource management device 1 stores, as the mapping information 22, the determination result of the association of the physical node and the virtual node by the layer mapping unit 14.
- For example, the layer mapping unit 14 can determine the association between a physical node and a virtual node by referring to the "VMID" management item of the physical layer configuration information 21a (FIG. 3) and the "physical device ID" management item of the virtual layer configuration information 21b (FIG. 4).
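The collation just described — matching the "VMID" item of the physical layer configuration information against the "physical device ID" item of the virtual layer configuration information — can be sketched as follows. This is a hedged illustration only; the record shapes and the fallback logic are assumptions, not taken from the specification:

```python
def build_mapping(physical_nodes, virtual_nodes):
    """Associate each virtual node with the physical node hosting it.

    physical_nodes: dicts with "node_id" and "vm_ids" (the "VMID" item)
    virtual_nodes:  dicts with "node_id" and "physical_device_id"
    Returns mapping information: {virtual node ID: physical node ID}.
    """
    # Index physical nodes by the VMs they host (the "VMID" management item).
    vm_to_physical = {}
    for p in physical_nodes:
        for vmid in p["vm_ids"]:
            vm_to_physical[vmid] = p["node_id"]

    mapping = {}
    for v in virtual_nodes:
        # Prefer the explicit "physical device ID"; fall back to the VMID index.
        phys = v.get("physical_device_id") or vm_to_physical.get(v["node_id"])
        if phys is not None:
            mapping[v["node_id"]] = phys
    return mapping

physical = [{"node_id": "srv-01", "vm_ids": ["vm-101", "vm-102"]}]
virtual = [{"node_id": "vm-101", "physical_device_id": "srv-01"},
           {"node_id": "vm-102", "physical_device_id": None}]
print(build_mapping(physical, virtual))  # {'vm-101': 'srv-01', 'vm-102': 'srv-01'}
```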
- FIG. 5 is an explanatory diagram of a blueprint for initial deployment.
- the blueprint 24 can be configured as a set of a service template and parameters.
- the service template can be a collection of catalogs.
- the catalog is a template of processes used for providing services, and is an element of the catalog group 23 stored in the ICT resource management device 1.
- the catalog itself is well known, and detailed description thereof will be omitted.
- the parameter is input information for each catalog.
- Suppose that the order information acquired by the request acquisition unit 11 relates to initial deployment.
- In this case, the blueprint creation unit 12 can configure a service template by selecting, for example, a catalog for VM creation, a catalog for NW setting, and a catalog for container setting from the catalog group 23.
- the parameters input to the catalog for VM creation are, for example, the number and type of VMs created.
- For example, parameters designating three VMs that function as Web servers and two VMs that function as AP servers (application servers), that is, five VMs in total, are input to the catalog for VM creation.
- the parameter input to the catalog for VM creation can be input by the service provider terminal 2, for example.
- The parameter input to the catalog for NW setting is, for example, IP address allocation.
- For example, parameters designating the IP addresses assigned to the created VMs are input to the catalog for NW setting.
- the parameters input to the NW setting catalog can be acquired from the distributed system 100, for example.
- The parameter input to the catalog for container setting is, for example, the setting method of the container used by the created VMs.
- For example, a parameter indicating the setting method of copy execution by Collinser is input to the catalog for container setting.
- the parameter input to the container setting catalog can be input by the service provider terminal 2, for example.
- the blueprint creation unit 12 selects a required catalog from the catalog group 23 according to the operation indicated by the order information acquired by the request acquisition unit 11, and forms a service template.
- the blueprint creating unit 12 can acquire the parameters input to the selected catalog from the order information and the distributed system 100.
- The blueprint creation unit 12 can request the parameters to be input to the selected catalogs from the service provider terminal 2 or the like that sent the order information, receive order information in response to that request, and acquire the parameters from the received order information.
- Since parameters such as IP address allocation are held by the distributed system 100 itself, the blueprint creation unit 12 can acquire the parameters to be input to the selected catalogs from the distributed system 100.
- When creating the blueprint 24, the blueprint creation unit 12 refers to the configuration information managed by the configuration information management unit 13 and the mapping information 22 stored by the layer mapping unit 14. That is, the blueprint creation unit 12 can create the blueprint 24 by collating the current state of the physical nodes and virtual nodes, determined from the physical layer configuration information 21a, the virtual layer configuration information 21b, and the mapping information 22, with the request from the service provider or the like indicated by the order information.
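The blueprint structure described above — a service template that is a collection of catalogs, plus parameters that are input information for each catalog — can be pictured as a simple data structure. The catalog names and parameter keys below are illustrative assumptions:

```python
# A blueprint 24 as a service template (ordered catalogs) plus per-catalog parameters.
blueprint = {
    "service_template": ["vm_creation", "nw_setting", "container_setting"],
    "parameters": {
        # e.g., three Web-server VMs and two AP-server VMs, five VMs in total
        "vm_creation": {"web": 3, "ap": 2},
        # IP allocation for the created VMs (obtainable from the distributed system)
        "nw_setting": {"ip_range": "192.0.2.0/28"},
        # container setting method for the created VMs
        "container_setting": {"method": "copy"},
    },
}

def total_vms(bp):
    """Number of VMs requested by the VM-creation catalog's parameters."""
    return sum(bp["parameters"]["vm_creation"].values())

print(total_vms(blueprint))  # 5
```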
- the workflow execution unit 15 executes the workflow according to the blueprint 24 created by the blueprint creation unit 12.
- the workflow is a process in which the steps indicated by the catalog included in the blueprint 24 are connected in a predetermined order.
- For example, the workflow for initial deployment links the steps in a determined order: VM creation → NW setting → container setting.
- By executing the workflow, orchestration is executed and resources are assigned to the ICT resources.
- the API adapter unit 16 is an interface for accessing a program that can be operated through the API in response to a command from the workflow execution unit 15 that executes a workflow.
- the API is a southbound API between the API adapter unit 16 (or the ICT resource management device 1 including the API adapter unit 16) and the orchestration target.
- The API adapter unit 16 can provide an interface for each orchestration target.
- That is, a plurality of API adapter units 16 can be prepared, one for each program that can be operated through the API.
- the workflow execution unit 15 can execute a workflow by executing a program that can be operated through the API.
- the combination of the workflow execution unit 15 and the API adapter unit 16 functions as an orchestrator unit that executes orchestration.
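The division of labor between the workflow execution unit 15 and the API adapter units 16 can be sketched as follows: the workflow execution unit walks the blueprint's steps in their predetermined order, and one adapter per orchestration target translates each step into that target's API call. The class interfaces here are illustrative assumptions, not the specification's design:

```python
class ApiAdapter:
    """Interface to one orchestration target's API (one adapter per target)."""
    def __init__(self, target):
        self.target = target
        self.calls = []  # record of instructions received, for illustration

    def execute(self, step, params):
        # In a real system this would invoke the target's southbound API.
        self.calls.append((step, params))
        return "done"

class WorkflowExecutor:
    """Executes the steps of a blueprint in their predetermined order."""
    def __init__(self, adapters):
        self.adapters = adapters  # {step name: ApiAdapter}

    def run(self, blueprint):
        results = {}
        for step in blueprint["service_template"]:
            params = blueprint["parameters"].get(step, {})
            results[step] = self.adapters[step].execute(step, params)
        return results

adapters = {s: ApiAdapter(s) for s in ("vm_creation", "nw_setting", "container_setting")}
executor = WorkflowExecutor(adapters)
bp = {"service_template": ["vm_creation", "nw_setting", "container_setting"],
      "parameters": {"vm_creation": {"web": 3, "ap": 2}}}
print(executor.run(bp))  # each of the three steps reports "done", in order
```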
- the monitoring unit 17 monitors the physical node on the physical layer and the virtual node on the virtual layer by, for example, SNMP.
- the monitoring result by the monitoring unit 17 indicates the usage status of the service which has been orchestrated and has become available.
- the monitoring result of the monitoring unit 17 can be transmitted to the configuration information management unit 13.
- the configuration information management unit 13 can collect information about physical nodes and virtual nodes based on the monitoring result of the monitoring unit 17.
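The monitoring path just described — monitor the nodes, pass the results to the configuration information management unit, and regenerate the configuration and mapping information from them — can be sketched as a simple refresh loop. The function names, record shapes, and the `fake_monitor` stub are illustrative assumptions (a real monitor would collect, e.g., SNMP data):

```python
def refresh_configuration(monitor, config_db, mapper):
    """Regenerate configuration and mapping information from monitoring results.

    monitor() is assumed to return per-node records collected by the monitoring unit.
    mapper() rebuilds the mapping information from the two configuration sets.
    """
    collected = monitor()
    config_db["physical"] = [r for r in collected if r["layer"] == "physical"]
    config_db["virtual"] = [r for r in collected if r["layer"] == "virtual"]
    config_db["mapping"] = mapper(config_db["physical"], config_db["virtual"])
    return config_db

# Stand-ins for the monitoring unit and the layer mapping unit.
def fake_monitor():
    return [{"layer": "physical", "node_id": "srv-01", "vm_ids": ["vm-101"]},
            {"layer": "virtual", "node_id": "vm-101", "physical_device_id": "srv-01"}]

def simple_mapper(physical, virtual):
    return {v["node_id"]: v["physical_device_id"] for v in virtual}

db = refresh_configuration(fake_monitor, {}, simple_mapper)
print(db["mapping"])  # {'vm-101': 'srv-01'}
```

Because the refresh can run after every monitoring cycle, the stored configuration and mapping information stay current, which is what lets the later blueprint creation proceed without human intervention.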
- the processing executed by the ICT resource management device 1 of this embodiment will be described with reference to FIG. This process is started, for example, when there is a request for an operation such as a configuration change from the service provider terminal 2 or the like.
- the request acquisition unit 11 outputs order information indicating operations such as configuration change to the blueprint creation unit 12 (step S1).
- the blueprint creation unit 12 requests the configuration information from the configuration information management unit 13 (step S2).
- The configuration information management unit 13 outputs the configuration information stored in the configuration information DB 21, specifically the physical layer configuration information 21a and the virtual layer configuration information 21b, to the blueprint creation unit 12 (step S3).
- the blueprint creation unit 12 requests the mapping information 22 from the layer mapping unit 14 (step S4).
- the layer mapping unit 14 outputs the mapping information 22 to the blueprint creating unit 12 (step S5).
- The blueprint creation unit 12 creates the blueprint 24 for the order information based on the configuration information and the mapping information (step S6). At this time, the blueprint creation unit 12 selects the catalogs required for the operation from the catalog group 23 according to the order information, and acquires the parameters to be input to the selected catalogs from the order information or the distributed system 100.
- the blueprint creation unit 12 sends the created blueprint 24 to the service provider terminal 2 or the like that sent the order information via the request acquisition unit 11, and requests confirmation of the blueprint 24 (step S7). If there is no problem in confirming the blueprint 24, the request acquisition unit 11 transmits information indicating approval from the service provider terminal 2 or the like to the blueprint creation unit 12 (step S8). Steps S1 to S8 constitute an infrastructure design process in the entire process executed by the ICT resource management device 1. Note that steps S7 and S8 may be omitted in order to speed up the process.
- the blueprint creation unit 12 creates a script for executing orchestration based on the approved blueprint 24 (step S9). Techniques for creating scripts are well known, and detailed description thereof will be omitted.
- the blueprint creation unit 12 outputs the created script to the workflow execution unit 15 (step S10). Steps S9 to S10 constitute a script creation process in the entire process executed by the ICT resource management device 1.
- The workflow execution unit 15 interprets the script acquired from the blueprint creation unit 12 (step S11). Techniques for interpreting scripts are well known, and detailed description thereof will be omitted. Next, the workflow execution unit 15 instructs the API adapter unit 16 prepared for each program operable through the API to execute the process (step S12).
- When the execution of the process is completed, the API adapter unit 16 notifies the workflow execution unit 15 of the completion of the process execution (step S13). Next, the workflow execution unit 15 notifies the blueprint creation unit 12 of the completion of the process execution (step S14).
- the completion of the process execution means the completion of the orchestration execution from the ICT resource management device 1 to the virtual layer, and the service after the configuration change is available. Steps S11 to S14 constitute an orchestration process in the entire process executed by the ICT resource management device 1.
- the monitoring unit 17 starts monitoring the physical nodes on the physical layer and the virtual nodes on the virtual layer.
- the monitoring unit 17 notifies the configuration information management unit 13 of the information collected by the monitoring (step S15).
- the configuration information management unit 13 notifies the layer mapping unit 14 of the information collected from the monitoring unit 17 (step S16).
- Steps S15 to S16 constitute a monitoring process in the entire process executed by the ICT resource management device 1.
- the configuration information management unit 13 generates configuration information from the information collected by the monitoring unit 17, and stores it in the configuration information DB 21.
- The layer mapping unit 14 also generates the mapping information 22 from the information collected by the monitoring unit 17, via the configuration information management unit 13. The generated configuration information and mapping information 22 are used to create a new blueprint.
- Because the ICT resource management device 1 can keep the configuration information and the mapping information 22 up to date, blueprint creation, orchestration execution, and operations can be automated without human intervention.
- The workflow of initial deployment can be defined using, for example, MSA (Micro Service Architecture), and can be designed in three layers: a service layer, a baseline, and microservices.
- the service layer registers each process of the workflow of initial deployment in a linked state in a predetermined order.
- The workflow of initial deployment can be divided into the following processes, performed in this order: catalog/configuration information acquisition A1, surplus resource acquisition/accommodation destination setting A2, VM creation A3, LB/FW group addition A4, container host registration A5, container image expansion A6, and state confirmation A7.
- the catalog/configuration information acquisition A1 is a step showing selection of a catalog from the catalog group 23 and acquisition of configuration information by the configuration information management unit 13.
- the surplus resource acquisition/accommodation destination setting A2 is a step of acquiring the surplus resources of each ICT resource and setting the physical node that is the accommodation destination of the created VM.
- the VM creation A3 is a step of creating a VM in which an application provided for a service is arranged.
- the LB/FW group addition A4 is a step showing that a load balancer group and a firewall group are added on the NW.
- the container host registration A5 is a step showing the environment construction for executing the container used by the created VM.
- a host machine for operating the container is additionally created, and the added host machine is registered in the controller that manages the container.
- the container image expansion A6 is a step of arranging a containerized application on a host machine for operating the container.
- The containerized application takes the form of an image file and is deployed on the designated host machine.
- the state confirmation A7 is a step of confirming the state of the created VM as the final stage of the initial deployment.
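The ordered, linked registration of steps A1 to A7 described above can be sketched as follows. The list-based data structure and the `next_step` helper are illustrative assumptions; the step names follow the text.

```python
# Sketch of the service-layer workflow for initial deployment (A1-A7),
# registered as linked processes in a fixed order.

INITIAL_DEPLOYMENT_WORKFLOW = [
    "A1: catalog/configuration information acquisition",
    "A2: surplus resource acquisition / accommodation destination setting",
    "A3: VM creation",
    "A4: LB/FW group addition",
    "A5: container host registration",
    "A6: container image expansion",
    "A7: state confirmation",
]

def next_step(current: str):
    """Return the step linked after `current`, or None at the end."""
    i = INITIAL_DEPLOYMENT_WORKFLOW.index(current)
    if i + 1 < len(INITIAL_DEPLOYMENT_WORKFLOW):
        return INITIAL_DEPLOYMENT_WORKFLOW[i + 1]
    return None
```

A workflow engine walking this list would execute each process in the predetermined order, finishing with state confirmation A7.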
- the baseline registers the parts of each process in the service layer.
- the registered parts can be configured as a catalog.
- Each process shown in the service layer can be realized as a combination of parts shown in the baseline.
- a surplus resource acquisition B1, VM creation B2, LB/FW group addition B3, container host registration B4, container image expansion B5, and status confirmation B6 are registered in the baseline of initial deployment.
- the surplus resource acquisition B1 registers the parts of the surplus resource acquisition/accommodation destination setting A2.
- the components registered in the surplus resource acquisition B1 include, for example, [1] host and VM list acquisition, [2] VM allocation specification acquisition, and [3] host allocation specification aggregation, but are not limited to these.
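The three parts of surplus resource acquisition B1 ([1] host and VM list acquisition, [2] VM allocation specification acquisition, [3] aggregation per host) can be sketched as below. The field names `cpu` and `memory_gb` and the record shapes are assumptions for illustration only.

```python
# Illustrative sketch of surplus resource acquisition B1: aggregate the
# allocation specs of the VMs placed on each host and subtract them from
# the host's capacity, yielding the surplus per host.

def surplus_resources(hosts, vms):
    """hosts: {host_id: {"cpu": int, "memory_gb": int}} (capacities)
    vms: list of {"host_id": ..., "cpu": int, "memory_gb": int} (allocations)"""
    surplus = {h: dict(spec) for h, spec in hosts.items()}  # [1] host/VM list
    for vm in vms:                                          # [2] VM allocation specs
        surplus[vm["host_id"]]["cpu"] -= vm["cpu"]          # [3] aggregate per host
        surplus[vm["host_id"]]["memory_gb"] -= vm["memory_gb"]
    return surplus

free = surplus_resources(
    {"host1": {"cpu": 16, "memory_gb": 64}},
    [{"host_id": "host1", "cpu": 4, "memory_gb": 16},
     {"host_id": "host1", "cpu": 2, "memory_gb": 8}],
)
```

The accommodation destination of a new VM (step A2) would then be chosen from the hosts whose surplus covers the requested specification.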
- VM creation B2 registers the parts of VM creation A3.
- The parts registered in the VM creation B2 include, for example, [1] acquisition of all VM names, [2] VM name determination, [3] IP payout, [4] VM clone, and [5] SSH (Secure Shell) individual setting, but are not limited to these.
- LB/FW group addition B3 registers the parts of LB/FW group addition A4. Parts registered in the LB/FW group addition B3 include, for example, [1] LB group addition and [2] FW group addition, but are not limited to these.
- the container host registration B4 registers the parts of the container host registration A5.
- Parts registered in the container host registration B4 include, for example, [1] token acquisition and [2] host addition, but are not limited to these.
- the container image expansion B5 registers the parts of the container image expansion A6.
- The parts registered in the container image expansion B5 include, for example, [1] image expansion; SSH individual setting can also be prepared as necessary, but the parts are not limited to these.
- Status check B6 registers the parts of status check A7.
- Parts registered in the status confirmation B6 include, for example, the [1] Ping test, but are not limited to these.
- Microservices register the functions that execute the parts registered in the baseline. As shown in FIG. 7, [host] information acquisition C1, [VM] information acquisition C2, [host] information acquisition C3, [NSX] IP payout C4, [VM] clone C5, [VM] SSH execution C6, [NSX] LB addition C7, [NSX] FW addition C8, [Rancher] token acquisition C9, [VM] SSH execution C10, [Rancher] container creation C11, and [VM] Ping test C12 are registered.
- the [host] information acquisition C1 acquires the information of the management item of the physical node (for example, see FIG. 3), and realizes the [1] host and VM list acquisition of the surplus resource acquisition B1.
- the [VM] information acquisition C2 acquires the information of the management item (for example, see FIG. 4) of the virtual node, and realizes the [2] VM allocation specification acquisition of the surplus resource acquisition B1.
- the [host] information acquisition C3 acquires information on the management item of the physical node (for example, see FIG. 3), and realizes [1] acquisition of all VM names of the VM creation B2.
- the [NSX] IP payout C4 pays out the IP address of the VM and realizes [3] IP payout of the VM creation B2.
- the [VM] clone C5 performs the VM duplication process to realize the [4] VM clone of the VM creation B2.
- [VM] SSH execution C6 realizes the [5] SSH (Secure Shell) individual setting of VM creation B2 by logging in to the VM and executing the command.
- [NSX] LB addition C7 adds LB on the network and realizes [1] LB group addition of LB/FW group addition B3.
- the [NSX]FW addition C8 adds the FW on the network and realizes the [2]FW group addition of the LB/FW group addition B3.
- The [Rancher] token acquisition C9 acquires a token that grants the authority to perform container processing, and realizes [1] token acquisition of the container host registration B4.
- the [VM] SSH execution C10 logs in to the VM and executes the command to realize [2] host addition of the container host registration B4.
- The [Rancher] container creation C11 creates a containerized application and realizes [1] image expansion of the container image expansion B5.
- the [VM] Ping test C12 confirms the communication and realizes the [1] Ping test of the state confirmation B6.
- Each process (A1 to A7) shown in the service layer corresponds to the catalog, and the parts registered in the baseline correspond to the catalog part.
- The various functions (C1 to C12) shown in the microservices can receive parameters as input for the corresponding parts registered in the baseline.
- The workflow execution unit 15 creates a VM based on the blueprint 24 created in advance and executes container image expansion. In this way, the workflow execution unit 15 can connect processes that are originally separate and can automate processing that requires parameters at each step.
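The three-layer relationship described above — a service-layer process realized by baseline parts, each part realized by a parameterized microservice function — can be sketched as follows. The registry dictionaries, lambdas, and the `run_part` wiring are hypothetical illustrations, not the patent's implementation.

```python
# Sketch of the three-layer MSA design: baseline parts are registered
# against the microservice functions that realize them, and a part is
# executed by passing parameters to its registered function.

MICROSERVICES = {
    "[NSX] IP payout C4": lambda p: f"ip for {p['vm_name']}",
    "[VM] clone C5":      lambda p: f"cloned {p['vm_name']}",
}

BASELINE = {  # baseline part -> microservice function realizing it
    "VM creation B2 [3] IP payout": "[NSX] IP payout C4",
    "VM creation B2 [4] VM clone":  "[VM] clone C5",
}

def run_part(part: str, params: dict) -> str:
    """Invoke the microservice registered for a baseline part,
    feeding it the parameters required by that part."""
    return MICROSERVICES[BASELINE[part]](params)

out = run_part("VM creation B2 [4] VM clone", {"vm_name": "vm-01"})
```

A service-layer process such as VM creation A3 would then be a fixed-order sequence of such `run_part` calls, which is what lets separate processes be connected and automated.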
- As a result, the association between physical nodes and virtual nodes becomes clear, and the virtual node targeted by a configuration change can be identified without manual intervention, enabling automated operation. The operation of the distributed system can therefore be automated, reducing the operating cost of a distributed system to which virtualization technology is applied.
- In particular, by configuring the blueprint 24 to include a collection of catalogs and parameters (see FIG. 5), the workflow for executing orchestration can be configured easily.
- Because providing the monitoring unit 17 keeps the configuration information and the mapping information up to date, blueprints can be created and orchestration executed without human intervention, contributing to the automation of operations. Further, the ICT resource management program described later facilitates construction of the ICT resource management device.
- It is also possible to create a program in which the processing executed by the ICT resource management device 1 according to the above embodiment is described in a computer-executable language. In this case, the same effects as those of the above embodiment are obtained by having a computer execute the program. Furthermore, the same processing as in the above embodiment may be realized by recording such a program on a computer-readable recording medium and having a computer read and execute the program recorded on that medium.
- An example of a computer that executes an ICT resource management program that realizes the same function as the ICT resource management device 1 will be described below.
- FIG. 8 is a diagram showing a computer that executes an ICT resource management program.
- the computer 1000 has, for example, a memory 1010, a CPU 1020, a hard disk drive interface 1030, a disk drive interface 1040, a serial port interface 1050, a video adapter 1060, and a network interface 1070. These units are connected by a bus 1080.
- the memory 1010 includes a ROM (Read Only Memory) 1011 and a RAM (Random Access Memory) 1012.
- the ROM 1011 stores, for example, a boot program such as BIOS (Basic Input Output System).
- the hard disk drive interface 1030 is connected to the hard disk drive 1090.
- the disk drive interface 1040 is connected to the disk drive 1100.
- a removable storage medium such as a magnetic disk or an optical disk is inserted into the disk drive 1100.
- a mouse 1110 and a keyboard 1120 are connected to the serial port interface 1050, for example.
- a display 1130 is connected to the video adapter 1060, for example.
- the memory 1010, the hard disk drive 1090, the disk drive 1100, and the storage medium inserted into the disk drive 1100 are specific hardware resources of the storage unit included in the ICT resource management device 1.
- the hard disk drive 1090 stores, for example, an OS 1091, an application program 1092, a program module 1093, and program data 1094.
- Each table described in the above embodiment is stored in, for example, the hard disk drive 1090 or the memory 1010.
- the ICT resource management program is stored in the hard disk drive 1090 as a program module in which a command executed by the computer 1000 is described, for example.
- the program module in which each process executed by the ICT resource management device 1 described in the above embodiment is described is stored in the hard disk drive 1090.
- the data used for information processing by the ICT resource management program is stored as program data in, for example, the hard disk drive 1090. Then, the CPU 1020 reads the program module 1093 and the program data 1094 stored in the hard disk drive 1090 into the RAM 1012 as necessary, and executes the above-described procedures.
- The program module 1093 and the program data 1094 related to the ICT resource management program are not limited to being stored in the hard disk drive 1090; they may, for example, be stored in a removable storage medium and read by the CPU 1020 via the disk drive 1100 or the like.
- Alternatively, the program module 1093 and the program data 1094 related to the ICT resource management program may be stored in another computer connected via a network such as a LAN (Local Area Network) or a WAN (Wide Area Network), and read by the CPU 1020 via the network interface 1070.
- the configuration information management unit 13 can manage the position information of the physical node.
- the configuration information management unit 13 can manage information regarding a user who uses the device 5 connected to the edge 4 as a physical node, or information regarding a tenant of the user. Therefore, the configuration information managed as the physical layer configuration information 21a can be the configuration information for each user or each tenant.
- the layer mapping unit 14 maps the physical layer and the virtual layer, the configuration information in units of users or units of tenants can be configuration information managed as the virtual layer configuration information 21b.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Geometry (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Computer Hardware Design (AREA)
- Evolutionary Computation (AREA)
- Computational Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Control Of Heat Treatment Processes (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
Description
As shown in FIG. 1, the distributed system 100 including the ICT resource management device 1 of this embodiment is a system to which virtualization technology is applied, and comprises the ICT resource management device 1, a service provider terminal 2, a server 3, an edge 4, and a device 5. The distributed system 100 can manage a physical layer, which is a collection of physical nodes, and a virtual layer, which is a collection of virtual nodes configured and running on the physical nodes. The server 3, the edge 4, and the device 5 are physical nodes constituting the physical layer. The VM (Virtual Machine) 7 arranged on the virtual layer in FIG. 1 is a virtual node obtained by virtualizing the server 3 or the edge 4.
The service provider terminal 2 is a terminal that requests configuration changes such as initial deployment and scaling. The service provider terminal 2 makes such requests through an API (Application Programming Interface); this API is a northbound API between the ICT resource management device 1 and the service provider terminal 2. The service provider terminal 2 is used by a service provider or the like.
The ICT resource management device 1 can execute orchestration on the virtual layer; specifically, it can deploy services and allocate resources to the VM 7.
As shown in FIG. 2, the ICT resource management device 1 includes functional units such as a request acquisition unit 11, a blueprint creation unit 12, a configuration information management unit 13, a layer mapping unit 14, a workflow execution unit 15, an API adapter unit 16, and a monitoring unit 17. The ICT resource management device 1 also stores a configuration information DB 21, mapping information 22, a catalog group 23, and a blueprint 24 in a storage unit. This storage unit may be, for example, inside the ICT resource management device 1 or in a device external to it.
The request acquisition unit 11 acquires a configuration change request from the service provider terminal 2. The request acquired by the request acquisition unit 11 may be called "order information". A configuration change request can be made not only from the service provider terminal 2 but also, for example, from a terminal of a maintainer of the distributed system 100. The service provider terminal 2 and the maintainer's terminal are examples of external devices.
The blueprint creation unit 12 creates the blueprint 24 corresponding to the order information acquired by the request acquisition unit 11. The blueprint 24 is design information of the infrastructure required for the requested configuration change. The infrastructure denotes the components of the service's operating environment, for example, the ICT resources themselves, their setting information (e.g., VM name, IP address, host name) and allocated resources, and various elements configured on the NW such as LBs (load balancers), FWs (firewalls), and containers.
The configuration information management unit 13 manages information about ICT resources as configuration information. It can collect information on the physical nodes and virtual nodes, for example, by accessing a resource-information-collection API, that is, an API through which each orchestration target provides its resource information; this is a southbound API between the ICT resource management device 1 and the orchestration target. The collected information can be, for example, a MIB (Management Information Base) obtained via SNMP (Simple Network Management Protocol), but is not limited to this. The configuration information managed by the configuration information management unit 13 is stored in the configuration information DB 21 and classified into physical layer configuration information 21a and virtual layer configuration information 21b.
Orchestration targets include, but are not limited to, physical nodes and virtual nodes. The interface provided by an orchestration target can be provided, for example, by a controller (not shown) that controls the target, or by each physical node and virtual node itself.
The "status" management item stores the operating status of the target physical node ("OK" if normal, "NG" if failed).
The "host name" management item stores the host name of the target physical node.
The "IP address" management item stores the IP address assigned to the target physical node.
The "VMID" management item stores the identifiers of the VMs running on the target physical node.
The "used service" management item stores the identifiers of the services available on the target physical node. Used services include, for example, cloud services and edge computing services, but are not limited to these; they can also include services that multiple physical nodes can provide identically.
The "user" management item stores the identifier of the user who uses the service indicated in the corresponding "used service" item. A user may be, for example, a corporation or an individual. If the target physical node is an edge device, only the owner of that edge device can be treated as the user.
As further management items of the physical layer configuration information 21a, the following can be set: the name and ID of the resource pool used by the target physical node; the type, ID, and name of the network in which it is placed; the ID, type, and name of the folder it uses; the storage capacity, ID, type, and name of the data store it uses; the ID and name of the data center that controls it; and the user name and password serving as credentials for users accessing it.
The "status" management item stores the operating status of the target virtual node ("OK" if normal, "NG" if failed).
The "VM name" management item stores the name of the target virtual node.
The "IP address" management item stores the IP address assigned to the target virtual node.
The "physical device ID" management item stores the identifier of the physical node on which the target virtual node is placed.
As further management items of the virtual layer configuration information 21b, the following can be set: the memory size and CPU frequency serving as resources of the target virtual node; its power state; the user name and password serving as credentials for users accessing it; the gateway, VXLAN (Virtual eXtensible Local Area Network), and static routes it uses; the host name of the physical node on which it is placed; information on the hypervisor that generates it; the ID, name, status, and scale (number of servers used) of the service provided by the container it uses; the volume mount, ID (only when Rancher NFS (Network File System) is used), and image ID of the storage device provided by that container; and the stack group, health status, stack ID, and IDs of the services used, for the storage device stack provided by that container.
Returning to FIG. 2, the layer mapping unit 14 performs mapping between the physical layer and the virtual layer. Specifically, based on the configuration information managed by the configuration information management unit 13, the layer mapping unit 14 determines which physical node on the physical layer (or which application 6 placed on that physical node) each virtual node on the virtual layer is linked to. The ICT resource management device 1 stores the determination result of this linkage as the mapping information 22. For example, the layer mapping unit 14 can determine the link between a physical node and a virtual node by referring to the "VMID" management item of the physical layer configuration information 21a (FIG. 3) and the "physical device ID" management item of the virtual layer configuration information 21b (FIG. 4).
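The layer mapping join described above — linking a virtual node to its physical node by matching the physical-layer "VMID" item against the virtual-layer "physical device ID" item — can be sketched as follows. The record shapes and field names are illustrative assumptions, not the patent's data model.

```python
# Sketch of the layer mapping unit 14: derive mapping information by
# joining virtual-layer records to physical-layer records on the
# physical device identifier (cf. FIGS. 3 and 4).

def map_layers(physical_nodes, virtual_nodes):
    """Return mapping information as {vm_name: host_name}."""
    host_by_device_id = {p["device_id"]: p["host_name"] for p in physical_nodes}
    return {
        v["vm_name"]: host_by_device_id[v["physical_device_id"]]
        for v in virtual_nodes
        if v["physical_device_id"] in host_by_device_id
    }

mapping = map_layers(
    [{"device_id": "dev-1", "host_name": "server3", "vm_ids": ["vm-01"]}],
    [{"vm_name": "vm-01", "physical_device_id": "dev-1"}],
)
```

The resulting dictionary corresponds to the mapping information 22, from which the virtual node affected by a configuration change on a given physical node can be identified without manual intervention.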
The workflow execution unit 15 executes a workflow according to the blueprint 24 created by the blueprint creation unit 12. A workflow is the ordered concatenation of the processes indicated by the catalogs included in the blueprint 24. For the initial-deployment blueprint 24 shown in FIG. 5, the workflow is concatenated in the order VM creation → NW setting → container setting. By the workflow execution unit 15 executing the workflow, orchestration is executed and resources are allocated to the ICT resources.
The API adapter unit 16 is an interface for accessing, in response to commands from the workflow execution unit 15, programs that can be operated through an API. This API is a southbound API between the API adapter unit 16 (or the ICT resource management device 1 that includes it) and the orchestration target. The API adapter unit 16 can be interface-connected per orchestration target, and multiple API adapter units can be prepared, one per program operable through the API. The workflow execution unit 15 executes a workflow by executing the programs operable through the API.
The combination of the workflow execution unit 15 and the API adapter unit 16 functions as an orchestrator unit that executes orchestration.
The monitoring unit 17 monitors the physical nodes on the physical layer and the virtual nodes on the virtual layer, for example, by SNMP. The monitoring results indicate the usage status of the services made available by the executed orchestration. The monitoring results can be sent to the configuration information management unit 13, which collects information on the physical nodes and virtual nodes from them.
The process executed by the ICT resource management device 1 of this embodiment is described with reference to FIG. 6. This process starts, for example, when an operation request such as a configuration change is received from the service provider terminal 2 or the like.
Next, as an example of an automated operation, the execution of the initial-deployment workflow is described in detail with reference to FIG. 7.
1 ICT resource management device
2 Service provider terminal
3 Server
4 Edge
5 Device
6 App (application)
7 VM
11 Request acquisition unit
12 Blueprint creation unit
13 Configuration information management unit
14 Layer mapping unit
15 Workflow execution unit (orchestrator unit)
16 API adapter unit (orchestrator unit)
17 Monitoring unit
21 Configuration information DB
21a Physical layer configuration information
21b Virtual layer configuration information
22 Mapping information
23 Catalog group
24 Blueprint
Claims (5)
- An ICT resource management device that manages physical nodes and virtual nodes serving as ICT resources, comprising:
a configuration information management unit that manages physical layer configuration information, which is configuration information on the physical nodes on a physical layer, and virtual layer configuration information, which is configuration information on the virtual nodes on a virtual layer;
a layer mapping unit that performs mapping between the physical layer and the virtual layer;
a blueprint creation unit that, in response to a configuration change request from an external device, creates a blueprint serving as design information of the infrastructure required for the configuration change, based on the physical layer configuration information, the virtual layer configuration information, and mapping information that is a result of the mapping; and
an orchestrator unit that executes orchestration for the virtual layer by accessing and executing, based on the blueprint, a program that can be operated through an API.
- The ICT resource management device according to claim 1, wherein the blueprint includes a collection of catalogs indicating processes for providing a service, and parameters serving as input information for the catalogs.
- The ICT resource management device according to claim 1 or claim 2, further comprising a monitoring unit that monitors the physical nodes and the virtual nodes, wherein
the configuration information management unit generates the physical layer configuration information and the virtual layer configuration information from information collected by the monitoring of the monitoring unit, and
the layer mapping unit generates the mapping information from information collected by the monitoring of the monitoring unit.
- An ICT resource management method in an ICT resource management device that manages physical nodes and virtual nodes serving as ICT resources, the ICT resource management device executing the steps of:
collecting physical layer configuration information, which is configuration information on the physical nodes on a physical layer, and virtual layer configuration information, which is configuration information on the virtual nodes on a virtual layer;
performing mapping between the physical layer and the virtual layer;
creating, in response to a configuration change request from an external device, a blueprint serving as design information of the infrastructure required for the configuration change, based on the physical layer configuration information, the virtual layer configuration information, and mapping information that is a result of the mapping; and
executing orchestration for the virtual layer by accessing and executing, based on the blueprint, a program that can be operated through an API.
- An ICT resource management program for causing a computer to function as the ICT resource management device according to any one of claims 1 to 3.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG11202105923VA SG11202105923VA (en) | 2018-12-04 | 2019-11-25 | Ict resource management device, ict resource management method and ict resource management program |
US17/311,073 US20220043946A1 (en) | 2018-12-04 | 2019-11-25 | Ict resource management device, ict resource management method, and ict resource management program |
AU2019393436A AU2019393436B2 (en) | 2018-12-04 | 2019-11-25 | ICT resource management device, ICT resource management method and Ict resource management program |
JP2020559060A JP7056759B2 (ja) | 2018-12-04 | 2019-11-25 | Ict資源管理装置、ict資源管理方法、および、ict資源管理プログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018227538 | 2018-12-04 | ||
JP2018-227538 | 2018-12-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020116221A1 true WO2020116221A1 (ja) | 2020-06-11 |
Family
ID=70975261
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/045932 WO2020116221A1 (ja) | 2018-12-04 | 2019-11-25 | Ict資源管理装置、ict資源管理方法、および、ict資源管理プログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US20220043946A1 (ja) |
JP (1) | JP7056759B2 (ja) |
AU (1) | AU2019393436B2 (ja) |
SG (1) | SG11202105923VA (ja) |
TW (1) | TWI807139B (ja) |
WO (1) | WO2020116221A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111800506A (zh) * | 2020-07-06 | 2020-10-20 | 深圳市网心科技有限公司 | 一种边缘计算节点部署方法及相关装置 |
WO2022254728A1 (ja) * | 2021-06-04 | 2022-12-08 | 楽天モバイル株式会社 | ネットワーク管理システム、ネットワーク管理方法およびプログラム |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113111374B (zh) * | 2021-05-13 | 2022-09-23 | 上海交通大学 | 一种端边云的工业微服务系统、数据交互方法及介质 |
TWI821038B (zh) * | 2022-11-22 | 2023-11-01 | 財團法人工業技術研究院 | 運算工作分派方法及應用其之終端電子裝置與運算系統 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015056182A (ja) * | 2013-09-13 | 2015-03-23 | 株式会社Nttドコモ | ネットワーク仮想化のための方法及び装置 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6499622B2 (ja) * | 2016-08-22 | 2019-04-10 | 日本電信電話株式会社 | 事業者間一括サービス構築装置及び事業者間一括サービス構築方法 |
-
2019
- 2019-11-25 SG SG11202105923VA patent/SG11202105923VA/en unknown
- 2019-11-25 WO PCT/JP2019/045932 patent/WO2020116221A1/ja active Application Filing
- 2019-11-25 AU AU2019393436A patent/AU2019393436B2/en active Active
- 2019-11-25 US US17/311,073 patent/US20220043946A1/en active Pending
- 2019-11-25 JP JP2020559060A patent/JP7056759B2/ja active Active
- 2019-12-03 TW TW108144070A patent/TWI807139B/zh active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015056182A (ja) * | 2013-09-13 | 2015-03-23 | 株式会社Nttドコモ | ネットワーク仮想化のための方法及び装置 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111800506A (zh) * | 2020-07-06 | 2020-10-20 | 深圳市网心科技有限公司 | 一种边缘计算节点部署方法及相关装置 |
CN111800506B (zh) * | 2020-07-06 | 2023-09-19 | 深圳市网心科技有限公司 | 一种边缘计算节点部署方法及相关装置 |
WO2022254728A1 (ja) * | 2021-06-04 | 2022-12-08 | 楽天モバイル株式会社 | ネットワーク管理システム、ネットワーク管理方法およびプログラム |
Also Published As
Publication number | Publication date |
---|---|
SG11202105923VA (en) | 2021-07-29 |
TWI807139B (zh) | 2023-07-01 |
AU2019393436B2 (en) | 2022-08-04 |
AU2019393436A1 (en) | 2021-06-24 |
US20220043946A1 (en) | 2022-02-10 |
JP7056759B2 (ja) | 2022-04-19 |
TW202037132A (zh) | 2020-10-01 |
JPWO2020116221A1 (ja) | 2021-09-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7056759B2 (ja) | Ict資源管理装置、ict資源管理方法、および、ict資源管理プログラム | |
AU2020200723B2 (en) | Systems and methods for blueprint-based cloud management | |
US11321130B2 (en) | Container orchestration in decentralized network computing environments | |
EP2926253B1 (en) | Diagnostic virtual machine | |
WO2016121802A1 (ja) | 仮想化管理・オーケストレーション装置、仮想化管理・オーケストレーション方法、および、プログラム | |
US20130339510A1 (en) | Fast provisioning service for cloud computing | |
CN105379185B (zh) | 用于创建和管理网络群组的方法和系统 | |
US20100293269A1 (en) | Inventory management in a computing-on-demand system | |
US9116874B2 (en) | Virtual machine test system, virtual machine test method | |
US20110072505A1 (en) | Process for Installing Software Application and Platform Operating System | |
WO2016121736A1 (ja) | オーケストレータ装置、システム、仮想マシンの作成方法及びプログラム | |
JP7056760B2 (ja) | Ict資源管理装置、ict資源管理方法、および、ict資源管理プログラム | |
JP6393612B2 (ja) | システムのバックアップ装置及びバックアップ方法 | |
JP2017068480A (ja) | ジョブ管理方法、ジョブ管理装置及びプログラム | |
CN114461303A (zh) | 一种访问集群内部服务的方法和装置 | |
JP7115561B2 (ja) | Ict資源管理装置、ict資源管理方法、および、ict資源管理プログラム | |
Aznar et al. | CNSMO: A Network Services Manager/Orchestrator tool for cloud federated environments | |
GORDIN et al. | Web portal development with different cloud containers: Docker vs. Kubernetes. | |
Comas Gómez | Despliegue de un gestor de infraestructura virtual basado en Openstack para NFV | |
SYN et al. | D3. 3: Infrastructures Management Toolkit API |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19894225 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020559060 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2019393436 Country of ref document: AU Date of ref document: 20191125 Kind code of ref document: A |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19894225 Country of ref document: EP Kind code of ref document: A1 |