WO2014026524A1 - Method and apparatus for allocating resources - Google Patents

Method and apparatus for allocating resources

Info

Publication number
WO2014026524A1
WO2014026524A1 · PCT/CN2013/079502 · CN2013079502W
Authority
WO
WIPO (PCT)
Prior art keywords
level
physical
node
isolation
affinity
Prior art date
Application number
PCT/CN2013/079502
Other languages
English (en)
French (fr)
Inventor
甘嘉栋
奇斯·安德鲁
霍尔姆·拉尔斯
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2014026524A1 publication Critical patent/WO2014026524A1/zh
Priority to US14/585,927 priority Critical patent/US9807028B2/en
Priority to US15/712,386 priority patent/US10104010B2/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/22 Indexing; Data structures therefor; Storage structures
    • G06F 16/2228 Indexing structures
    • G06F 16/2246 Trees, e.g. B+trees
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/70 Admission control; Resource allocation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/70 Admission control; Resource allocation
    • H04L 47/78 Architectures of resource allocation
    • H04L 47/782 Hierarchical allocation of resources, e.g. involving a hierarchy of local and centralised entities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/70 Admission control; Resource allocation
    • H04L 47/80 Actions related to the user profile or the type of traffic
    • H04L 47/805 QOS or priority aware

Definitions

  • the invention relates to the IT field, and in particular to a method and device for allocating resources.
  • the architecture is divided into three layers, which are the upper application layer, the intermediate infrastructure management layer, and the lower physical resource layer.
  • the upper-layer application requests virtual resources from the infrastructure management layer to ensure the running of the application, and the application needs one or more virtual resources.
  • the infrastructure management layer selects the appropriate physical resources and creates virtual resources on the selected physical resources for provision to the application. Because of the requirement to support dynamic allocation of resources, the application does not perceive which specific physical devices are deployed. However, whether different virtual resources are deployed on the same physical resource has a significant impact on the quality attributes such as reliability and performance of the carried services.
  • For example, two virtual resources deployed on the same physical resource can communicate most efficiently, so the application runs efficiently; two virtual resources deployed on different physical resources have better fault isolation, so the application runs more reliably, that is, the two virtual resources will not fail at the same time due to the failure of one host.
  • the running of an application can be measured in two dimensions, isolation and affinity.
  • Generally, the higher the isolation between two virtual resources, the lower the affinity, and the higher the reliability of the applications they support. Conversely, the higher the affinity, the lower the isolation, which means that the virtual resources cooperate more closely, so that the applications they support can achieve higher business performance.
  • In the prior art, the infrastructure management layer can only offer the upper-layer application two deployment modes: resources deployed separately or not separately.
  • For an application that must balance isolation and affinity at the same time, for example a telecommunications system using the standard hardware architecture of the Advanced Telecom Computing Architecture (ATCA), the deployment requirement is to guarantee a certain degree of isolation while also taking a certain degree of affinity into account; however, the infrastructure management layer cannot provide a deployment strategy that takes both isolation and affinity into account, and therefore cannot meet the deployment requirements of such applications. Summary of the invention
  • the embodiments of the present invention provide a method and an apparatus for allocating resources, which are used to solve the problem that an infrastructure management layer cannot provide a deployment strategy that takes both isolation and affinity into account, so as to meet application deployment requirements and guarantee the quality of service (QoS) of the service application.
  • an embodiment of the present invention provides a method for allocating resources, including:
  • the infrastructure management node provides an option for describing resource allocation to the service application, the option corresponding to at least two different levels of physical resources;
  • before the option is provided to the service application, the method further includes:
  • the physical resource configuration information is saved according to a tree hierarchy.
  • determining the physical resource of the level corresponding to the result includes: determining a node, in the tree hierarchical structure, of the level corresponding to the result, where the node is the saved information of the physical resource of that level.
  • if the result specifies isolation and affinity at different levels, and the level corresponding to the affinity is higher than the level corresponding to the isolation, the determining of the node of the level corresponding to the result in the tree hierarchical structure includes:
  • determining a node, in the tree hierarchical structure, of the level corresponding to the affinity, and determining a plurality of child nodes or a plurality of descendant nodes, under that node, of the level corresponding to the isolation, where the plurality of child nodes or descendant nodes are the physical resource information of the level corresponding to the result.
  • if the result specifies only the level corresponding to the affinity or only the level corresponding to the isolation, the determining of the node of the level corresponding to the result in the tree hierarchical structure includes:
  • determining one node in the tree hierarchical structure for the level corresponding to the affinity, or determining a plurality of nodes in the tree hierarchical structure for the level corresponding to the isolation.
  • the selection result of the option is determined by the service application according to the isolation and affinity requirements for guaranteeing its own quality of service (QoS), and the isolation and affinity of the levels corresponding to the options.
  • an embodiment of the present invention provides an infrastructure management node for allocating resources, including: an option providing unit, configured to provide an option for describing resource allocation to a service application, where the option corresponds to at least two different levels of physical resources;
  • a determining unit configured to determine, according to a result of the selection of the option that is fed back by the service application, a physical resource corresponding to the level of the selection result
  • the management node also includes: a sending unit, configured to send physical resource configuration request information to a physical host in a range managed by the infrastructure management node;
  • a receiving unit configured to receive physical resource configuration information sent by the physical host, where the physical resource configuration information includes a corresponding relationship of physical resources of different levels;
  • a storage unit configured to save the physical resource configuration information according to a tree hierarchical structure.
  • determining, by the determining unit, a node, in the tree hierarchical structure, of the level corresponding to the result, wherein the node is the saved information of the physical resource of that level.
  • if the result specifies isolation and affinity at different levels, and the level corresponding to the affinity is higher than the level corresponding to the isolation, the determining unit determining the node of the level corresponding to the result in the tree hierarchical structure includes:
  • determining a node, in the tree hierarchical structure, of the level corresponding to the affinity, and determining a plurality of child nodes or a plurality of descendant nodes, under that node, of the level corresponding to the isolation, where the plurality of child nodes or descendant nodes are the physical resource information of the level corresponding to the result.
  • if the result specifies only the level corresponding to the affinity or only the level corresponding to the isolation, the determining unit determining the node of the level corresponding to the result in the tree hierarchical structure includes:
  • determining one node in the tree hierarchical structure for the level corresponding to the affinity, or determining a plurality of nodes in the tree hierarchical structure for the level corresponding to the isolation.
  • the management node for allocating resources provided by the embodiments of the present invention has the infrastructure management node send an option for describing resource allocation to the service application and, according to the selection result fed back by the service application, establish a virtual resource on the physical resource corresponding to that result and provide it to the service application, which meets the deployment requirements of the service application and ensures the QoS of the service application. A minimal sketch of the data exchanged in this interaction is given below.
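  • The following is an illustrative sketch only: it models the option and selection-result messages exchanged between the service application and the infrastructure management node. The class and field names (ResourceAllocationOption, SelectionResult, LEVELS) are hypothetical and not defined by this disclosure.

```python
from dataclasses import dataclass, field
from typing import List

# The five physical levels named in the embodiments, from smallest to largest scope.
LEVELS = ["host", "chassis", "rack", "site", "geographic"]

@dataclass
class ResourceAllocationOption:
    """One option offered by the infrastructure management node."""
    level: str   # one of LEVELS
    kind: str    # "affinity" or "isolation"

@dataclass
class SelectionResult:
    """The selection fed back by the service application, e.g. 'same chassis, different hosts'."""
    chosen: List[ResourceAllocationOption] = field(default_factory=list)

# The node offers every (level, kind) combination; the application picks the
# subset that matches the isolation/affinity needed to guarantee its QoS.
offered = [ResourceAllocationOption(l, k) for l in LEVELS for k in ("affinity", "isolation")]
selection = SelectionResult([ResourceAllocationOption("chassis", "affinity"),
                             ResourceAllocationOption("host", "isolation")])
```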
  • FIG. 1 is a structural diagram of an application environment according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of Embodiment 1 of the present invention; FIG. 3 is a diagram showing the relative positional relationship of physical resources in Embodiment 1 of the present invention.
  • FIG. 4 is a flowchart of Embodiment 2 of the present invention.
  • FIG. 5 is a structural diagram of a management node in Embodiment 3 of the present invention.
  • FIG. 6 is a structural diagram of hardware components of a management node in Embodiment 3 of the present invention.
  • the application environment of the embodiment of the present invention can be divided into three layers, which are an upper application layer, an intermediate infrastructure management layer, and a lower physical resource layer.
  • An application is a program that runs on the application layer.
  • the runtime contains one or more processes, which may be distributed on one or more virtual machines (VMs).
  • the infrastructure management layer virtualizes physical resources and provides virtual resources such as VMs, VM clusters, virtual volumes, and virtual networks.
  • the VM cluster is a grouping of VMs, and each application corresponds to one VM cluster.
  • the VM is uniformly scheduled and managed by the infrastructure management layer and attached to the physical host.
  • a physical host can establish one or more VMs, which can be fixed on one physical host or migrated to another physical host.
  • the physical resource layer provides operations on physical hosts, such as installation, deployment, upgrade, power-on, and so on.
  • in the ATCA hardware architecture, the physical resource layer includes hosts, chassis, and racks.
  • the method disclosed in the foregoing embodiments of the present invention may be implemented in, or by, a central processing unit.
  • the central processor may be an integrated circuit chip with signal processing capabilities.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the central processing unit or an instruction in the form of software.
  • the foregoing central processing unit may be a general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory, and the central processor reads the information in the memory, and combines the hardware to complete the steps of the foregoing method.
  • Embodiment 1:
  • the process of allocating resources in the embodiment of the present invention is as follows:
  • S101: the infrastructure management node provides an option for describing resource allocation to the service application, the option corresponding to at least two different levels of physical resources;
  • physical resources include, but are not limited to, physical hosts, chassis, racks, sites, and geographic disaster tolerance.
  • the physical host includes a central processing unit (CPU), a memory, and a hard disk
  • the site includes a local area network and a cross-local area network.
  • the virtual resource is included in the embodiment of the present invention, including but not limited to a virtual machine.
  • the infrastructure management node provides an option for describing the resource allocation to the business application, and the infrastructure management node can be a server, and the business application is an application layer application.
  • Figure 3 is a diagram showing the relative positional relationship between physical resources of a tree structure.
  • the root nodes of the two trees in Figure 3 are "Geographic1" and "Geographic2" respectively, and the successor nodes of the root nodes are "Site", "Rack", "Chassis" and "Host".
  • Site refers to a site that contains a large number of physical devices, such as data center devices placed in the same building.
  • Geographic refers to geographical disaster tolerance, that is, the geographical distance between the two regions should meet the requirements of natural disaster disaster tolerance.
  • the levels of host, chassis, rack, site, and geographic disaster tolerance go from small to large: a chassis can contain multiple hosts, a rack can contain multiple chassis, a site contains multiple racks, and a geographic disaster tolerance area contains multiple sites.
  • the above five different levels in the physical sense can clearly correspond to a certain degree of physical isolation and affinity. For example, if the same host runs different VMs, the affinity is the highest and the isolation is the lowest. Once the host fails, the VMs in the host cannot run.
  • If different VMs run on different hosts but within the same chassis, the hosts are connected to each other through the chassis, communicate with each other, and share the other components in the chassis, such as power supplies, fans and management modules; the affinity between these VMs is lower than when the same host runs the different VMs, and the isolation is higher: if one host fails, the VMs on the other hosts work as usual, but once the power supply or fans in the chassis fail, all the hosts in the chassis fail.
  • If different VMs are in different chassis but all within the same rack, the affinity between the VMs is not as high as when the VMs are in the same chassis, but the isolation is higher than when the VMs are in the same chassis; once a power supply or fan in one chassis has a problem, the remaining chassis in the rack can work normally. A sketch of how the levels order isolation and affinity follows below.
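  • For illustration only, the ordering described above can be captured as a simple ranking over the five levels; the function names and the LEVELS list are hypothetical, not part of the disclosure.

```python
# Rank the five physical levels by affinity and isolation: the smaller the
# shared physical scope, the higher the affinity and the lower the isolation.
LEVELS = ["host", "chassis", "rack", "site", "geographic"]

def affinity_rank(level: str) -> int:
    return len(LEVELS) - LEVELS.index(level)   # highest for "host"

def isolation_rank(level: str) -> int:
    return LEVELS.index(level) + 1             # highest for "geographic"

assert affinity_rank("host") > affinity_rank("chassis") > affinity_rank("rack")
assert isolation_rank("geographic") > isolation_rank("site") > isolation_rank("rack")
```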
  • before providing the option for describing resource allocation to the service application, the infrastructure management node may also send physical resource configuration request information to one or more physical hosts within its jurisdiction. If there is only one physical host within the management scope, the request message only needs to be sent to that physical host; if there are multiple physical hosts within the management scope, the infrastructure management node multicasts the request information, in real time or periodically, to all the physical hosts within its scope. The configuration of the different physical hosts may change, for example when hosts are powered on or off at different times or when a chassis or rack is replaced, so by sending requests in real time or periodically the infrastructure management node obtains the resource configuration information and updates the resource configuration it has saved.
  • After receiving the resource configuration request, the physical host sends its own configuration information in the network to the management node.
  • the infrastructure management node receives the physical resource configuration information sent by the one or more physical hosts, where the physical resource configuration information includes the correspondence between different levels.
  • the host A belongs to the chassis A
  • the chassis A belongs to the rack A
  • the rack A belongs to the site A
  • the site A belongs to the geographic disaster-tolerant area A.
  • the information reported by host A is "Host A - Chassis A - Rack A - Site A - Geographic disaster tolerance A"
  • host B belongs to chassis B
  • chassis B belongs to rack A
  • the information reported by host B is "Host B - Chassis B - Rack A - Site A - Geographic disaster tolerance A".
  • After receiving the physical resource configuration information reported by the physical hosts, the infrastructure management node saves the physical resource configuration information of the different levels according to a tree hierarchical structure.
  • In the data structure, the tree includes the root node, parent nodes, sibling nodes, child nodes, descendant nodes, and so on; the node of the highest level is defined as the root node, and the remaining levels are saved in the tree structure in order from the largest to the smallest.
  • the root node is "geographic disaster tolerance A”
  • the child node is "site A”
  • the child node of site A is "rack A”
  • "rack A" is thus a descendant node of "geographic disaster tolerance A", and "geographic disaster tolerance A" is an ancestor node of "rack A".
  • the physical resource configuration information stored in the infrastructure management node can be in the form "(Geographic disaster tolerance A (Site A (Rack A (Chassis A (Host A, Host B)))))", where the comma means that Host A and Host B are at the same level, and the physical resource level outside a pair of parentheses is higher than the physical resource level inside it.
  • for example, the level of Chassis A is one level higher than the level of Host A and Host B, and Rack A is one level higher than the level of Chassis A. A sketch of building this tree from the reported information is given below.
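  • A minimal sketch, assuming hypothetical report strings of the form shown above, of how the reported paths could be folded into the tree hierarchy and printed back in the parenthesized form; the function names are illustrative only.

```python
def build_tree(reports):
    """Fold 'Host - Chassis - Rack - Site - Geographic' report strings into a nested dict."""
    tree = {}
    for report in reports:
        host, *ancestors = [part.strip() for part in report.split(" - ")]
        node = tree
        for name in reversed(ancestors):   # insert from the root (largest level) downward
            node = node.setdefault(name, {})
        node[host] = {}
    return tree

def to_parenthesized(tree):
    """Render the tree in the '(Geo A (Site A (...)))' style used above."""
    parts = [f"{name} ({to_parenthesized(children)})" if children else name
             for name, children in tree.items()]
    return ", ".join(parts)

reports = [
    "Host A - Chassis A - Rack A - Site A - Geographic disaster tolerance A",
    "Host B - Chassis B - Rack A - Site A - Geographic disaster tolerance A",
]
print(to_parenthesized(build_tree(reports)))
# Geographic disaster tolerance A (Site A (Rack A (Chassis A (Host A), Chassis B (Host B))))
```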
  • the service application determines one or more options that satisfy the isolation and affinity according to the requirements of isolation and affinity that can guarantee the quality of service of the QoS, and feeds back the selection result of the option to the node. For example, in order to ensure the QoS of the service application, the service application needs to run the two master-slave VMs, and the interaction between the two VMs is frequent. The communication bandwidth between the two VMs is required to be no less than 100M. The performance and reliability requirements are relatively high, that is, the affinity and isolation are relatively high. It is necessary to ensure that two VMs are built in the same chassis, but they are built on different physical hosts under the same chassis. After the physical host fails, another physical host works as usual, and the two physical hosts communicate through the bus in the chassis, which is efficient and fast. Therefore, the business application feeds back the selection result of "same chassis, different hosts" to the management node.
  • the infrastructure management node determines, according to the result of the feedback, the node corresponding to the level in the tree hierarchical structure, and the node is the physical resource information corresponding to the saved level.
  • If the result specifies isolation and affinity at different levels, and the level corresponding to the affinity is higher than the level corresponding to the isolation, the management node determines a node, in the tree hierarchical structure, of the level corresponding to the affinity, and determines a plurality of child nodes or a plurality of descendant nodes, under that node, of the level corresponding to the isolation, wherein the plurality of child nodes or descendant nodes are
  • the physical resource information of the level corresponding to the result. For example, as shown in FIG. 3, when the service application feeds back the selection result "same chassis, different hosts", the management node first determines the "chassis" level in the saved tree hierarchy; the only level below the chassis level is the host level.
  • the management node traverses the child nodes of all the "chassis" nodes; only chassis 1, that is, chassis1 in Figure 3, has two different physical hosts, and the other chassis each have only one host.
  • the management node therefore determines that the two physical hosts host1 and host2 under chassis 1 are the physical resources that satisfy the isolation and affinity specified by the result.
  • the management node will establish two VMs, one on host1 and one on host2, and provide them to the service application.
  • If the service application selects "same rack, different hosts", then, following the same steps, the management node traverses the child nodes and descendant nodes of each "rack"; only rack 1, that is, rack1 in FIG. 3, has three different hosts, and the remaining racks each have only one host.
  • the management node determines that the three physical hosts host1, host2 and host3 under rack 1 are physical resources that meet the isolation and affinity specified by the result.
  • the management node can establish the two VMs on host1 and host2, on host1 and host3, or on host2 and host3, and provide them to the service application. Whether to choose host1 and host2, host1 and host3, or host2 and host3 can be set in advance, or can be determined according to factors such as load balancing.
  • If the service application selects "same rack, different sites", the management node determines that the service application has selected an invalid combination of levels, and either does not process the request or feeds an error notification back to the service application.
  • If the result specifies only the level corresponding to the affinity or only the level corresponding to the isolation, the management node determines one node in the tree hierarchical structure for the level corresponding to the affinity, or a plurality of nodes in the tree hierarchical structure for the level corresponding to the isolation.
  • For example, if the service application only specifies "same chassis", the management node only needs to select one chassis, such as chassis1 in Figure 3, and the different VMs can be established on any physical hosts under chassis1. If the service application only specifies "different hosts", the management node only needs to establish the different VMs on different hosts, such as any of host1 to host6 in Figure 3. S103: Establish a virtual resource on the physical resource of the level corresponding to the selection result, and provide the virtual resource to the service application. A sketch of this placement logic is given below.
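  • The following is a sketch, under stated assumptions, of the node-lookup logic described above; it is not the literal implementation of this disclosure. The tree is the nested-dict form from the earlier sketch, LEVELS lists the levels from root to leaf, and all function names are hypothetical.

```python
LEVELS = ["geographic", "site", "rack", "chassis", "host"]   # root -> leaf

def subtrees_at_level(tree, target, level=0):
    """Yield (name, children) for every node at depth `target` of a nested-dict tree."""
    for name, children in tree.items():
        if level == target:
            yield name, children
        else:
            yield from subtrees_at_level(children, target, level + 1)

def place(tree, affinity_level, isolation_level, count=2):
    """Find a node at affinity_level with at least `count` descendants at isolation_level."""
    a, i = LEVELS.index(affinity_level), LEVELS.index(isolation_level)
    if a >= i:
        raise ValueError("affinity level must be a larger physical scope than the isolation level")
    for name, subtree in subtrees_at_level(tree, a):
        candidates = [n for n, _ in subtrees_at_level(subtree, i, a + 1)]
        if len(candidates) >= count:
            return name, candidates[:count]   # tie-breaking (e.g. load balancing) omitted
    return None   # no node satisfies the selection -> report an error to the application

figure3_like = {"Geographic1": {"Site1": {"Rack1": {
    "chassis1": {"host1": {}, "host2": {}},
    "chassis2": {"host3": {}},
}}}}
print(place(figure3_like, "chassis", "host"))   # ('chassis1', ['host1', 'host2'])
```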
  • In this way, the infrastructure management node provides an option for describing resource allocation to the service application and, according to the selection result fed back by the service application, establishes a virtual resource on the physical resource corresponding to the result and provides it to the service application, which satisfies the deployment requirements of the service application and ensures the QoS of the service application.
  • the S20 infrastructure management node acquires a physical resource configuration.
  • the infrastructure management node is a server responsible for scheduling physical resources and providing virtual resources for the application layer.
  • the physical resources are physical resources according to the Advanced Telecom Computing Architecture (ATCA) standard, including a host, a chassis, a rack, and so on, wherein the host includes a CPU, a memory, disk storage, a network interface, and the like.
  • the virtual resource includes a VM, a VM cluster, a virtual volume, a virtual network, and the like. This embodiment uses a VM as an example for description.
  • the relative positional relationship of physical resources is shown in Figure 3.
  • the "host-frame-rack” in the ATCA standard corresponds to "Hos t_Chas si s_Rack" in Figure X.
  • "S i te” refers to a site that contains a large number of physical devices, such as data center devices placed in the same building.
  • Geographic refers to geographical disaster tolerance, that is, the geographical distance between the two regions should meet the requirements of natural disaster disaster tolerance.
  • the host, chassis, rack, site, and geographic disaster recovery levels are small to large.
  • the chassis can contain multiple hosts.
  • the rack can contain multiple chassis.
  • the site contains multiple racks.
  • a geographic disaster tolerance area includes multiple sites.
  • the above five different levels in the physical sense can clearly correspond to a certain degree of physical isolation and affinity.
  • for example, if the same host runs the different VMs, the affinity is the highest and the isolation is the lowest; once that host fails, none of the VMs in the host can run.
  • Different VMs run on different hosts, but in the same chassis.
  • the hosts are connected to each other through the chassis, communicate with each other, and share other components in the chassis, such as power supplies, fans, and management modules.
  • the affinity between the VMs is lower than that of the same host running different VMs, and the isolation is higher than that of the same host running different VMs.
  • If different VMs are in different chassis but all within the same rack, the affinity between the VMs is not as high as when the VMs are in the same chassis, but the isolation is higher; once there is a problem with the power supply or a fan in one chassis, the remaining chassis in the rack can work normally.
  • By analogy, the VM placement "same rack, different sites" has lower affinity and higher isolation than "same chassis, different racks". It should be noted that, in addition to dividing physical resources into "host, chassis, rack, site, geographic disaster tolerance", the "host" can be further subdivided into CPU, memory and hard disk (a single host can contain several CPUs, a certain amount of memory and a hard disk), and the "site" can be subdivided into two levels, within the same LAN and across LANs.
  • the infrastructure management node obtains the physical resource configuration. As shown in Figure 2, the location of a specific physical resource is determined when the physical resource architecture is set up; therefore, the physical resource configuration can be entered manually, or acquired by the infrastructure management node.
  • The real-time or periodic acquisition may be performed by multicasting, in real time or periodically, physical resource configuration request information to each host within the node's management scope; each host then reports information about its own resource configuration status to the infrastructure management node.
  • the host A belongs to the chassis A
  • the chassis A belongs to the rack A
  • the rack A belongs to the site A
  • the site A belongs to the geographic disaster-tolerant area A.
  • the information reported by host A is "Host A - Chassis A - Rack A - Site A - Geographic disaster tolerance A", while host B belongs to chassis B and chassis B belongs to rack A, so the information reported by host B is "Host B - Chassis B - Rack A - Site A - Geographic disaster tolerance A".
  • After receiving the physical resource configuration information reported by the physical hosts, the infrastructure management node saves the physical resource configuration information of the different levels according to a tree hierarchical structure.
  • In the data structure, the tree includes the root node, parent nodes, sibling nodes, child nodes, descendant nodes, and so on; the node of the highest level is defined as the root node, and the remaining levels are saved in the tree structure in order from the largest to the smallest.
  • the root node is "geographic disaster tolerance A”
  • the child node is "site A”
  • the child node of site A is "rack A”
  • "rack A" is a descendant node of "geographic disaster tolerance A"
  • and "geographic disaster tolerance A" is an ancestor node of "rack A".
  • the physical resource configuration information stored by the infrastructure management node may be in the form "(Geographic disaster tolerance A (Site A (Rack A (Chassis A (Host A, Host B)))))", where the comma indicates that Host A and Host B are at the same level.
  • the level of physical resources outside the parentheses is one level higher than the level in parentheses.
  • a business application is an application, and the application needs the corresponding virtual machine VM to provide resources at runtime.
  • the service application requests the resource from the infrastructure management node, and the infrastructure management node is responsible for providing the corresponding physical resource, and establishing a VM on the provided physical resource to provide the established VM to the service application.
  • the infrastructure management node provides the service application with an option for describing the resource allocation.
  • the specific provision may be to extend the interface providing the option and provide the option to the service application through the interface.
  • the options are at least one of "host-affinity", "host-isolation", "chassis-affinity", "chassis-isolation", "rack-affinity", "rack-isolation", "site-affinity", "site-isolation", "geographic-affinity", "geographic-isolation" and "not specified". The embodiment of the present invention only discloses one spelling of the options; other spellings, such as "host_affinity", although written differently, mean the same thing, that is, the different options represent the different levels of host, chassis, rack and so on, and they also fall within the protection scope of the embodiments of the present invention.
  • "Not specified" means that no specific requirement is specified for establishing the VMs.
  • "Host-affinity" means that different VMs are built on the same host.
  • Host_Isolation means that different VMs are built on different hosts,
  • chassis- Affinity means that different VMs are on the same chassis,
  • chassis-isolation means that different VMs are on different chassis.
  • the other options likewise indicate whether the different VMs are, or are not, within the same instance of the level named by the option.
  • In order of affinity from high to low, the levels are host, chassis, rack, site and geographic disaster tolerance;
  • in order of isolation from high to low, they are geographic disaster tolerance, site, rack, chassis and host.
  • CPU_ affinity means that the VM uses the same virtual CPU.
  • the virtual CPU is implemented by the virtualization technology of the CPU.
  • the virtualization technology of the CPU can simulate multiple CPUs in parallel with a single CPU, allowing one platform to run multiple operating systems at the same time.
  • CPU-isolation means that the VMs each use different virtual CPUs. If the site level is subdivided into within-LAN and cross-LAN, corresponding affinity and isolation options for those sub-levels can be added in the same way. A sketch of generating these option strings is given below.
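  • For illustration only, the option strings listed above can be generated mechanically from the level names; BASE_LEVELS, EXTRA_LEVELS and allocation_options are hypothetical names, and the CPU/LAN entries reflect the optional sub-levels mentioned in the text.

```python
BASE_LEVELS = ["host", "chassis", "rack", "site", "geographic"]
EXTRA_LEVELS = ["cpu", "lan"]   # from subdividing the host and site levels

def allocation_options(levels=BASE_LEVELS):
    """Return one affinity and one isolation option per level, plus 'not specified'."""
    options = [f"{level}-{kind}" for level in levels for kind in ("affinity", "isolation")]
    return options + ["not specified"]

print(allocation_options())
# ['host-affinity', 'host-isolation', 'chassis-affinity', ..., 'not specified']
print(allocation_options(BASE_LEVELS + EXTRA_LEVELS))   # with the CPU/LAN options added
```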
  • the business application determines a selection result that meets its own resource requirements.
  • the business application determines the selection result that meets the resource requirements of its own operation according to the above options provided by the infrastructure management node, and feeds the selection result to the infrastructure management node. Specifically, the business application receives and saves the options provided by the infrastructure management node, determines one or more options according to its own QoS requirements, and sends the option back to the infrastructure management node. For example, in order to ensure the QoS of its own operation, the service application needs to run the two main VMs, and the interaction between the two VMs is frequent. The communication bandwidth between the two VMs is required to be no less than 100M. The performance and reliability requirements are relatively high, that is, the affinity and isolation are relatively high.
  • In terms of affinity, both "host-affinity" and "chassis-affinity" satisfy this bandwidth requirement, but because the business application requires two "master-standby" VMs to run, the active and standby VMs cannot be established on the same physical host, so the only options that fit the service application are "chassis-affinity" and "host-isolation".
  • the infrastructure management node After receiving the result selected by the service application, the infrastructure management node selects the physical resource that meets the selection result, and establishes the VM on the selected physical resource.
  • the business application therefore selects the options "chassis-affinity" and "host-isolation".
  • According to the relative position relationship of the physical resources shown in Figure 3, only host1 and host2 satisfy this condition, so the VMs are established on host1 and host2 respectively. For a selection result in which only "host-affinity" is selected and no isolation is specified, any one of host1 to host6 satisfies the condition, and the different VMs can be established on any one of host1 to host6. A worked sketch of these two cases follows below.
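  • The following worked sketch uses a hypothetical topology consistent with Figure 3 as described above (six hosts, with only chassis1 holding two of them); the variable names and the exact host-to-chassis assignment beyond chassis1 are assumptions for illustration.

```python
chassis_hosts = {
    "chassis1": ["host1", "host2"],
    "chassis2": ["host3"],
    "chassis3": ["host4"],
    "chassis4": ["host5"],
    "chassis5": ["host6"],
}

# "chassis-affinity" + "host-isolation": a single chassis with at least two distinct hosts.
matches = [(c, hosts) for c, hosts in chassis_hosts.items() if len(hosts) >= 2]
print(matches)      # [('chassis1', ['host1', 'host2'])] -> one VM on each of the two hosts

# "host-affinity" only, isolation not specified: both VMs go on any single host.
all_hosts = [h for hosts in chassis_hosts.values() for h in hosts]
print(all_hosts)    # ['host1', 'host2', 'host3', 'host4', 'host5', 'host6']
```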
  • the infrastructure management node determines, according to the result of the feedback, the node corresponding to the level in the tree hierarchical structure, and the node is the physical resource information corresponding to the saved level.
  • If the result specifies isolation and affinity at different levels, and the level corresponding to the affinity is higher than the level corresponding to the isolation, the infrastructure management node determines a node, in the tree hierarchical structure, of the level corresponding to the affinity, and determines a plurality of child nodes or a plurality of descendant nodes, under that node, of the level corresponding to the isolation; the plurality of child nodes or descendant
  • nodes are the physical resource information of the level corresponding to the result. For example, as shown in Figure 3, the business application feeds back the "same chassis, different hosts" selection result, that is, "chassis-affinity" and "host-isolation"; the infrastructure management node then first determines the "chassis" level in the saved tree hierarchy.
  • the infrastructure management node traverses the child nodes of all the "chassis" nodes; only chassis 1, that is, chassis1 in Figure 3, has two different physical hosts, and the other chassis each have only one host.
  • the infrastructure management node therefore determines that the two physical hosts host1 and host2 under chassis 1 are the physical resources that meet the isolation and affinity specified by the result.
  • the infrastructure management node will create two VMs on host1 and host2 and provide them to the service application.
  • If the service application selects "same rack, different hosts", the infrastructure management node traverses the child nodes and descendant nodes of each "rack"; only rack 1, that is, rack1 in Figure 3, has three different hosts, and the remaining racks each have only one host, so the infrastructure management node determines that
  • the three physical hosts host1, host2 and host3 under rack 1 are physical resources that meet the isolation and affinity specified by the result.
  • the infrastructure management node can establish the two VMs on host1 and host2, on host1 and host3, or on host2 and host3, and provide them to the service application. Whether to select host1 and host2, host1 and host3, or host2 and host3 can be preset or determined according to factors such as load balancing.
  • If the service application selects "same rack, different sites", the infrastructure management node determines that the service application has selected an invalid combination of levels, and either does not process the request or feeds an error notification back to the service application.
  • If the result specifies only the level corresponding to the affinity or only the level corresponding to the isolation, the infrastructure management node determines one node in the tree hierarchy for the level corresponding to the affinity, or a plurality of nodes in the tree hierarchy for the level corresponding to the isolation. For example, if the service application only specifies "same chassis", the infrastructure management node only needs to select one chassis, such as chassis1 in Figure 3, and the different VMs are established on any physical hosts under chassis1. If the service application only specifies "different hosts", the infrastructure management node only needs to establish the different VMs on different hosts, such as any of host1 to host6 in Figure 3.
  • the management node allocates the established VM to the business application and completes the resource allocation operation.
  • In this way, the infrastructure management node provides an option for describing resource allocation to the service application and, according to the selection result fed back by the service application, establishes a virtual resource on the physical resource corresponding to the result and provides it to the service application, which satisfies the deployment requirements of the service application and ensures the QoS of the service application.
  • FIG. 5 is a structural diagram of the infrastructure management node. As shown in Figure 5, the infrastructure management node includes:
  • An option providing unit 301 configured to provide an option for describing resource allocation to a service application, where the option corresponds to at least two different levels of physical resources;
  • the physical resources include, but are not limited to, a physical host, a chassis, a rack, a site, and a geographic disaster tolerance.
  • the physical host includes a central processing unit (CPU), a memory, and a hard disk, and the site includes a local area network and a cross-local area network.
  • the virtual resource is included in the embodiment of the present invention, including but not limited to a virtual machine.
  • Figure 3 is a diagram showing the relative positional relationship between physical resources of a tree structure.
  • the root nodes of the two trees in Figure 3 are "Geographic1" and "Geographic2" respectively, and the successor nodes of the root nodes are "Site", "Rack", "Chassis" and "Host".
  • Site refers to a site that contains a large number of physical devices, such as data center devices placed in the same building.
  • Geographic refers to geographical disaster tolerance, that is, the geographical distance between the two regions should meet the requirements of natural disaster disaster tolerance.
  • the host, chassis, rack, site, and geographic disaster tolerance levels go from small to large: a chassis can contain multiple hosts, a rack can contain multiple chassis, a site contains multiple racks, and a geographic disaster tolerance area contains multiple sites.
  • the above five different levels in the physical sense can clearly correspond to a certain degree of physical isolation and affinity. For example, if the same host runs different VMs, the affinity is the highest and the isolation is the lowest. Once the host fails, the VMs in the host cannot run.
  • If different VMs run on different hosts but within the same chassis, the hosts are connected to each other through the chassis, communicate with each other, and share the other components in the chassis, such as power supplies, fans and management modules; the affinity between these VMs is lower than when the same host runs the different VMs, and the isolation is higher: if one host fails, the VMs on the other hosts work as usual, but once the power supply or fans in the chassis fail, all the hosts in the chassis fail.
  • If different VMs are in different chassis but all within the same rack, the affinity between the VMs is not as high as when the VMs are in the same chassis, but the isolation is higher; once there is a problem with the power supply or a fan in one chassis, the remaining chassis in the rack can work normally.
  • By analogy, the VM placement "same rack, different sites" has lower affinity and higher isolation than "same chassis, different racks".
  • Before the option is provided, physical resource configuration request information needs to be sent to one or more physical hosts within the node's jurisdiction. If there is only one physical host within the node's management scope, the request message only needs to be sent to that physical host; if there are multiple physical hosts within the management scope, the sending unit of the infrastructure management node multicasts the request information, in real time or periodically, to all the physical hosts within the node's scope. The configuration of the different physical hosts may change, for example when hosts are powered on or off or when a chassis or rack is replaced, so the infrastructure management node obtains the resource configuration information through real-time or periodic requests and updates the saved resource configuration.
  • After receiving the resource configuration request, the physical host sends its own configuration information in the network to the receiving unit of the infrastructure management node.
  • the receiving unit of the infrastructure management node receives physical resource configuration information sent by the one or more physical hosts, where the physical resource configuration information includes different hierarchical correspondences.
  • the host A belongs to the chassis A
  • the chassis A belongs to the rack A
  • the rack A belongs to the site A
  • the site A belongs to the geographic disaster-tolerant area A.
  • the information reported by host A is "Host A - Chassis A - Rack A - Site A - Geographic disaster tolerance A"
  • host B belongs to chassis B
  • chassis B belongs to rack A
  • the information reported by host B is "Host B - Chassis B - Rack A - Site A - Geographic disaster tolerance A".
  • After receiving the physical resource configuration information reported by the physical hosts, the storage unit of the infrastructure management node saves the physical resource configuration information of the different levels according to a tree hierarchical structure.
  • In the data structure, the tree includes the root node, parent nodes, sibling nodes, child nodes, descendant nodes, and so on; the node of the highest level is defined as the root node, and the remaining levels are saved in the tree structure in order from the largest to the smallest.
  • the root node is "geographic disaster tolerance A”
  • the child node is "site A”
  • the child node of site A is "rack A”
  • "rack A" is thus a descendant node of "geographic disaster tolerance A"
  • and "geographic disaster tolerance A" is an ancestor node of "rack A".
  • the physical resource configuration information stored in the storage unit can be in the form "(Geographic disaster tolerance A (Site A (Rack A (Chassis A (Host A, Host B)))))", where the comma indicates that Host A and Host B are at the same level, and the physical resource level outside a pair of parentheses is higher than the physical resource level inside it.
  • the level of Chassis A is one level higher than that of Host A and Host B.
  • Rack A is one level higher than the level of Chassis A.
  • a determining unit 302 configured to determine, according to a result of the selection of the option that is fed back by the service application, a physical resource corresponding to the level of the selection result;
  • The service application determines, according to the isolation and affinity requirements that guarantee its quality of service (QoS), one or more options satisfying that isolation and affinity, and feeds the selection result of the options back to the determining unit 302 of the node.
  • the service application needs to run the two main VMs, and the interaction between the two VMs is frequent.
  • the communication bandwidth between the two VMs is required to be no less than 100 M.
  • the application performance and reliability requirements are relatively high, that is, the affinity and isolation are relatively high. It is necessary to ensure that two VMs are built in the same chassis, but they are built on different physical hosts under the same chassis. After a physical host fails, another physical host works as usual, and two physical hosts communicate through the bus in the chassis, which is efficient and fast. Therefore, the business application feeds back to the determining unit 302 the result of the selection of "same chassis, different hosts”.
  • the determining unit 302 determines, according to the result of the feedback, the node of the result corresponding level in the tree hierarchical structure, where the node is the saved physical resource information corresponding to the level.
  • If the result specifies isolation and affinity at different levels, and the level corresponding to the affinity is higher than the level corresponding to the isolation, the determining unit 302 determines a node, in the tree hierarchical structure, of the level corresponding to the affinity, and determines a plurality of child nodes or a plurality of descendant nodes, under that node, of the level corresponding to the isolation; these child or descendant nodes are the physical resource information of the level corresponding to the result. For example, as shown in FIG. 3, when the service application feeds back the selection result "same chassis, different hosts", the determining unit 302 first determines the "chassis" level in the saved tree hierarchy;
  • the only level below the chassis level is the host level.
  • the determining unit 302 traverses all the child nodes of the "chassis" nodes; only chassis 1, that is, chassis1 in Figure 3, has two different physical hosts, and the other chassis each have only one host, so the determining unit 302 determines that the two physical hosts host1 and host2 under chassis 1 are physical resources that satisfy the isolation and affinity specified by the result.
  • the virtual resource providing unit of the infrastructure management node then sets up two VMs, one on host1 and one on host2, and provides them to the service application.
  • If the service application selects "same rack, different hosts", the infrastructure management node traverses the child nodes and descendant nodes of each "rack"; only rack 1, that is, rack1 in Figure 3,
  • has three different hosts, and the remaining racks each have only one host, so the determining unit 302 determines that the three physical hosts host1, host2 and host3 under rack 1 are the physical resources that meet the isolation and affinity specified by the result.
  • the determining unit 302 may establish the two VMs on host1 and host2, on host1 and host3, or on host2 and host3, so that the virtual resource providing unit provides them to the service application; whether to select host1 and host2, host1 and host3, or host2 and host3 can be set in advance, or can be determined according to factors such as load balancing.
  • If the service application selects "same rack, different sites", the infrastructure management node determines, through the determining unit, that the service application has selected an invalid combination of levels, and either does not process the request or feeds an error notification back to the service application.
  • If the result specifies only the level corresponding to the affinity or only the level corresponding to the isolation, the determining unit 302 determines one node in the tree hierarchical structure for the level corresponding to the affinity, or a plurality of nodes in the tree hierarchical structure for the level corresponding
  • to the isolation. For example, if the service application only specifies "same chassis", the determining unit 302 only needs to select one chassis, such as chassis1 in FIG. 3, and the different VMs are established on any physical hosts under chassis1. If the service application only specifies "different hosts", the determining unit 302 only needs to establish the different VMs on different hosts, such as any of host1 to host6 in the figure.
  • the virtual resource providing unit 303 is configured to establish a virtual resource on the physical resource corresponding to the level of the selection result, and provide the virtual resource to the service application.
  • FIG. 6 depicts a hardware architecture diagram of an infrastructure management node provided by another embodiment of the present invention, including at least one processor 401 (e.g., a CPU), at least one network interface 402 or other communication interface, a memory 403, and at least one communication bus 404 used to implement connection and communication between these devices.
  • the processor 401 is configured to execute an executable module, such as a computer program, stored in the memory 403.
  • the memory 403 may include a high-speed random access memory (RAM), and may also include a non-volatile memory, such as at least one disk memory.
  • the communication connection between the system gateway and at least one other network element is implemented by at least one network interface 402 (which may be wired or wireless), and may use an Internet, a wide area network, a local network, a metropolitan area network, or the like.
  • the memory 403 stores program instructions that may be executed by the processor 401, where the program instructions include the option providing unit 301, the determining unit 302, and the virtual resource providing unit 303; for the specific implementations of these units, refer to the corresponding units disclosed in FIG. 5, and details are not described here again. An illustrative composition of the three units is sketched below.
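  • The following is a minimal sketch, not an API defined by this disclosure, of how the three units could be composed; the class and method names are hypothetical, and the determining unit is reduced to a flat chassis-to-hosts map purely for illustration.

```python
class OptionProvidingUnit:
    """Unit 301: offers the resource-allocation options to the service application."""
    def provide(self):
        levels = ["host", "chassis", "rack", "site", "geographic"]
        return [f"{l}-{k}" for l in levels for k in ("affinity", "isolation")] + ["not specified"]

class DeterminingUnit:
    """Unit 302: determines physical resources matching the fed-back selection."""
    def __init__(self, chassis_hosts):
        self.chassis_hosts = chassis_hosts     # saved configuration, flattened here
    def determine(self, count=2):
        # Stand-in for "chassis-affinity + host-isolation": first chassis with enough hosts.
        for hosts in self.chassis_hosts.values():
            if len(hosts) >= count:
                return hosts[:count]
        return None

class VirtualResourceProvidingUnit:
    """Unit 303: establishes virtual resources on the chosen physical resources."""
    def establish(self, physical_hosts):
        return [f"VM on {h}" for h in physical_hosts]   # stand-in for real VM creation

class InfrastructureManagementNode:
    """Bundles units 301-303 the way FIG. 5 and FIG. 6 describe."""
    def __init__(self, chassis_hosts):
        self.option_providing_unit = OptionProvidingUnit()
        self.determining_unit = DeterminingUnit(chassis_hosts)
        self.virtual_resource_providing_unit = VirtualResourceProvidingUnit()

node = InfrastructureManagementNode({"chassis1": ["host1", "host2"], "chassis2": ["host3"]})
print(node.virtual_resource_providing_unit.establish(node.determining_unit.determine()))
# ['VM on host1', 'VM on host2']
```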
  • In the infrastructure management node for allocating resources provided by this embodiment of the present invention, the option providing unit provides an option for describing resource allocation to the service application; the determining unit determines, according to the selection result fed back by the service application, the physical resource corresponding to the result; and the virtual resource providing unit establishes virtual resources on that physical resource and provides them to the service application, which meets the deployment requirements of the service application and ensures the QoS of the service application.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a computer.
  • computer readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • If the software is transmitted from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium.
  • As used herein, disk (Disk) and disc (disc) include a compact disc (CD), a laser disc, an optical disc, a digital versatile disc (DVD), a floppy disk, and a Blu-ray disc, where a disk usually reproduces data magnetically, while a disc reproduces data optically with a laser. Combinations of the above should also be included within the protection scope of computer readable media.

Abstract

The present invention relates to a method and an apparatus for allocating resources. The method includes: an infrastructure management node provides options to a service application; according to the selection result of the options fed back by the service application, a physical resource of the level corresponding to the result is determined; a virtual resource is established on the physical resource corresponding to the result, and the virtual resource is provided to the service application. Correspondingly, the present invention provides an infrastructure management node, which meets the deployment requirements of the service application and guarantees the QoS of the service application.

Description

Method and apparatus for allocating resources. This application claims priority to Chinese Patent Application No. 201210288410.1, filed with the Chinese Patent Office on August 14, 2012 and entitled "Method and apparatus for allocating resources", which is incorporated herein by reference in its entirety. TECHNICAL FIELD
The present invention relates to the IT field, and in particular to a method and an apparatus for allocating resources.
BACKGROUND
With the continuing spread of the cloud concept, the layered architecture of applications and infrastructure management is widely used in solutions that require dynamic resource allocation. The architecture is divided into three layers: an upper application layer, an intermediate infrastructure management layer, and a lower physical resource layer. An upper-layer application requests virtual resources from the infrastructure management layer to ensure that the application can run, and the running of an application needs one or more virtual resources. The infrastructure management layer selects suitable physical resources and establishes virtual resources on the selected physical resources to provide to the application. Because dynamic resource allocation must be supported, the application is not aware of the specific physical devices on which it is deployed. However, whether different virtual resources are deployed on the same physical resource has a significant impact on quality attributes such as the reliability and performance of the carried services. For example, two virtual resources deployed on the same physical resource can communicate with each other most efficiently, so the application runs efficiently; two virtual resources deployed on different physical resources have better fault isolation from each other, so the application runs more reliably, that is, the two virtual resources will not fail at the same time because of the failure of one host. The running of an application can therefore be measured in two dimensions, namely isolation and affinity. Generally, the higher the isolation between two virtual resources, the lower the affinity, and the higher the reliability of the application they support; conversely, the higher the affinity, the lower the isolation, which means that the virtual resources cooperate more closely, so that the application they support can achieve higher service performance.
In the prior art, the infrastructure management layer can only offer the upper-layer application two deployment modes: resources deployed separately or not separately. For an application that must balance isolation and affinity at the same time, for example a telecommunications system using the standard hardware architecture of the Advanced Telecom Computing Architecture (ATCA), the deployment requirement is to guarantee a certain degree of isolation while also taking a certain degree of affinity into account; however, the infrastructure management layer cannot provide a deployment strategy that takes both isolation and affinity into account, and therefore cannot meet the deployment requirements of such applications. SUMMARY
In view of this, the embodiments of the present invention provide a method and an apparatus for allocating resources, which are used to solve the problem that the infrastructure management layer cannot provide a deployment strategy that takes both isolation and affinity into account, so as to meet application deployment requirements and guarantee the quality of service (QoS) of a service application.
According to a first aspect, an embodiment of the present invention provides a method for allocating resources, including:
providing, by an infrastructure management node, an option for describing resource allocation to a service application, where the option corresponds to at least two different levels of physical resources;
determining, according to a selection result of the option fed back by the service application, a physical resource of the level corresponding to the selection result; and
establishing a virtual resource on the physical resource of the level corresponding to the selection result, and providing the virtual resource to the service application.
In a first possible implementation of the first aspect, before the option is provided to the service application, the method further includes:
sending physical resource configuration request information to physical hosts within the management scope of the infrastructure management node;
receiving physical resource configuration information sent by the physical hosts, where the physical resource configuration information includes the correspondence between physical resources of different levels; and
saving the physical resource configuration information according to a tree hierarchical structure.
With reference to the first possible implementation of the first aspect, in a second possible implementation, determining the physical resource of the level corresponding to the result includes: determining a node, in the tree hierarchical structure, of the level corresponding to the result, where the node is the saved information of the physical resource of that level. With reference to the second possible implementation of the first aspect, in a third possible implementation, if the result specifies isolation and affinity at different levels, and the level corresponding to the affinity is higher than the level corresponding to the isolation, the determining of the node of the level corresponding to the result in the tree hierarchical structure includes:
determining a node, in the tree hierarchical structure, of the level corresponding to the affinity, and determining a plurality of child nodes or a plurality of descendant nodes, under that node, of the level corresponding to the isolation, where the plurality of child nodes or descendant nodes are the physical resource information of the level corresponding to the result.
With reference to the second possible implementation of the first aspect, in a fourth possible implementation, if the result specifies only the level corresponding to the affinity or only the level corresponding to the isolation, the determining of the node of the level corresponding to the result in the tree hierarchical structure includes:
determining one node in the tree hierarchical structure for the level corresponding to the affinity, or determining a plurality of nodes in the tree hierarchical structure for the level corresponding to the isolation. With reference to the first aspect or any one of the first to fourth possible implementations of the first aspect, in a fifth possible implementation, the selection result of the option is determined by the service application according to the isolation and affinity requirements for guaranteeing its own quality of service (QoS), and the isolation and affinity of the levels corresponding to the options.
According to a second aspect, an embodiment of the present invention provides an infrastructure management node for allocating resources, including: an option providing unit, configured to provide an option for describing resource allocation to a service application, where the option corresponds to at least two different levels of physical resources;
a determining unit, configured to determine, according to a selection result of the option fed back by the service application, a physical resource of the level corresponding to the selection result; and
a virtual resource providing unit, configured to establish a virtual resource on the physical resource of the level corresponding to the selection result, and provide the virtual resource to the service application. In a first possible implementation of the second aspect, the management node further includes: a sending unit, configured to send physical resource configuration request information to physical hosts within the management scope of the infrastructure management node;
a receiving unit, configured to receive physical resource configuration information sent by the physical hosts, where the physical resource configuration information includes the correspondence between physical resources of different levels; and
a storage unit, configured to save the physical resource configuration information according to a tree hierarchical structure.
With reference to the first possible implementation of the second aspect, in a second possible implementation, determining the physical resource of the level corresponding to the result includes: the determining unit determines a node, in the tree hierarchical structure, of the level corresponding to the result, where the node is the saved information of the physical resource of that level.
With reference to the second possible implementation of the second aspect, in a third possible implementation, if the result specifies isolation and affinity at different levels, and the level corresponding to the affinity is higher than the level corresponding to the isolation, the determining unit determining the node of the level corresponding to the result in the tree hierarchical structure includes:
determining a node, in the tree hierarchical structure, of the level corresponding to the affinity, and determining a plurality of child nodes or a plurality of descendant nodes, under that node, of the level corresponding to the isolation, where the plurality of child nodes or descendant nodes are the physical resource information of the level corresponding to the result.
With reference to the second possible implementation of the second aspect, in a fourth possible implementation, if the result specifies only the level corresponding to the affinity or only the level corresponding to the isolation, the determining unit determining the node of the level corresponding to the result in the tree hierarchical structure includes:
determining one node in the tree hierarchical structure for the level corresponding to the affinity, or determining a plurality of nodes in the tree hierarchical structure for the level corresponding to the isolation.
With the foregoing solutions, the management node for allocating resources provided by the embodiments of the present invention has the infrastructure management node send an option for describing resource allocation to the service application and, according to the selection result fed back by the service application, establish a virtual resource on the physical resource corresponding to that result and provide it to the service application, which meets the deployment requirements of the service application and guarantees the QoS of the service application. BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is an architecture diagram of the application environment of an embodiment of the present invention;
FIG. 2 is a flowchart of Embodiment 1 of the present invention;
FIG. 3 is a diagram of the relative positional relationship of physical resources in Embodiment 1 of the present invention;
FIG. 4 is a flowchart of Embodiment 2 of the present invention;
FIG. 5 is a structural diagram of the management node in Embodiment 3 of the present invention;
FIG. 6 is a diagram of the hardware architecture of the management node in Embodiment 3 of the present invention.
DETAILED DESCRIPTION
The application environment of the embodiments of the present invention can be divided into three layers: an upper application layer, an intermediate infrastructure management layer, and a lower physical resource layer. An application is a program that runs on the application layer; at runtime it contains one or more processes, which may be distributed over one or more virtual machines (VMs). The infrastructure management layer virtualizes physical resources and provides virtual resources such as VMs, VM clusters, virtual volumes, and virtual networks. A VM cluster is a grouping of VMs, and each application corresponds to one VM cluster. VMs are uniformly scheduled and managed by the infrastructure management layer and are attached to physical hosts. A physical host can establish one or more VMs, and a VM can be fixed on one physical host or migrated to another physical host. The physical resource layer provides operations on physical hosts, such as installation, deployment, upgrade, and powering on and off; in the ATCA hardware architecture, the physical resource layer includes hosts, chassis, racks, and so on. The methods disclosed in the foregoing embodiments of the present invention may be implemented in, or by, a central processing unit. The central processing unit may be an integrated circuit chip with signal processing capability. In an implementation process, the steps of the above methods may be completed by an integrated logic circuit of hardware in the central processing unit or by instructions in the form of software. The central processing unit used to perform the methods disclosed in the embodiments of the present invention may be a general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention.
A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed with reference to the embodiments of the present invention may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the central processing unit reads the information in the memory and completes the steps of the foregoing methods in combination with its hardware. Embodiment 1:
As shown in FIG. 2, the procedure of allocating resources in this embodiment of the present invention is as follows:
S101. The infrastructure management node provides options for describing resource allocation to a service application, where the options correspond to at least two different levels of physical resources.
In this embodiment of the present invention, the physical resources include, but are not limited to, physical hosts, chassis, racks, sites, and geographic disaster recovery areas. A physical host includes a central processing unit (CPU), memory, and a hard disk; a site includes a local area network and cross-local-area-network scopes. In this embodiment of the present invention, virtual resources include, but are not limited to, virtual machines.
The infrastructure management node provides the options for describing resource allocation to the service application. The infrastructure management node may be a server, and the service application is an application program in the application layer.
The relative positional relationships between physical resources can be summarized by FIG. 3. It should be noted that the division of physical resources in this embodiment of the present invention is not limited to that of FIG. 3; subdividing a host into a CPU and memory, subdividing a site into a local area network and cross-local-area-network scopes, or removing the site level also falls within the protection scope of the embodiments of the present invention. FIG. 3 is a tree-structured diagram of the relative positional relationships between physical resources. The root nodes of the two trees in FIG. 3 are "Geographic1" and "Geographic2", and the successor nodes of the root nodes are "Site", "Rack", "Chassis", and "Host", where "Site" is the predecessor node of "Rack", "Rack" is the predecessor node of "Chassis", and "Chassis" is the predecessor node of "Host". The "host-chassis-rack" of the ATCA standard corresponds to "Host_Chassis_Rack" in FIG. 3.
"Site" refers to a site that contains a large number of physical devices, for example the data center equipment placed in one building. "Geographic" refers to geographic disaster recovery, that is, the geographic distance between two areas should meet the requirements of disaster recovery against natural disasters. From the smallest level to the largest, the levels are host, chassis, rack, site, and geographic disaster recovery: a chassis can contain multiple hosts, a rack can contain multiple chassis, a site contains multiple racks, and a geographic disaster recovery area contains multiple sites. Each of these five physically meaningful levels clearly corresponds to a certain degree of physical isolation and affinity. For example, if different VMs run on the same host, the affinity is the highest and the isolation the lowest; once the host fails, none of the VMs on it can run. If different VMs run on different hosts within the same chassis, the hosts are interconnected and communicate through the chassis and share its other components, such as the power supply, fans, and management module; the affinity between the VMs is lower, and the isolation higher, than when the VMs run on the same host. If one host fails, the VMs on the other hosts keep working; however, once the power supply or fans in the chassis fail, every host in the chassis fails. If different VMs are in different chassis but within the same rack, the affinity between the VMs is lower, and the isolation higher, than when the VMs are in the same chassis; if the power supply or a fan in one chassis has a problem, the other chassis in the rack keep working normally. By analogy, VMs in the same site but in different racks have lower affinity and higher isolation than VMs in the same rack but in different chassis.
Before providing the options for describing resource allocation to the service application, the infrastructure management node may also send physical resource configuration request information to one or more physical hosts within its management scope. If there is only one physical host within the management scope of the infrastructure management node, the request message only needs to be sent to that physical host; if there are multiple physical hosts within the management scope, the infrastructure management node broadcasts the request information, in real time or periodically, to all physical hosts at the levels it manages. Because the configuration of different physical hosts may change, for example being powered on or off at different times, or being moved to another chassis or rack, the infrastructure management node obtains the resource configuration information through real-time or periodic sending and updates the resource configuration it stores. After receiving the resource configuration request, a physical host sends the information about its own configuration at networking time to the management node. The infrastructure management node receives the physical resource configuration information sent by the one or more physical hosts, where the physical resource configuration information includes the correspondences between the different levels. For example, host A belongs to chassis A, chassis A belongs to rack A, rack A belongs to site A, and site A belongs to geographic disaster recovery area A; the information reported by host A is then "host A - chassis A - rack A - site A - geographic disaster recovery A". Host B belongs to chassis B, and chassis B belongs to rack A; the information reported by host B is then "host B - chassis B - rack A - site A - geographic disaster recovery A".
After receiving the physical resource configuration information reported by the physical hosts, the infrastructure management node stores the physical resource configuration information of the different levels in a tree-shaped hierarchical structure. In the data structure, the tree includes a root node, parent nodes, sibling nodes, child nodes, descendant nodes, and so on; the highest level is defined as the root node, and the remaining levels are stored in the tree structure in descending order. In the example above, the root node is "geographic disaster recovery A", its child node is "site A", and the child node of site A is "rack A", so "rack A" is a descendant node of "geographic disaster recovery A" and "geographic disaster recovery A" is an ancestor node of "rack A". According to the information reported by hosts A and B in the example above, the physical resource configuration information stored by the infrastructure management node may take the form "(geographic disaster recovery A(site A(rack A(chassis A(host A, host B))))", where the comma indicates that host A and host B are at the same level, and the physical resource level outside a pair of parentheses is higher than the physical resource level inside it; for example, chassis A is one level higher than host A and host B, and rack A is one level higher than chassis A.
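To make the tree storage concrete, the following Python sketch shows one possible in-memory representation of reported configuration paths such as "host A - chassis A - rack A - site A - geographic disaster recovery A". The `Node` class, the `add_report` helper, and the level names are illustrative assumptions for this sketch, not part of the disclosed embodiment.

```python
# Minimal sketch (not the patented implementation): store reported
# "geo - site - rack - chassis - host" paths in a tree keyed by level.
LEVELS = ["geo", "site", "rack", "chassis", "host"]  # largest to smallest

class Node:
    def __init__(self, level, name):
        self.level = level          # e.g. "chassis"
        self.name = name            # e.g. "chassis A"
        self.children = {}          # child name -> Node

    def child(self, level, name):
        # Keep the existing child if it is already there, so repeated
        # reports do not duplicate nodes.
        return self.children.setdefault(name, Node(level, name))

def add_report(root, report):
    """report maps level -> name, e.g. {'geo': 'geo A', 'site': 'site A', ...}."""
    node = root
    for level in LEVELS:
        node = node.child(level, report[level])
    return node

root = Node("root", "datacenters")
add_report(root, {"geo": "geo A", "site": "site A", "rack": "rack A",
                  "chassis": "chassis A", "host": "host A"})
add_report(root, {"geo": "geo A", "site": "site A", "rack": "rack A",
                  "chassis": "chassis B", "host": "host B"})
```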
S102. According to the selection result of the options fed back by the service application, determine the physical resource of the level corresponding to the selection result.
The service application determines, according to the isolation and affinity requirements that can guarantee its own quality of service (QoS), one or more options that satisfy the isolation and affinity, and feeds the selection result of the options back to the node. For example, to guarantee the QoS of its operation, the service application needs two VMs running in active/standby mode, the two VMs interact frequently, and the communication bandwidth between them must not be lower than 100 Mbit/s. The performance and reliability requirements of the service application are therefore both relatively high, that is, both the affinity and the isolation are relatively high: the two VMs must be created in the same chassis but on different physical hosts under that chassis, so that when one physical host fails the other keeps working, and the two physical hosts communicate over the bus inside the chassis, which is efficient and fast. The service application therefore feeds back to the management node the selection result "same chassis, different hosts".
According to the fed-back result, the infrastructure management node determines the node, in the tree-shaped hierarchical structure, of the level corresponding to the result, where the node is the stored physical resource information of the level corresponding to the result.
If the result specifies isolation and affinity at different levels, and the level corresponding to the affinity is higher than the level corresponding to the isolation, the management node determines the node, in the tree-shaped hierarchical structure, of the level corresponding to the affinity, and determines multiple child nodes or multiple descendant nodes, in the hierarchy under that node, of the level corresponding to the isolation; the multiple child nodes or descendant nodes are the physical resource information of the level corresponding to the result. For example, as shown in FIG. 3, the service application feeds back the selection result "same chassis, different hosts". The management node first determines the "chassis" level in the stored tree-shaped hierarchical structure; the only level below the chassis level is the host level. The management node traverses the child nodes of all "chassis" nodes: only chassis 1, that is, chassis1 in FIG. 3, has two different physical hosts, while every other chassis has only one host. The management node therefore determines that the two physical hosts host1 and host2 under chassis 1 are the physical resources that satisfy the isolation and affinity specified by the result. The management node will create two VMs on host1 and host2 respectively and provide them to the service application.
If the service application selects "same rack, different hosts", then, following the processing steps of the management node described above, the management node traverses the child nodes and descendant nodes of the "rack" nodes: only rack 1, that is, rack1 in FIG. 3, has three different hosts, while every other rack has only one host. The management node therefore determines that the three physical hosts host1, host2, and host3 under rack 1 are the physical resources that satisfy the isolation and affinity specified by the result. The management node may create the two VMs on host1 and host2, on host1 and host3, or on host2 and host3, and provide them to the service application. Whether to choose host1 and host2, host1 and host3, or host2 and host3 may be preset or may be determined according to factors such as load balancing.
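Continuing the illustrative sketch above (reusing its `Node` class and `root`), the snippet below shows one way the traversal described for "same chassis, different hosts" and "same rack, different hosts" could be expressed. `nodes_at_level` and `candidate_groups` are assumed helper names for this sketch, not functions defined by the embodiment.

```python
# Sketch: find groups of nodes that satisfy "affinity at level A,
# isolation at level B", where A is a wider level than B,
# e.g. A="chassis", B="host".
def nodes_at_level(node, level):
    # Collect all descendants of `node` that sit at the requested level.
    if node.level == level:
        return [node]
    found = []
    for child in node.children.values():
        found.extend(nodes_at_level(child, level))
    return found

def candidate_groups(root, affinity_level, isolation_level, needed=2):
    """Return groups of distinct isolation-level nodes that share one
    affinity-level ancestor and are numerous enough for `needed` VMs."""
    groups = []
    for anchor in nodes_at_level(root, affinity_level):
        members = nodes_at_level(anchor, isolation_level)
        if len(members) >= needed:
            groups.append((anchor.name, [m.name for m in members]))
    return groups

# "Same rack, different hosts" for two active/standby VMs on the tree
# built in the previous sketch:
print(candidate_groups(root, "rack", "host", needed=2))
# -> [('rack A', ['host A', 'host B'])]
```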
若业务应用选择的是 "同机架, 不同站点", 管理节点判断该业务应用选 择层级错误, 不做处理或将报错的通知反馈给业务应用。
若所述结果只指定亲和度对应的层级或隔离度对应的层级, 则管理节点确 定所述亲和度对应的层级在所述树形层级结构中的一个结点, 或确定所述隔离 度对应的层级在所述树形层级结构中的多个结点。 例如, 业务应用只指定 "同 机框", 则管理节点只需要选择一个机框, 如图 3中的 chassisl, 可以将不同 的 VM建立在图 3的 chassisl下的任一个物理主机 host上。 若业务应用只指 定 "不同主机", 则管理节点只需要满足将不同的 VM建立在不同的主机上, 如 图 3中的 hosl— host6中的任意多个 host。 S103、 在所述选择结果对应层级的物理资源上建立虚拟资源, 并将所述虚 拟资源提供给所述业务应用。
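As a hedged end-to-end illustration of S101 to S103, the sketch below chains the previous snippets (reusing `root` and `candidate_groups`). The function `create_vm` is a hypothetical placeholder for whatever mechanism the infrastructure management layer actually uses to instantiate a VM; it is not an interface defined by this embodiment.

```python
# End-to-end sketch of S101-S103 under the same assumptions as the
# earlier snippets; create_vm() is a hypothetical stand-in for real
# VM creation on a selected physical host.
def create_vm(host_name):
    return f"vm-on-{host_name}"

def allocate(root, selection, vm_count=2):
    """selection: (affinity_level, isolation_level), e.g. ('chassis', 'host')."""
    affinity_level, isolation_level = selection
    groups = candidate_groups(root, affinity_level, isolation_level, vm_count)
    if not groups:
        return None                      # report an error back to the application
    _, hosts = groups[0]                 # pick one group; load balancing could decide
    return [create_vm(h) for h in hosts[:vm_count]]

print(allocate(root, ("rack", "host")))  # -> ['vm-on-host A', 'vm-on-host B']
```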
With the resource allocation method provided by this embodiment of the present invention, the infrastructure management node sends options for describing resource allocation to the service application and, according to the selection result fed back by the service application, creates virtual resources on the physical resources corresponding to the result and provides them to the service application, which meets the deployment requirements of the service application and guarantees its QoS.

Embodiment 2:
FIG. 4 is a flowchart of this embodiment of the present invention. As shown in FIG. 4, the procedure of this embodiment is as follows:
S201. The infrastructure management node acquires the physical resource configuration.
In this embodiment, the infrastructure management node is a server responsible for scheduling physical resources and providing virtual resources to the application layer. The physical resources conform to the Advanced Telecom Computing Architecture (ATCA) standard and include hosts, chassis, racks, and so on, where a host includes a CPU, memory, disk storage, network interfaces, and the like. The virtual resources include VMs, VM clusters, virtual volumes, virtual networks, and so on; this embodiment takes VMs as an example.
The relative positional relationships of the physical resources are shown in FIG. 3. The "host-chassis-rack" of the ATCA standard corresponds to "Host_Chassis_Rack" in FIG. 3. "Site" refers to a site that contains a large number of physical devices, for example the data center equipment placed in one building. "Geographic" refers to geographic disaster recovery, that is, the geographic distance between two areas should meet the requirements of disaster recovery against natural disasters. From the smallest level to the largest, the levels are host, chassis, rack, site, and geographic disaster recovery: a chassis can contain multiple hosts, a rack can contain multiple chassis, a site contains multiple racks, and a geographic disaster recovery area contains multiple sites. Each of these five physically meaningful levels clearly corresponds to a certain degree of physical isolation and affinity. For example, if different VMs run on the same host, the affinity is the highest and the isolation the lowest; once the host fails, none of the VMs on it can run. If different VMs run on different hosts within the same chassis, the hosts are interconnected and communicate through the chassis and share its other components, such as the power supply, fans, and management module; the affinity between the VMs is lower, and the isolation higher, than when the VMs run on the same host. If one host fails, the VMs on the other hosts keep working; however, once the power supply or fans in the chassis fail, every host in the chassis fails. If different VMs are in different chassis but within the same rack, the affinity between the VMs is lower, and the isolation higher, than when the VMs are in the same chassis; if the power supply or a fan in one chassis has a problem, the other chassis in the rack keep working normally. By analogy, VMs in the same site but in different racks have lower affinity and higher isolation than VMs in the same rack but in different chassis. It should be noted that, in addition to dividing the physical resources into "host, chassis, rack, site, and geographic disaster recovery", the "host" may be further subdivided into the central processing unit (CPU), the memory, and the hard disk; a single host may contain several CPUs and memory and hard disks of certain specifications. Likewise, the "site" may be subdivided into two levels: the same local area network and across local area networks.
The infrastructure management node acquires the physical resource configuration. As shown in FIG. 2, the specific locations of the physical resources are already determined when the physical resource architecture is built, so the physical resource configuration may be entered manually, or may be acquired by the infrastructure management node in real time or periodically. One way to acquire it is to broadcast, in real time or periodically, physical resource configuration request information to every host at the levels managed by the node, and each host reports the information about its own resource configuration to the infrastructure management node. For example, host A belongs to chassis A, chassis A belongs to rack A, rack A belongs to site A, and site A belongs to geographic disaster recovery area A; the information reported by host A is then "host A - chassis A - rack A - site A - geographic disaster recovery A". Host B belongs to chassis B, and chassis B belongs to rack A; the information reported by host B is then "host B - chassis B - rack A - site A - geographic disaster recovery A".
After receiving the physical resource configuration information reported by the physical hosts, the infrastructure management node stores the physical resource configuration information of the different levels in a tree-shaped hierarchical structure. In the data structure, the tree includes a root node, parent nodes, sibling nodes, child nodes, descendant nodes, and so on; the highest level is defined as the root node, and the remaining levels are stored in the tree structure in descending order. In the example above, the root node is "geographic disaster recovery A", its child node is "site A", and the child node of site A is "rack A", so "rack A" is a descendant node of "geographic disaster recovery A" and "geographic disaster recovery A" is an ancestor node of "rack A". According to the information reported by hosts A and B in the example above, the physical resource configuration information stored by the infrastructure management node may take the form "(geographic disaster recovery A(site A(rack A(chassis A(host A, host B))))", where the comma indicates that host A and host B are at the same level, and the physical resource level outside a pair of parentheses is one level higher than the level inside it.
S202. The service application requests resources.
A service application is an application program; when running, it needs corresponding virtual machines (VMs) to provide resources for it. When the service application initializes for running, it requests resources from the infrastructure management node; the infrastructure management node is responsible for providing the corresponding physical resources, creating VMs on the provided physical resources, and providing the created VMs to the service application.
S203. Provide the options for describing resource allocation to the service application.
The infrastructure management node provides the options for describing resource allocation to the service application. Specifically, this may be done by extending an interface for providing the options and providing the options to the service application through that interface. The options take at least one of the following forms: "host_affinity", "host_isolation", "chassis_affinity", "chassis_isolation", "rack_affinity", "rack_isolation", "site_affinity", "site_isolation", "geographic_affinity", "geographic_isolation", and "unspecified". This embodiment of the present invention discloses only one form of the options; other option forms, such as "host-affinity", have the same meaning even though the form differs, that is, different options indicate that different levels, such as host, chassis, and rack, are provided, and such forms also fall within the protection scope of the embodiments of the present invention.
"Unspecified" indicates that no specific requirement is specified for where the VMs are created; "host_affinity" indicates that different VMs are created on the same host; "host_isolation" indicates that different VMs are created on different hosts; "chassis_affinity" indicates that different VMs are in the same chassis; "chassis_isolation" indicates that different VMs are in different chassis. By analogy, the other options indicate that the VMs are, or are not, within the level provided by the option. In descending order of affinity, the levels are host, chassis, rack, site, and geographic disaster recovery; in descending order of isolation, they are geographic disaster recovery, site, rack, chassis, and host.
若 "主机" 细分为 CPU、 内存和硬盘, 则该选项将添加如下几项: "CPU_ 亲和度,,、 " CPU—隔离度,,、 memory-亲和度,,、 " memory—隔离度,,、 harddisk- 亲和度"、 " harddisk_隔离度", 上述细分的几个选项用于替换掉原选项中的
"host_亲和度" 和 " host_隔离度" 。 "CPU_亲和度" 表示 VM使用同一个虚 拟 CPU, 虚拟 CPU是通过 CPU的虚拟化技术实现的, CPU的虚拟化技术可以单 CPU模拟多 CPU并行,允许一个平台同时运行多个操作系统; "CPU_隔离度 "表 示 VM分别使用不同的虚拟 CPU。若站点细分为局域网和跨局域网, 则可以添加
"局域网 -亲和度"、 " 局域网 -隔离度"、 "跨局域网 -亲和度"、 " 跨局域网_ 隔离度" 几项, 用于替换掉原选项中的 "site_亲和度"、 " site_隔离度"。
S204. The service application determines the selection result that meets its own resource requirements.
According to the options provided by the infrastructure management node, the service application determines the selection result that meets the resource requirements of its own operation and feeds the selection result back to the infrastructure management node. Specifically, the service application may receive and store the options provided by the infrastructure management node, determine one or more options according to its own QoS requirements, and send the chosen options back to the infrastructure management node. For example, to guarantee the QoS of its own operation, the service application needs two VMs running in active/standby mode, the two VMs interact frequently, and the communication bandwidth between them must not be lower than 100 Mbit/s; the performance and reliability requirements of the service application are therefore both relatively high, that is, both the affinity and the isolation are relatively high. Among the options provided by the infrastructure management node, both "host_affinity" and "chassis_affinity" satisfy this bandwidth requirement, but because the service application needs the two VMs to run in active/standby mode, the active and standby VMs cannot be created on the same physical host, so the only options that suit this service application are "chassis_affinity" and "host_isolation". If the service application has high performance requirements but low reliability requirements, it does not specify isolation and selects only the corresponding affinity option, for example "host_affinity", regardless of whether the hosts are in the same chassis, rack, or site. A service application such as a web page has low requirements on both reliability and performance, so it can select "unspecified", that is, specify neither affinity nor isolation, and the different VMs can be created regardless of whether they are on the same host, chassis, rack, or site.
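The following toy rule sketches how a service application might map QoS requirements to option choices; the 100 Mbit/s threshold mirrors the example above, while the function name and decision logic are assumptions for illustration only.

```python
# Toy illustration of turning QoS needs into option choices; everything
# except the 100 Mbit/s figure from the example above is an assumption.
def choose_options(active_standby, min_bandwidth_mbps, needs_reliability):
    options = []
    if min_bandwidth_mbps >= 100:
        # High bandwidth pushes toward high affinity; active/standby VMs
        # must still sit on different hosts, so stop at chassis affinity.
        options.append("chassis_affinity" if active_standby else "host_affinity")
    if needs_reliability or active_standby:
        options.append("host_isolation")
    return options or ["unspecified"]

print(choose_options(active_standby=True, min_bandwidth_mbps=100,
                     needs_reliability=True))
# -> ['chassis_affinity', 'host_isolation']
```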
S205. Select physical resources that match the selection result.
After receiving the result selected by the service application, the infrastructure management node selects the physical resources that match the selection result and creates the VMs on the selected physical resources.
For example, if the service application selects the options "chassis_affinity" and "host_isolation", then, according to the diagram of the relative positional relationships of the physical resources in FIG. 3, only host1 and host2 satisfy this condition, and the VMs are created on host1 and host2 respectively. For a selection result that specifies only "host_affinity" without specifying isolation, all of host1 to host6 satisfy the condition, that is, the different VMs can be created on any one of host1 to host6.
According to the fed-back result, the infrastructure management node determines the node, in the tree-shaped hierarchical structure, of the level corresponding to the result, where the node is the stored physical resource information of the level corresponding to the result.
If the result specifies isolation and affinity at different levels, and the level corresponding to the affinity is higher than the level corresponding to the isolation, the infrastructure management node determines the node, in the tree-shaped hierarchical structure, of the level corresponding to the affinity, and determines multiple child nodes or multiple descendant nodes, in the hierarchy under that node, of the level corresponding to the isolation; the multiple child nodes or descendant nodes are the physical resource information of the level corresponding to the result. For example, as shown in FIG. 3, the service application feeds back the selection result "same chassis, different hosts", that is, the two options "chassis_affinity" and "host_isolation". The infrastructure management node first determines the "chassis" level in the stored tree-shaped hierarchical structure; the only level below the chassis level is the host level. The infrastructure management node traverses the child nodes of all "chassis" nodes: only chassis 1, that is, chassis1 in FIG. 3, has two different physical hosts, while every other chassis has only one host. The infrastructure management node therefore determines that the two physical hosts host1 and host2 under chassis 1 are the physical resources that satisfy the isolation and affinity specified by the result. The infrastructure management node will create two VMs on host1 and host2 respectively and provide them to the service application.
If the service application selects "same rack, different hosts", that is, the two options "rack_affinity" and "host_isolation", then, following the processing steps of the infrastructure management node described above, the infrastructure management node traverses the child nodes and descendant nodes of the "rack" nodes: only rack 1, that is, rack1 in FIG. 3, has three different hosts, while every other rack has only one host. The infrastructure management node therefore determines that the three physical hosts host1, host2, and host3 under rack 1 are the physical resources that satisfy the isolation and affinity specified by the result. The infrastructure management node may create the two VMs on host1 and host2, on host1 and host3, or on host2 and host3, and provide them to the service application. Whether to choose host1 and host2, host1 and host3, or host2 and host3 may be preset or may be determined according to factors such as load balancing.
If the service application selects "same rack, different sites", the infrastructure management node determines that the service application has selected the levels incorrectly, and either does nothing or feeds an error notification back to the service application.
If the result specifies only the level corresponding to the affinity or only the level corresponding to the isolation, the infrastructure management node determines one node, in the tree-shaped hierarchical structure, of the level corresponding to the affinity, or determines multiple nodes, in the tree-shaped hierarchical structure, of the level corresponding to the isolation. For example, if the service application specifies only "same chassis", the infrastructure management node only needs to select one chassis, such as chassis1 in FIG. 3, and creates the different VMs on any physical host under chassis1 in FIG. 3. If the service application specifies only "different hosts", the infrastructure management node only needs to ensure that the different VMs are created on different hosts, for example on any number of the hosts host1 to host6 in FIG. 3.
S206. Allocate the resources to the service application.
The management node allocates the created VMs to the service application, completing the resource allocation operation.
With the resource allocation method provided by this embodiment of the present invention, the infrastructure management node sends options for describing resource allocation to the service application and, according to the selection result fed back by the service application, creates virtual resources on the physical resources corresponding to the result and provides them to the service application, which meets the deployment requirements of the service application and guarantees its QoS.

Embodiment 3:
FIG. 5 is a structural diagram of the infrastructure management node. As shown in FIG. 5, the infrastructure management node includes:
an option providing unit 301, configured to provide options for describing resource allocation to a service application, where the options correspond to at least two different levels of physical resources.
In this embodiment of the present invention, the physical resources include, but are not limited to, physical hosts, chassis, racks, sites, and geographic disaster recovery areas. A physical host includes a central processing unit (CPU), memory, and a hard disk; a site includes a local area network and cross-local-area-network scopes. In this embodiment of the present invention, virtual resources include, but are not limited to, virtual machines.
The relative positional relationships between physical resources can be summarized by FIG. 3. It should be noted that the division of physical resources in this embodiment of the present invention is not limited to that of FIG. 3; subdividing a host into a CPU and memory, subdividing a site into a local area network and cross-local-area-network scopes, or removing the site level also falls within the protection scope of the embodiments of the present invention. FIG. 3 is a tree-structured diagram of the relative positional relationships between physical resources. The root nodes of the two trees in FIG. 3 are "Geographic1" and "Geographic2", and the successor nodes of the root nodes are "Site", "Rack", "Chassis", and "Host", where "Site" is the predecessor node of "Rack", "Rack" is the predecessor node of "Chassis", and "Chassis" is the predecessor node of "Host". The "host-chassis-rack" of the ATCA standard corresponds to "Host_Chassis_Rack" in FIG. 3.
"Site" refers to a site that contains a large number of physical devices, for example the data center equipment placed in one building. "Geographic" refers to geographic disaster recovery, that is, the geographic distance between two areas should meet the requirements of disaster recovery against natural disasters. From the smallest level to the largest, the levels are host, chassis, rack, site, and geographic disaster recovery: a chassis can contain multiple hosts, a rack can contain multiple chassis, a site contains multiple racks, and a geographic disaster recovery area contains multiple sites. Each of these five physically meaningful levels clearly corresponds to a certain degree of physical isolation and affinity. For example, if different VMs run on the same host, the affinity is the highest and the isolation the lowest; once the host fails, none of the VMs on it can run. If different VMs run on different hosts within the same chassis, the hosts are interconnected and communicate through the chassis and share its other components, such as the power supply, fans, and management module; the affinity between the VMs is lower, and the isolation higher, than when the VMs run on the same host. If one host fails, the VMs on the other hosts keep working; however, once the power supply or fans in the chassis fail, every host in the chassis fails. If different VMs are in different chassis but within the same rack, the affinity between the VMs is lower, and the isolation higher, than when the VMs are in the same chassis; if the power supply or a fan in one chassis has a problem, the other chassis in the rack keep working normally. By analogy, VMs in the same site but in different racks have lower affinity and higher isolation than VMs in the same rack but in different chassis.
Before the option providing unit 301 provides the options for describing resource allocation to the service application, physical resource configuration request information also needs to be sent to one or more physical hosts within the management scope of the node. If there is only one physical host within the management scope of the node, the request message only needs to be sent to that physical host; if there are multiple physical hosts within the management scope, the sending unit of the infrastructure management node broadcasts the request information, in real time or periodically, to all physical hosts at the levels the node manages. Because the configuration of different physical hosts may change, for example being powered on or off at different times, or being moved to another chassis or rack, the acquiring unit of the infrastructure management node obtains the resource configuration information through real-time or periodic sending and updates the resource configuration it stores. After receiving the resource configuration request, a physical host sends the information about its own configuration at networking time to the receiving unit of the infrastructure management node. The receiving unit of the infrastructure management node receives the physical resource configuration information sent by the one or more physical hosts, where the physical resource configuration information includes the correspondences between the different levels. For example, host A belongs to chassis A, chassis A belongs to rack A, rack A belongs to site A, and site A belongs to geographic disaster recovery area A; the information reported by host A is then "host A - chassis A - rack A - site A - geographic disaster recovery A". Host B belongs to chassis B, and chassis B belongs to rack A; the information reported by host B is then "host B - chassis B - rack A - site A - geographic disaster recovery A".
After the physical resource configuration information reported by the physical hosts is received, the storage unit of the infrastructure management node stores the physical resource configuration information of the different levels in a tree-shaped hierarchical structure. In the data structure, the tree includes a root node, parent nodes, sibling nodes, child nodes, descendant nodes, and so on; the highest level is defined as the root node, and the remaining levels are stored in the tree structure in descending order. In the example above, the root node is "geographic disaster recovery A", its child node is "site A", and the child node of site A is "rack A", so "rack A" is a descendant node of "geographic disaster recovery A" and "geographic disaster recovery A" is an ancestor node of "rack A". According to the information reported by hosts A and B in the example above, the physical resource configuration information stored by the storage unit may take the form "(geographic disaster recovery A(site A(rack A(chassis A(host A, host B))))", where the comma indicates that host A and host B are at the same level, and the physical resource level outside a pair of parentheses is higher than the level inside it; for example, chassis A is one level higher than host A and host B, and rack A is one level higher than chassis A.
A determining unit 302 is configured to determine, according to the selection result of the options fed back by the service application, the physical resource of the level corresponding to the selection result.
The service application determines, according to the isolation and affinity requirements that can guarantee its own quality of service (QoS), one or more options that satisfy the isolation and affinity, and feeds the selection result of the options back to the determining unit 302 of the node. For example, to guarantee the QoS of its operation, the service application needs two VMs running in active/standby mode, the two VMs interact frequently, and the communication bandwidth between them must not be lower than 100 Mbit/s. The performance and reliability requirements of the service application are therefore both relatively high, that is, both the affinity and the isolation are relatively high: the two VMs must be created in the same chassis but on different physical hosts under that chassis, so that when one physical host fails the other keeps working, and the two physical hosts communicate over the bus inside the chassis, which is efficient and fast. The service application therefore feeds back to the determining unit 302 the selection result "same chassis, different hosts".
According to the fed-back result, the determining unit 302 determines the node, in the tree-shaped hierarchical structure, of the level corresponding to the result, where the node is the stored physical resource information of the level corresponding to the result.
If the result specifies isolation and affinity at different levels, and the level corresponding to the affinity is higher than the level corresponding to the isolation, the determining unit 302 determines the node, in the tree-shaped hierarchical structure, of the level corresponding to the affinity, and determines multiple child nodes or multiple descendant nodes, in the hierarchy under that node, of the level corresponding to the isolation; the multiple child nodes or descendant nodes are the physical resource information of the level corresponding to the result. For example, as shown in FIG. 3, the service application feeds back the selection result "same chassis, different hosts". The determining unit 302 first determines the "chassis" level in the stored tree-shaped hierarchical structure; the only level below the chassis level is the host level. The determining unit 302 traverses the child nodes of all "chassis" nodes: only chassis 1, that is, chassis1 in FIG. 3, has two different physical hosts, while every other chassis has only one host. The determining unit 302 therefore determines that the two physical hosts host1 and host2 under chassis 1 are the physical resources that satisfy the isolation and affinity specified by the result. The infrastructure management node then creates, through the providing unit, two VMs on host1 and host2 respectively and provides them to the service application.
If the service application selects "same rack, different hosts", then, following the processing steps described above, the infrastructure management node traverses the child nodes and descendant nodes of the "rack" nodes: only rack 1, that is, rack1 in FIG. 3, has three different hosts, while every other rack has only one host. The determining unit 302 therefore determines that the three physical hosts host1, host2, and host3 under rack 1 are the physical resources that satisfy the isolation and affinity specified by the result. The determining unit 302 may create the two VMs on host1 and host2, on host1 and host3, or on host2 and host3, for the resource providing unit to provide to the service application. Whether to choose host1 and host2, host1 and host3, or host2 and host3 may be preset or may be determined according to factors such as load balancing.
If the service application selects "same rack, different sites", the infrastructure management node determines, through a judging unit, that the service application has selected the levels incorrectly, and either does nothing or feeds an error notification back to the service application.
If the result specifies only the level corresponding to the affinity or only the level corresponding to the isolation, the determining unit 302 determines one node, in the tree-shaped hierarchical structure, of the level corresponding to the affinity, or determines multiple nodes, in the tree-shaped hierarchical structure, of the level corresponding to the isolation. For example, if the service application specifies only "same chassis", the determining unit 302 only needs to select one chassis, such as chassis1 in FIG. 3, and creates the different VMs on any physical host under chassis1 in FIG. 3. If the service application specifies only "different hosts", the determining unit 302 only needs to ensure that the different VMs are created on different hosts, for example on any number of the hosts host1 to host6 in FIG. 3.
A virtual resource providing unit 303 is configured to create virtual resources on the physical resources of the level corresponding to the selection result and provide the virtual resources to the service application.
FIG. 6 shows the hardware architecture of the infrastructure management node provided by another embodiment of the present invention, which includes at least one processor 401 (for example, a CPU), at least one network interface 402 or other communication interface, a memory 403, and at least one communication bus 404 used to implement connection and communication between these apparatuses. The processor 401 is configured to execute executable modules, for example computer programs, stored in the memory 403. The memory 403 may include a high-speed random access memory (RAM) and may also include a non-volatile memory, for example at least one disk memory. The communication connection between this system gateway and at least one other network element is implemented through the at least one network interface 402 (which may be wired or wireless), and the Internet, a wide area network, a local network, a metropolitan area network, or the like may be used.
In some implementations, the memory 403 stores program instructions that can be executed by the processor 401, where the program instructions include the option providing unit 301, the determining unit 302, and the virtual resource providing unit 303; for the specific implementation of each unit, refer to the corresponding units disclosed in FIG. 5, and details are not described again here.
With the infrastructure management node for allocating resources provided by this embodiment of the present invention, the option providing unit provides options for describing resource allocation to the service application, the determining unit determines the corresponding physical resources according to the selection result fed back by the service application, and the providing unit creates virtual resources on the physical resources corresponding to the result and provides them to the service application, which meets the deployment requirements of the service application and guarantees its QoS.
Through the description of the foregoing implementation manners, a person skilled in the art can clearly understand that the present invention may be implemented by hardware, by firmware, or by a combination thereof. When implemented by software, the foregoing functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. The computer-readable medium includes a computer storage medium and a communication medium, where the communication medium includes any medium that facilitates transfer of a computer program from one place to another. The storage medium may be any available medium accessible by a computer. By way of example and not limitation, the computer-readable medium may include a RAM, a ROM, an EEPROM, a CD-ROM or other optical disc storage, a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and can be accessed by a computer. In addition, any connection may appropriately be considered a computer-readable medium. For example, if the software is transmitted from a website, a server, or another remote source by using a coaxial cable, an optical fiber cable, a twisted pair, a digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, optical fiber cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium to which they belong. As used in the present invention, a disk and a disc include a compact disc (CD), a laser disc, an optical disc, a digital versatile disc (DVD), a floppy disk, and a Blu-ray disc, where a disk usually copies data magnetically and a disc copies data optically with a laser. Combinations of the above should also be included within the scope of computer-readable media.
In conclusion, the foregoing descriptions are merely preferred embodiments of the technical solutions of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims

1. A method for allocating resources, comprising:
providing, by an infrastructure management node, options for describing resource allocation to a service application, wherein the options correspond to at least two different levels of physical resources;
determining, according to a selection result of the options fed back by the service application, a physical resource of the level corresponding to the selection result; and
creating a virtual resource on the physical resource of the level corresponding to the selection result, and providing the virtual resource to the service application.
2. The method according to claim 1, wherein before the providing the options to the service application, the method further comprises:
sending physical resource configuration request information to physical hosts within the management scope of the infrastructure management node;
receiving physical resource configuration information sent by the physical hosts, wherein the physical resource configuration information comprises correspondences between physical resources at different levels; and
storing the physical resource configuration information in a tree-shaped hierarchical structure.
3. The method according to claim 2, wherein the determining a physical resource of the level corresponding to the result comprises:
determining a node, in the tree-shaped hierarchical structure, of the level corresponding to the result, wherein the node is stored information about the physical resource of the level corresponding to the result.
4. The method according to claim 3, wherein, if the result specifies isolation and affinity at different levels and the level corresponding to the affinity is higher than the level corresponding to the isolation, the determining a node of the level corresponding to the result in the tree-shaped hierarchical structure comprises:
determining a node, in the tree-shaped hierarchical structure, of the level corresponding to the affinity, and determining multiple child nodes or multiple descendant nodes, in the hierarchy under that node, of the level corresponding to the isolation, wherein the multiple child nodes or multiple descendant nodes are the physical resource information of the level corresponding to the result.
5. The method according to claim 3, wherein, if the result specifies only the level corresponding to the affinity or only the level corresponding to the isolation, the determining a node of the level corresponding to the result in the tree-shaped hierarchical structure comprises:
determining one node, in the tree-shaped hierarchical structure, of the level corresponding to the affinity, or determining multiple nodes, in the tree-shaped hierarchical structure, of the level corresponding to the isolation.
6. The method according to any one of claims 1 to 5, wherein the selection result of the options is determined by the service application according to the isolation and affinity requirements for guaranteeing its own quality of service (QoS) and according to the isolation and affinity of the levels corresponding to the options.
7. The method according to any one of claims 1 to 6, wherein, in ascending order of the isolation, the physical resources of the levels corresponding to the options are physical host, chassis, rack, site, and geographic disaster recovery, and, in ascending order of the affinity, the physical resources of the levels corresponding to the options are geographic disaster recovery, site, rack, chassis, and physical host.
8. The method according to claim 7, wherein the physical host comprises a central processing unit (CPU), a memory, and a hard disk;
or the site comprises a local area network and a cross-local-area-network scope;
or the physical host comprises a central processing unit (CPU), a memory, and a hard disk, and the site comprises a local area network and a cross-local-area-network scope.
9. An infrastructure management node for allocating resources, comprising:
an option providing unit, configured to provide options for describing resource allocation to a service application, wherein the options correspond to at least two different levels of physical resources;
a determining unit, configured to determine, according to a selection result of the options fed back by the service application, a physical resource of the level corresponding to the selection result; and
a virtual resource providing unit, configured to create a virtual resource on the physical resource of the level corresponding to the selection result and provide the virtual resource to the service application.
10. The node according to claim 9, further comprising:
a sending unit, configured to send physical resource configuration request information to physical hosts within the management scope of the infrastructure management node;
a receiving unit, configured to receive physical resource configuration information sent by the physical hosts, wherein the physical resource configuration information comprises correspondences between physical resources at different levels; and
a storage unit, configured to store the physical resource configuration information in a tree-shaped hierarchical structure.
11. The node according to claim 10, wherein the determining, by the determining unit, a physical resource of the level corresponding to the result comprises:
determining a node, in the tree-shaped hierarchical structure, of the level corresponding to the result, wherein the node is stored information about the physical resource of the level corresponding to the result.
12. The node according to claim 11, wherein, if the result specifies isolation and affinity at different levels and the level corresponding to the affinity is higher than the level corresponding to the isolation, the determining unit determines a node, in the tree-shaped hierarchical structure, of the level corresponding to the affinity, and determines multiple child nodes or multiple descendant nodes, in the hierarchy under that node, of the level corresponding to the isolation, wherein the multiple child nodes or multiple descendant nodes are the physical resource information of the level corresponding to the result.
13. The node according to claim 11, wherein, if the result specifies only the level corresponding to the affinity or only the level corresponding to the isolation, the determining, by the determining unit, a node of the level corresponding to the result in the tree-shaped hierarchical structure comprises:
determining one node, in the tree-shaped hierarchical structure, of the level corresponding to the affinity, or determining multiple nodes, in the tree-shaped hierarchical structure, of the level corresponding to the isolation.
14. The node according to any one of claims 9 to 13, wherein, in ascending order of the isolation, the physical resources of the levels corresponding to the options are physical host, chassis, rack, site, and geographic disaster recovery, and, in ascending order of the affinity, the physical resources of the levels corresponding to the options are geographic disaster recovery, site, rack, chassis, and physical host.
15. The node according to claim 14, wherein the physical host comprises a central processing unit (CPU), a memory, and a hard disk;
or the site comprises a local area network and a cross-local-area-network scope;
or the physical host comprises a central processing unit (CPU), a memory, and a hard disk, and the site comprises a local area network and a cross-local-area-network scope.
PCT/CN2013/079502 2012-08-14 2013-07-17 一种分配资源的方法及装置 WO2014026524A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/585,927 US9807028B2 (en) 2012-08-14 2014-12-30 Method and apparatus for allocating resources
US15/712,386 US10104010B2 (en) 2012-08-14 2017-09-22 Method and apparatus for allocating resources

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210288410.1A CN102857370B (zh) 2012-08-14 2012-08-14 一种分配资源的方法及装置
CN201210288410.1 2012-08-14

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/585,927 Continuation US9807028B2 (en) 2012-08-14 2014-12-30 Method and apparatus for allocating resources

Publications (1)

Publication Number Publication Date
WO2014026524A1 true WO2014026524A1 (zh) 2014-02-20

Family

ID=47403578

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/079502 WO2014026524A1 (zh) 2012-08-14 2013-07-17 一种分配资源的方法及装置

Country Status (3)

Country Link
US (2) US9807028B2 (zh)
CN (2) CN102857370B (zh)
WO (1) WO2014026524A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110650541A (zh) * 2019-09-02 2020-01-03 普联技术有限公司 一种ru子信道分配方法、装置、存储介质及网络设备
CN111344688A (zh) * 2017-11-09 2020-06-26 华为技术有限公司 云计算中资源提供的方法及系统
CN112235092A (zh) * 2016-09-30 2021-01-15 Oppo广东移动通信有限公司 传输信道状态信息的方法和装置

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102857370B (zh) * 2012-08-14 2016-05-25 华为技术有限公司 一种分配资源的方法及装置
CN103177334A (zh) * 2013-03-01 2013-06-26 深圳先进技术研究院 机器人资源调度方法和系统
CN104717251B (zh) * 2013-12-12 2018-02-09 中国科学院深圳先进技术研究院 OpenStack云计算管理平台Cell节点调度方法和系统
WO2015104811A1 (ja) * 2014-01-09 2015-07-16 株式会社日立製作所 計算機システム及び計算機システムの管理方法
US10055240B2 (en) 2014-09-23 2018-08-21 At&T Intellectual Property I, L.P. Service creation and management
CN106571935B (zh) * 2015-10-08 2020-01-17 阿里巴巴集团控股有限公司 一种资源调度的方法与设备
CN106095564A (zh) * 2016-05-26 2016-11-09 浪潮(北京)电子信息产业有限公司 一种资源分配方法及系统
CN107818013A (zh) * 2016-09-13 2018-03-20 华为技术有限公司 一种应用调度方法及装置
CN107919975B (zh) * 2016-10-09 2022-06-03 中兴通讯股份有限公司 一种业务资源分配方法和装置
CN108462658B (zh) * 2016-12-12 2022-01-11 阿里巴巴集团控股有限公司 对象分配方法及装置
US10579427B2 (en) * 2017-02-01 2020-03-03 Datera, Inc. Method and system for translating resource requirement of application into tangible infrastructural resources
CN108667864B (zh) * 2017-03-29 2020-07-28 华为技术有限公司 一种进行资源调度的方法和装置
CN109286513B (zh) * 2017-07-20 2021-11-19 华为技术有限公司 资源部署方法和装置
EP3673372A1 (en) * 2017-08-22 2020-07-01 Convida Wireless, LLC Overlay resource trees in a communications network
CN111417198B (zh) * 2019-01-07 2023-05-09 中国移动通信有限公司研究院 一种资源配置方法、网络侧设备及终端
US20220156125A1 (en) * 2019-04-02 2022-05-19 Telefonaktiebolaget Lm Ericsson (Publ) Technique for Simplifying Management of a Service in a Cloud Computing Environment
US10999159B2 (en) * 2019-04-04 2021-05-04 Cisco Technology, Inc. System and method of detecting application affinity using network telemetry
KR20200135684A (ko) 2019-05-24 2020-12-03 삼성전자주식회사 활성 영역과 반도체 층 사이의 배리어 층을 포함하는 반도체 소자
CN111683040B (zh) * 2020-04-21 2023-07-14 视联动力信息技术股份有限公司 一种网络隔离方法、装置、电子设备及存储介质
CN114780300B (zh) * 2022-06-20 2022-09-09 南京云信达科技有限公司 一种基于资源分层的备份系统权限管理方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060085668A1 (en) * 2004-10-15 2006-04-20 Emc Corporation Method and apparatus for configuring, monitoring and/or managing resource groups
CN101540771A (zh) * 2008-03-20 2009-09-23 Sap股份公司 利用隔离等级条款自动供给托管应用
CN102420850A (zh) * 2011-11-08 2012-04-18 东软集团股份有限公司 一种资源调度方法及系统
CN102419718A (zh) * 2011-10-28 2012-04-18 浪潮(北京)电子信息产业有限公司 资源调度方法
CN102857370A (zh) * 2012-08-14 2013-01-02 华为技术有限公司 一种分配资源的方法及装置

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6744767B1 (en) * 1999-12-30 2004-06-01 At&T Corp. Method and apparatus for provisioning and monitoring internet protocol quality of service
JP4434235B2 (ja) * 2007-06-05 2010-03-17 株式会社日立製作所 計算機システムまたは計算機システムの性能管理方法
US7711789B1 (en) * 2007-12-07 2010-05-04 3 Leaf Systems, Inc. Quality of service in virtual computing environments
US20090265707A1 (en) * 2008-04-21 2009-10-22 Microsoft Corporation Optimizing application performance on virtual machines automatically with end-user preferences
FI20086111A0 (fi) * 2008-11-21 2008-11-21 Nokia Corp Resurssien allokointi viestintäjärjestelmässä
US8385356B2 (en) * 2010-03-31 2013-02-26 International Business Machines Corporation Data frame forwarding using a multitiered distributed virtual bridge hierarchy
US8639793B2 (en) * 2010-10-29 2014-01-28 Cisco Technology, Inc. Disaster recovery and automatic relocation of cloud services
CN102143063B (zh) * 2010-12-29 2014-04-02 华为技术有限公司 集群系统中业务保护的方法和装置
US9285992B2 (en) * 2011-12-16 2016-03-15 Netapp, Inc. System and method for optimally creating storage objects in a storage system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060085668A1 (en) * 2004-10-15 2006-04-20 Emc Corporation Method and apparatus for configuring, monitoring and/or managing resource groups
CN101540771A (zh) * 2008-03-20 2009-09-23 Sap股份公司 利用隔离等级条款自动供给托管应用
CN102419718A (zh) * 2011-10-28 2012-04-18 浪潮(北京)电子信息产业有限公司 资源调度方法
CN102420850A (zh) * 2011-11-08 2012-04-18 东软集团股份有限公司 一种资源调度方法及系统
CN102857370A (zh) * 2012-08-14 2013-01-02 华为技术有限公司 一种分配资源的方法及装置

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112235092A (zh) * 2016-09-30 2021-01-15 Oppo广东移动通信有限公司 传输信道状态信息的方法和装置
CN112235092B (zh) * 2016-09-30 2023-09-26 Oppo广东移动通信有限公司 传输信道状态信息的方法和装置
CN111344688A (zh) * 2017-11-09 2020-06-26 华为技术有限公司 云计算中资源提供的方法及系统
CN111344688B (zh) * 2017-11-09 2022-12-06 华为技术有限公司 云计算中资源提供的方法及系统
CN110650541A (zh) * 2019-09-02 2020-01-03 普联技术有限公司 一种ru子信道分配方法、装置、存储介质及网络设备
CN110650541B (zh) * 2019-09-02 2022-05-06 普联技术有限公司 一种ru子信道分配方法、装置、存储介质及网络设备

Also Published As

Publication number Publication date
CN102857370A (zh) 2013-01-02
US10104010B2 (en) 2018-10-16
CN105939290B (zh) 2019-07-09
US20150113149A1 (en) 2015-04-23
CN102857370B (zh) 2016-05-25
CN105939290A (zh) 2016-09-14
US9807028B2 (en) 2017-10-31
US20180026909A1 (en) 2018-01-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13829469; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 13829469; Country of ref document: EP; Kind code of ref document: A1)