WO2014183574A1 - Computing node deployment method, processing node, controller and system
- Publication number
- WO2014183574A1 (PCT/CN2014/076828)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- deployed
- deployment
- node
- computing
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/50—Network service management, e.g. ensuring proper service fulfilment according to agreements
- H04L41/508—Network service management, e.g. ensuring proper service fulfilment according to agreements based on type of value added network service under agreement
- H04L41/5096—Network service management, e.g. ensuring proper service fulfilment according to agreements based on type of value added network service under agreement wherein the managed service relates to distributed or central networked applications
Description
- The embodiments of the present invention relate to the field of communications, and in particular, to a computing node deployment method, a processing node, a controller, and a system. Background
- VM: virtual machine
- CP: content provider
- DC: data center
- When a virtual machine is deployed, it is determined whether the remaining resources of a server in the DC, such as memory and central processing unit (CPU) capacity, satisfy the computing requirement, and therefore whether the virtual machine can be deployed on that server. For example, if a virtual machine deployment request asks for two CPUs, 1024 MB of memory, and 50 hard-disk reads/writes, one server is selected from the resource pool and its remaining resources are checked: whether at least two CPUs, at least 1024 MB of memory, and at least 50 hard-disk reads/writes are available. If any item does not meet the requirement, the server is marked as unsuitable for this selection and a new server is selected from the resource pool.
- Embodiments of the present invention provide a computing node deployment method, a processing node, a controller, and a system.
- The controller provides a reasonable deployment plan according to link information and traffic information, and the processing node can deploy the computing nodes to be deployed according to the deployment plan, so as to reduce communication traffic between data centers and improve communication quality.
- In a first aspect, an embodiment of the present invention provides a method for deploying a computing node, including:
- receiving a deployment suggestion request message sent by a processing node, where the deployment suggestion request message carries description information of a computing node to be deployed;
- determining a deployment plan according to link information and/or traffic information, where the link information includes link information between data centers managed by the processing node, and/or link information between each data center managed by the processing node and each data center not managed by the processing node;
- the traffic information is traffic information between the computing node to be deployed and a computing node related to the computing node to be deployed, where the computing node related to the computing node to be deployed is a computing node having communication requirements with the computing node to be deployed;
- sending a deployment suggestion response message including the deployment plan to the processing node.
- the deployment suggestion request message further carries deployment requirement information of the computing node to be deployed;
- the determining a deployment plan according to the link information and/or the traffic information includes: determining, according to the link information and/or the traffic information, a deployment plan that satisfies the deployment requirement information.
- the deployment requirement information includes:
- relative location information between the computing nodes to be deployed, relative location information between the computing node to be deployed and a deployed computing node, communication quality requirement information between the computing nodes to be deployed, communication quality requirement information between the computing node to be deployed and a deployed computing node, and total cross-data-center communication traffic requirement information of the computing node to be deployed, or a combination thereof.
- the description information includes the identifier information of the computing node to be deployed, the quantity information of the computing nodes to be deployed, or the tenant identifier to which the computing node to be deployed belongs.
- With reference to the first aspect or the first, the second, or the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the receiving a deployment suggestion request message sent by the processing node, where the deployment suggestion request message carries the description information of the computing node to be deployed, includes: receiving, by using a proxy, the deployment suggestion request message sent by the processing node;
- the computing node to be deployed includes a newly added computing node or a deployed computing node.
- In a second aspect, an embodiment of the present invention provides a method for deploying a computing node, including:
- receiving a deployment information request message sent by a processing node, where the deployment information request message carries description information of a computing node to be deployed;
- acquiring link information and/or traffic information, where the link information includes link information between data centers managed by the processing node, and/or link information between each data center managed by the processing node and each data center not managed by the processing node; the traffic information is traffic information between the computing node to be deployed and a computing node related to the computing node to be deployed, where
- the computing node related to the computing node to be deployed is a computing node that has communication requirements with the computing node to be deployed;
- a deployment information response message containing the link information and/or traffic information is sent to the processing node.
- the description information includes the identifier information of the computing node to be deployed, the quantity information of the computing node to be deployed, or the tenant identification information to which the computing node to be deployed belongs.
- the receiving a deployment information request message sent by the processing node, where the deployment information request message carries the description information of the computing node to be deployed, includes: receiving, by using a proxy, the deployment information request message sent by the processing node;
- the computing node to be deployed includes a newly added computing node or a deployed computing node.
- In a third aspect, an embodiment of the present invention provides a method for deploying a computing node, including:
- sending a deployment suggestion request message to a controller, where the deployment suggestion request message carries description information of a computing node to be deployed, so that the controller determines a deployment plan according to link information and/or traffic information;
- the link information includes link information between data centers managed by the processing node, and/or link information between each data center managed by the processing node and each data center not managed by the processing node;
- the traffic information is traffic information between the computing node to be deployed and a computing node related to the computing node to be deployed, where the computing node related to the computing node to be deployed is a computing node that has communication requirements with the computing node to be deployed; and
- receiving a deployment suggestion response message that is sent by the controller and includes the deployment plan.
- the deployment suggestion request message sent to the controller further carries deployment requirement information of the computing node to be deployed, so that the controller determines, according to the link information and/or the traffic information, a deployment plan that satisfies the deployment requirement information.
- the deployment requirement information includes: relative location information between the computing nodes to be deployed, relative location information between the computing node to be deployed and a deployed computing node, communication quality requirement information between the computing nodes to be deployed, communication quality requirement information between the computing node to be deployed and a deployed computing node, and total cross-data-center communication traffic requirement information of the computing node to be deployed, or a combination thereof.
- the description information includes the identifier information of the computing node to be deployed, the quantity information of the computing nodes to be deployed, or the tenant identifier to which the computing node to be deployed belongs.
- the sending a deployment suggestion request message to the controller includes: sending the deployment suggestion request message to the controller by using a proxy;
- the computing node to be deployed includes a newly added computing node or a deployed computing node.
- In a fourth aspect, an embodiment of the present invention provides a method for deploying a computing node, including:
- sending a deployment information request message to a controller, where the deployment information request message carries description information of a computing node to be deployed, so that the controller acquires link information and/or traffic information, where the link information includes link information between data centers managed by the processing node, and/or link information between each data center managed by the processing node and each data center not managed by the processing node;
- the traffic information is traffic information between the computing node to be deployed and a computing node related to the computing node to be deployed, where the computing node related to the computing node to be deployed is a computing node that has communication requirements with the computing node to be deployed;
- receiving a deployment information response message that is sent by the controller and contains the link information and/or traffic information; and determining a deployment plan according to the link information and/or traffic information.
- the description information includes the identifier information of the computing node to be deployed, the quantity information of the computing node to be deployed, or the tenant identification information to which the computing node to be deployed belongs.
- the sending a deployment information request message to the controller includes: sending, by using a proxy, the deployment information request message to the controller;
- the receiving a response message that is sent by the controller and includes the link information and/or traffic information includes: receiving, by using the proxy, a deployment information response message that is sent by the controller and includes the link information and/or traffic information.
- the computing node to be deployed includes a newly added computing node or a deployed computing node.
- In a fifth aspect, an embodiment of the present invention provides a controller, including: a receiving module, configured to receive a deployment suggestion request message sent by a processing node, where the deployment suggestion request message carries description information of a computing node to be deployed;
- a determining module, configured to determine a deployment plan according to link information and/or traffic information, where the link information includes link information between data centers managed by the processing node, and/or link information between each data center managed by the processing node and each data center not managed by the processing node; the traffic information is traffic information between the computing node to be deployed and a computing node related to the computing node to be deployed, where the computing node related to the computing node to be deployed is a computing node having communication requirements with the computing node to be deployed;
- a sending module configured to send, to the processing node, a deployment suggestion response message that includes the deployment scenario.
- the deployment suggestion request message received by the receiving module further carries the deployment requirement information of the computing node to be deployed;
- the determining module is further configured to determine, according to the link information and/or the traffic information, a deployment plan that meets the deployment requirement information.
- the deployment requirement information includes:
- relative location information between the computing nodes to be deployed, relative location information between the computing node to be deployed and a deployed computing node, communication quality requirement information between the computing nodes to be deployed, communication quality requirement information between the computing node to be deployed and a deployed computing node, and total cross-data-center communication traffic requirement information of the computing node to be deployed, or a combination thereof.
- the description information includes the identifier information of the computing node to be deployed, the quantity information of the computing nodes to be deployed, or the tenant identifier to which the computing node to be deployed belongs.
- the receiving module is further configured to receive, by using a proxy, the deployment suggestion request message sent by the processing node;
- the sending module is further configured to send, by the proxy, a deployment suggestion response message including the deployment scenario to the processing node.
- the to-be-deployed computing node includes a newly added computing node or a deployed computing node.
- In a sixth aspect, an embodiment of the present invention provides a controller, including:
- a receiving module configured to receive a deployment information request message sent by the processing node, where the deployment information request message carries description information of the computing node to be deployed;
- an obtaining module, configured to acquire link information and/or traffic information, where the link information includes link information between data centers managed by the processing node, and/or link information between each data center managed by the processing node and each data center not managed by the processing node;
- the traffic information is traffic information between the computing node to be deployed and a computing node related to the computing node to be deployed,
- where the computing node related to the computing node to be deployed is a computing node that has communication requirements with the computing node to be deployed;
- a sending module, configured to send, to the processing node, a deployment information response message that includes the link information and/or traffic information.
- the description information includes the identifier information of the computing node to be deployed, the quantity information of the computing node to be deployed, or the tenant identification information to which the computing node to be deployed belongs.
- the receiving module is further configured to receive, by using a proxy, the deployment information request message sent by the processing node, where the deployment information request message carries description information of the computing node to be deployed;
- the sending module is further configured to send, by using the proxy, a deployment information response message including the link information and/or traffic information to the processing node, so that the processing node determines the deployment plan according to the link information and/or traffic information.
- the computing node to be deployed includes a newly added computing node or a deployed computing node.
- In a seventh aspect, an embodiment of the present invention provides a processing node, including:
- a sending module configured to send a deployment suggestion request message to the controller, where the deployment suggestion request message carries description information of the computing node to be deployed, so that the controller determines a deployment plan according to the link information and/or the traffic information, where
- the link information includes link information between data centers managed by the processing node, and/or link information between each data center managed by the processing node and each data center not managed by the processing node; the traffic information is traffic information between the computing node to be deployed and a computing node related to the computing node to be deployed, where the computing node related to the computing node to be deployed is a computing node having communication requirements with the computing node to be deployed;
- a receiving module configured to receive a deployment suggestion response message that is sent by the controller and includes the deployment solution.
- the sending module is further configured to send, to the controller, the deployment suggestion request message that carries the deployment requirement information of the to-be-deployed computing node, so that The controller determines a deployment plan that satisfies the deployment requirement information according to link information and/or traffic information.
- the deployment requirement information includes: relative location information between the computing nodes to be deployed, relative location information between the computing node to be deployed and a deployed computing node, communication quality requirement information between the computing nodes to be deployed, communication quality requirement information between the computing node to be deployed and a deployed computing node, and total cross-data-center communication traffic requirement information of the computing node to be deployed, or a combination thereof.
- the description information includes the identifier information of the computing node to be deployed, the quantity information of the computing nodes to be deployed, or the tenant identifier to which the computing node to be deployed belongs.
- the sending module is further configured to send, by using a proxy, the deployment suggestion request message to the controller;
- the receiving module is further configured to receive, by the proxy, a deployment suggestion response message sent by the controller that includes the deployment scenario.
- the computing node to be deployed includes a newly added computing node or a deployed computing node.
- In an eighth aspect, an embodiment of the present invention provides a processing node, including: a sending module, configured to send a deployment information request message to a controller, where the deployment information request message carries description information of a computing node to be deployed, so that the controller acquires link information and/or traffic information;
- the link information includes link information between data centers managed by the processing node, and/or link information between each data center managed by the processing node and each data center not managed by the processing node;
- the traffic information is traffic information between the computing node to be deployed and a computing node related to the computing node to be deployed, where the computing node related to the computing node to be deployed is a computing node having communication requirements with the computing node to be deployed;
- a receiving module configured to receive a deployment information response message that is sent by the controller and includes the link information and/or traffic information
- a determining module configured to determine a deployment scenario according to the link information and/or the traffic information.
- the description information includes the node identifier information of the computing node to be deployed, the quantity information of the computing nodes to be deployed, the tenant identification information to which the computing node to be deployed belongs, or the feature information to which the computing node to be deployed belongs.
- the sending module is further configured to send a deployment information request message to the controller by using a proxy;
- the receiving module is further configured to receive, by the proxy, a deployment information response message that is sent by the controller and includes the link information and/or traffic information.
- the computing node to be deployed includes a newly added computing node or a deployed computing node.
- In a ninth aspect, an embodiment of the present invention provides a service system, including the controller according to the fifth aspect and the processing node according to the seventh aspect.
- In a tenth aspect, an embodiment of the present invention provides a service system, including the controller according to the sixth aspect and the processing node according to the eighth aspect.
- According to the computing node deployment method, the processing node, the controller, and the system provided by the embodiments of the present invention, the controller can determine a deployment plan according to the link information between the data centers in the service system and the traffic information of the computing node to be deployed, so that the processing node can deploy a new computing node according to the deployment plan.
- For a deployed computing node, a deployment plan giving a redeployment location can be provided, so that the processing node can adjust the position of the deployed computing node according to the deployment plan, and the communication traffic between data centers can be transformed into traffic within a data center, thereby improving the communication quality between computing nodes and reducing the communication traffic between data centers.
- FIG. 1 is a schematic diagram of a first service system architecture applicable to a method for deploying a computing node according to the present invention
- FIG. 2 is a flowchart of Embodiment 1 of a method for deploying a computing node according to the present invention
- FIG. 3 is a flowchart of Embodiment 2 of a method for deploying a computing node according to the present invention
- FIG. 4 is a flowchart of Embodiment 3 of a method for deploying a computing node according to the present invention
- FIG. 5 is a flowchart of Embodiment 4 of a method for deploying a computing node according to the present invention
- FIG. 6 is a schematic diagram of a second service system architecture applicable to a method for deploying a computing node according to the present invention
- FIG. 7 is a schematic structural diagram of Embodiment 1 of a controller according to the present invention.
- FIG. 8 is a schematic structural diagram of a second embodiment of a controller according to the present invention.
- FIG. 9 is a schematic structural diagram of Embodiment 1 of a processing node according to the present invention.
- FIG. 10 is a schematic structural diagram of Embodiment 2 of a processing node according to the present invention.
- FIG. 11 is a schematic structural diagram of Embodiment 3 of a controller according to the present invention.
- FIG. 12 is a schematic structural diagram of Embodiment 4 of a controller according to the present invention.
- FIG. 13 is a schematic structural diagram of Embodiment 3 of a processing node according to the present invention.
- FIG. 14 is a schematic structural diagram of Embodiment 4 of a processing node according to the present invention. Detailed description of embodiments
- FIG. 1 is a schematic diagram of a first service system architecture applicable to a method for deploying a computing node according to the present invention.
- the processing node manages data centers DC-A, DC-B, and DC-C.
- the data center DC-D is not managed by the processing node;
- communication exists among the data centers DC-A, DC-B, and DC-C, and between the data centers DC-B and DC-D.
- computing nodes can be deployed or are already deployed in each data center (not shown).
- the computing node refers to various computing modules capable of data processing, such as a virtual machine (Virtual Machine, hereinafter referred to as VM), a computing container (Linux Container, hereinafter referred to as LXC), and a physical server.
- the computing container is, for example, a virtual execution environment for a process at the operating system level; it has a specific proportion of CPU allocation time and input/output (Input Output, hereinafter referred to as IO), limits the memory size that can be used, and so on.
- the processing node may be implemented by hardware or software.
- it may include a hardware-implemented computing node management center or a third-party application that may be implemented by software, a tenant, etc., and the present invention is not limited thereto.
- when the processing node is specifically a computing node management center, it is also responsible for the management of each computing node in the service system; as shown by the dotted line in the figure, it can perform unified management of the computing nodes of each data center, or manage the computing nodes as required.
- the processing node can be divided into sub-management centers, for example, divided into virtual machine management sub-centers, container management sub-centers, etc.
- the virtual machine management sub-center is responsible for virtual machine management, such as virtual machine creation, startup, deletion, freezing, and migration;
- the container management sub-center is responsible for the management of the calculation container, such as the creation, startup, deletion, freezing, and migration of the calculation container.
- the processing node completes management of the computing node by interacting with servers of each data center.
- the controller and the processing node may communicate based on the Hyper Text Transport Protocol (HTTP) or the Transmission Control Protocol (TCP), or may also communicate based on the User Datagram Protocol (UDP).
- through the connection with the processing node, the controller receives the deployment suggestion request message sent by the processing node, determines the deployment plan of the computing node to be deployed according to the link information and/or the traffic information, and sends the deployment plan to the processing node in a deployment suggestion response message.
- alternatively, the controller receives the deployment information request message sent by the processing node through the connection with the processing node, acquires the link information and/or the traffic information, and carries the obtained link information and/or traffic information in a deployment information response message sent to the processing node, so that the processing node determines the deployment plan of the computing node to be deployed.
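- As an illustration only (the patent does not prescribe a concrete wire format for these messages), the exchange could be encoded as JSON carried over HTTP or TCP; the field names below (description, deployment_requirements, deployment_plan) are assumptions, not part of the patent. A minimal sketch in Python:

```python
# Illustrative sketch only: the patent does not define a concrete message format.
# Field names (description, deployment_requirements, deployment_plan) are assumptions.
import json

def build_deployment_suggestion_request(node_ids, tenant_id=None, requirements=None):
    """Build a deployment suggestion request message as the processing node might send it."""
    return {
        "type": "deployment_suggestion_request",
        "description": {               # description information of the nodes to be deployed
            "node_ids": node_ids,      # e.g. ["VM-a", "VM-b"]; could instead be a count or tenant id
            "tenant_id": tenant_id,
        },
        "deployment_requirements": requirements or {},  # optional constraints (location, QoS, traffic)
    }

def build_deployment_suggestion_response(plan):
    """Build the controller's response carrying the determined deployment plan."""
    return {"type": "deployment_suggestion_response", "deployment_plan": plan}

if __name__ == "__main__":
    req = build_deployment_suggestion_request(["VM-1", "VM-2"],
                                              requirements={"different_data_centers": True})
    resp = build_deployment_suggestion_response({"VM-1": "DC-A", "VM-2": "DC-B"})
    print(json.dumps(req, indent=2))
    print(json.dumps(resp, indent=2))
```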
- the link information includes: link information between data centers managed by the processing node, and/or link information between each data center managed by the processing node and each data center not managed by the processing node.
- the traffic information is traffic information between the computing node to be deployed and the computing node related to the computing node to be deployed.
- the computing node related to the computing node to be deployed is a computing node that has communication requirements with the computing node to be deployed.
- Referring to FIG. 1, the data centers managed by the processing node are DC-A, DC-B, and DC-C.
- the controller determines the deployment plan according to the link information between DC-A, DC-B, and DC-C
- the deployment plan may be determined according to the link information between DC-A, DC-B, DC-C, and DC-D.
- the traffic information between the computing nodes to be deployed may also be considered when determining the deployment plan. For example, for a new computing node, a preferred deployment location may be determined according to link information, traffic information, or deployment requirement information. For a deployed computing node in the service system, a suggestion of a redeployment location is given based on the traffic information between computing nodes, the link information between data centers, the deployment requirement information, and so on, or a traffic list or traffic matrix giving the traffic information of the deployed computing node is provided.
- the link information refers to the link state information of each data center in the service system, including the total bandwidth of the link, idle bandwidth, delay, jitter, and packet loss rate.
- the links between data centers are bidirectional, and the states of the two directions may differ.
- the link information is not static, but changes over time.
- the link information can be collected in real time, in an event-triggered manner, or periodically by the controller or another application, and is stored in a database to form a link information database.
- Table 1 is a link information table obtained by collecting link information between the data centers DC-A, DC-B, and DC-C in Fig. 1.
- L-AB indicates the total bandwidth, remaining bandwidth, delay, jitter, packet loss rate, etc. when DC-A sends data to DC-B.
- Table 1 shows that L-AB and L-BA are different.
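- For illustration, a directed link record such as L-AB and the link information database could be represented as sketched below; the field names and numeric values are assumptions chosen only to mirror the metrics listed above.

```python
# Illustrative sketch: one possible in-memory form of the link information database.
# Field names and values are assumptions; the patent only lists the metrics themselves.
from dataclasses import dataclass

@dataclass
class LinkInfo:
    total_bandwidth_gbps: float       # total bandwidth of the link
    remaining_bandwidth_gbps: float   # idle (remaining) bandwidth
    delay_ms: float
    jitter_ms: float
    loss_rate: float                  # packet loss rate, 0.0 .. 1.0

# Directed links: L-AB (DC-A -> DC-B) and L-BA (DC-B -> DC-A) may differ.
link_table = {
    ("DC-A", "DC-B"): LinkInfo(10.0, 5.0, 10.0, 20.0, 0.001),   # L-AB
    ("DC-B", "DC-A"): LinkInfo(10.0, 5.0, 12.0, 25.0, 0.002),   # L-BA
}
```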
- the traffic information of a computing node refers to the traffic state information of the communication between computing nodes, including the total communication traffic, the average bandwidth, the burst duration, and the burst bandwidth over a certain time period.
- the traffic information between the computing nodes is not static, but changes with time.
- the traffic information can be collected in real time, in an event-triggered manner, or periodically by the controller or another application, and the collected traffic information is stored in a database to form a traffic information database.
- taking a virtual machine as the computing node as an example, refer to FIG. 1; assume that two virtual machines, VM-1 and VM-2, have been deployed in the data center DC-A, and two virtual machines, VM-3 and VM-4, have been deployed in DC-B.
- T12 can represent information such as total communication traffic, average bandwidth, burst time, and burst bandwidth between VM-1 and VM-2.
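- A sketch of a traffic information database keyed by node pairs is shown below; the field names and values are illustrative assumptions (the record for ("VM-1", "VM-2") plays the role of T12 above).

```python
# Illustrative sketch: a traffic information database keyed by computing-node pairs.
# The record fields mirror the metrics named above; concrete names/values are assumptions.
traffic_table = {
    ("VM-1", "VM-2"): {"total_traffic_mb": 500, "avg_bandwidth_mbps": 2,
                       "burst_duration_s": 30, "burst_bandwidth_mbps": 20},   # T12
    ("VM-1", "VM-3"): {"total_traffic_mb": 1200, "avg_bandwidth_mbps": 8,
                       "burst_duration_s": 10, "burst_bandwidth_mbps": 50},
}

def total_traffic(node_a, node_b):
    """Look up total communication traffic between two computing nodes (0 if never observed)."""
    rec = traffic_table.get((node_a, node_b)) or traffic_table.get((node_b, node_a))
    return rec["total_traffic_mb"] if rec else 0
```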
- in FIG. 1, the present invention is described by taking the case where the processing node and the controller are deployed independently as an example.
- However, the present invention is not limited thereto.
- The processing node and the controller may be deployed, by software or hardware, on the same server in the service system, or may be deployed on separate servers.
- FIG. 2 is a flowchart of Embodiment 1 of a method for deploying a computing node according to the present invention.
- This embodiment can be applied to a scenario in which the computing nodes in a service system do not meet the service requirements and a new computing node needs to be deployed, or a scenario in which the deployment position of a computing node in the service system needs to be adjusted.
- The embodiment of the present invention is described in detail below. Specifically, the embodiment includes the following steps:
- For example, a content provider (Content Provider, hereinafter referred to as CP) may initially require only a small number of computing nodes; as its services develop, the number of computing nodes required by the CP correspondingly increases, or the deployed computing nodes need position adjustment.
- the processing node sends a deployment suggestion request message to the controller, and the controller receives the deployment suggestion request message, which carries the description information of the computing node to be deployed.
- the description information includes the identification information of the computing node to be deployed, the number of computing nodes to be deployed, or the tenant identifier to which the computing node to be deployed belongs; the tenant identifier is, for example, the name of the tenant or the identity (IDentity, hereinafter referred to as ID) number corresponding to the name, such as Tencent, Baidu, 3456123, or 7890123 (3456123 and 7890123 being the digital IDs corresponding to Tencent and Baidu under the controller).
- when the tenant ID is used as the description information of the computing node to be deployed, it refers to all computing nodes of that tenant; the identification information may also be the identifiers of one or more computing nodes, such as {VM-a} representing a single virtual machine a, or {VM-a, VM-b, VM-c} representing a set of virtual machines a, b, and c;
- any type of information that can identify the computing node to be deployed can be used as its identification information, for example, an IP address whose first 24 bits are 192.168.1.
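- A sketch of how such description information might be encoded is given below; all field names are illustrative assumptions.

```python
# Illustrative only: possible encodings of the description information of nodes to be deployed.
description_by_ids = {"node_ids": ["VM-a", "VM-b", "VM-c"]}      # explicit identifiers
description_by_count = {"count": 2}                              # "create two new computing nodes"
description_by_tenant = {"tenant_id": "3456123"}                 # all nodes of one tenant
description_by_subnet = {"subnet": "192.168.1.0/24"}             # nodes identified by an IP prefix
```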
- the link information includes link information between data centers managed by the processing node, and/or link information between each data center managed by the processing node and each data center not managed by the processing node; the traffic information is traffic information between the computing node to be deployed and the computing node related to the computing node to be deployed.
- the computing node related to the computing node to be deployed is a computing node that has communication requirements with the computing node to be deployed.
- after receiving the deployment suggestion request message sent by the processing node, the controller determines the deployment plan of the computing node to be deployed according to at least one of the link information between the data centers managed by the processing node, the link information between each data center managed by the processing node and each data center not managed by the processing node, and the traffic information between the computing node to be deployed and the computing nodes related to it. For example, for a new computing node, a preferred deployment location can be determined; for a deployed computing node in the service system, a suggestion for a redeployment location is given based on the traffic information between computing nodes, the link information, and the like.
- the deployment suggestion request message may also carry the deployment requirement information of the computing node to be deployed, and the controller determines the deployment plan that meets the deployment requirement information according to the link information and/or the traffic information. Specifically, if no computing node is deployed on each data center in the service system, only the link information between the data centers managed by the processing node may be considered, and the deployment plan that meets the deployment requirement information may be determined according to the link information.
- the deployment requirement information can be the relative deployment positions between the computing nodes to be deployed or between a computing node to be deployed and a deployed computing node, a communication quality requirement, or a constraint on the total cross-data-center communication traffic of the computing nodes to be deployed.
- the deployment suggestion response message including the deployment scenario is sent to the processing node, so that the processing node deploys the to-be-deployed computing node according to the deployment scenario.
- the processing node may, according to the deployment suggestion response message, deploy the computing node to be deployed in a data center with a large remaining bandwidth, a small delay, and a low packet loss rate; or deploy two computing nodes that have a large amount of network communication with each other in the same data center.
- the deployment plan may be determined by the controller according to at least one of the link information, the traffic information between the computing node to be deployed and the computing nodes related to it, the deployment requirement information, and the like; alternatively, the processing node may choose not to adopt the deployment plan returned in the deployment suggestion response message.
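- For illustration, one way a processing node (or the controller) could rank candidate data centers on these link metrics is sketched below; the scoring weights are arbitrary assumptions and not part of the patent.

```python
# Illustrative sketch: rank candidate data centers by the link metrics mentioned above.
# The scoring weights are arbitrary assumptions; the patent does not prescribe a formula.
def score_data_center(links_to_peers):
    """links_to_peers: list of dicts with remaining_bandwidth_gbps, delay_ms, loss_rate."""
    remaining = sum(l["remaining_bandwidth_gbps"] for l in links_to_peers)
    avg_delay = sum(l["delay_ms"] for l in links_to_peers) / len(links_to_peers)
    avg_loss = sum(l["loss_rate"] for l in links_to_peers) / len(links_to_peers)
    # Prefer more spare bandwidth, penalize delay and loss.
    return remaining - 0.1 * avg_delay - 100.0 * avg_loss

def pick_data_center(candidates):
    """candidates: {dc_name: [link records to/from the other data centers]}."""
    return max(candidates, key=lambda dc: score_data_center(candidates[dc]))

if __name__ == "__main__":
    candidates = {
        "DC-A": [{"remaining_bandwidth_gbps": 5.0, "delay_ms": 10, "loss_rate": 0.001},
                 {"remaining_bandwidth_gbps": 5.0, "delay_ms": 12, "loss_rate": 0.002}],
        "DC-B": [{"remaining_bandwidth_gbps": 2.0, "delay_ms": 40, "loss_rate": 0.01}],
    }
    print(pick_data_center(candidates))   # DC-A
```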
- According to the method for deploying a computing node provided by this embodiment of the present invention, the controller can, based on the link information between the data centers managed by the processing node, the link information between the data centers managed by the processing node and the data centers not managed by the processing node, the traffic information between the computing nodes related to the computing node to be deployed, and so on, determine a recommendation of a preferred deployment location for a new computing node, or give a suggestion of a redeployment location for a deployed computing node in the service system. By deploying computing nodes with a large amount of mutual network communication in the same data center, the communication traffic between data centers is transformed into traffic within a data center, thereby improving the communication quality between computing nodes and reducing the communication traffic between data centers.
- FIG. 3 is a flowchart of Embodiment 2 of a method for deploying a computing node according to the present invention.
- This embodiment describes the embodiment of the present invention in detail by using the controller as the execution subject. Specifically, the embodiment includes the following steps:
- The controller receives a deployment information request message sent by the processing node, where the deployment information request message carries description information of the computing node to be deployed, and acquires link information and/or traffic information, where the link information includes link information between data centers managed by the processing node, and/or link information between each data center managed by the processing node and each data center not managed by the processing node;
- the traffic information is traffic information between the computing node to be deployed and a computing node related to the computing node to be deployed, where the computing node related to the computing node to be deployed is a computing node having communication requirements with the computing node to be deployed.
- specifically, the controller receives the deployment information request message sent by the processing node, and acquires the link information between the data centers managed by the processing node and/or the link information between each data center managed by the processing node and each data center not managed by the processing node; the traffic information between the computing node to be deployed and the computing nodes related to it may be, for example, a traffic list or a traffic matrix of the deployed computing nodes that need to be adjusted.
- the controller may obtain the relevant link information or traffic information immediately after receiving the deployment information request message sent by the processing node; alternatively, the controller may obtain the traffic information of the computing node to be deployed only when a traffic request is carried in the deployment information request message. To meet such requirements, a specific format of the deployment information request message may be designed, and the present invention is not limited thereto.
- unlike step 102 in Embodiment 1, the controller does not determine the deployment plan before responding; instead, a response message carrying the acquired link information and/or traffic information is sent directly to the processing node, and the processing node determines the deployment plan according to this information.
- the processing node may determine the deployment solution according to at least one of the link information, the traffic information, or the deployment requirement information determined according to the requirement.
- According to the method for deploying a computing node provided by this embodiment of the present invention, the controller obtains the link information, the traffic information between the computing nodes related to the computing node to be deployed, and so on, and sends the information to the processing node, so that the processing node can give suggestions of redeployment locations for the deployed computing nodes in the service system. By deploying computing nodes with a large amount of mutual network traffic in the same data center, the traffic between data centers is transformed into traffic within a data center, thereby improving the communication quality and reducing the communication traffic between data centers.
- FIG. 4 is a flowchart of Embodiment 3 of a method for deploying a computing node according to the present invention.
- This embodiment is applicable to a scenario in which a deployed computing node in a service system does not meet the service requirement and needs to deploy a new computing node, or a scenario in which the deployed computing node needs to adjust the deployment location in the service system.
- This embodiment describes the embodiment of the present invention in detail by using the processing node as the execution subject. Specifically, the embodiment includes the following steps:
- the processing node sends a deployment suggestion request message to the controller, where the deployment suggestion request message carries description information of the computing node to be deployed, so that the controller determines a deployment plan according to the link information and/or the traffic information, where the link information includes link information between data centers managed by the processing node, and/or link information between each data center managed by the processing node and each data center not managed by the processing node;
- the traffic information is traffic information between the computing node to be deployed and the computing node related to the computing node to be deployed, where the computing node related to the computing node to be deployed is a computing node that has communication requirements with the computing node to be deployed.
- after receiving the deployment suggestion request message sent by the processing node and determining the deployment plan, the controller sends a deployment suggestion response message including the deployment plan to the processing node, and the processing node receives the deployment suggestion response message.
- the processing node may choose to deploy the computing node to be deployed according to the deployment plan included in the deployment suggestion response message by interacting with the servers of the data centers under its jurisdiction, for example, creating and starting a new computing node, or deleting, freezing, and migrating a deployed computing node; alternatively, the processing node may choose not to adopt the deployment plan returned in the deployment suggestion response message.
- According to the computing node deployment method provided by this embodiment of the present invention, for a new computing node, the processing node creates the computing node according to the recommendation of a preferred deployment location given by the controller; for a deployed computing node in the service system, the processing node adjusts its position according to the redeployment suggestion given by the controller, for example, by freezing one or more deployed computing nodes and recreating them in other data centers. In this way, computing nodes with a large amount of mutual network communication are deployed in the same data center, or in several data centers with better link quality, and the communication traffic between data centers is transformed into traffic within a data center, thereby improving the communication quality between computing nodes and reducing the communication traffic between data centers.
- FIG. 5 is a flowchart of Embodiment 4 of a method for deploying a computing node according to the present invention.
- This embodiment describes the embodiment of the present invention in detail by using a processing node as an execution subject. Specifically, the embodiment includes the following steps:
- the processing node sends a deployment information request message to the controller, where the deployment information request message carries description information of the computing node to be deployed, so that the controller acquires link information and/or traffic information;
- the link information includes link information between data centers managed by the processing node, and/or link information between each data center managed by the processing node and each data center not managed by the processing node; the traffic information is traffic information between the computing node to be deployed and the computing node related to the computing node to be deployed,
- where the computing node related to the computing node to be deployed is a computing node having communication requirements with the computing node to be deployed.
- the controller may obtain the relevant link information or traffic information immediately after receiving the deployment information request message sent by the processing node; alternatively, the controller may obtain the traffic information of the computing node to be deployed only when a traffic request is carried in the deployment information request message. To meet such requirements, a specific format of the deployment information request message may be designed, and the present invention is not limited thereto.
- the processing node receives a deployment information response message sent by the controller that includes link information and/or traffic information.
- the processing node determines the deployment plan based on the link information and/or traffic information. Specifically, the processing node may determine the deployment solution according to at least one of link information, traffic information, or deployment requirement information determined according to requirements.
- According to the method for deploying a computing node provided by this embodiment of the present invention, the processing node, based on the link information between the data centers related to the data center where the computing node is to be deployed, the traffic information between the computing nodes related to the computing node to be deployed, and so on, determines a redeployment location for the deployed computing nodes in the service system and adjusts their positions, for example, by freezing one or more deployed computing nodes and recreating them in other data centers. In this way, computing nodes with a large amount of mutual network communication are deployed in the same data center, or in several data centers with better link quality, and the communication traffic between data centers is converted into traffic within a data center, thereby improving the communication quality between computing nodes and reducing the communication traffic between data centers.
- Embodiments 1, 2, 3, and 4 describe the present invention from the perspectives of the controller and the processing node as the execution entity. In Embodiments 1 and 3, the controller determines the deployment plan of the computing node to be deployed; in Embodiments 2 and 4, the processing node determines the deployment plan of the computing node to be deployed after receiving the link information and traffic information related to the computing node to be deployed that is obtained and sent by the controller.
- In the following, the invention is explained in detail by means of specific examples.
- FIG. 6 is a schematic diagram of a second service system architecture applicable to a method for deploying a computing node according to the present invention.
- in the service system, the processing node manages three data centers, DC-A, DC-B, and DC-C, as an example; computing nodes (not shown) can be deployed or are already deployed on the servers of each data center. In addition, the processing node and the controller are not shown in the figure.
- the deployment suggestion request message sent by the processing node to the controller carries the description information of the computing node to be deployed.
- the description message indicates that two new computing nodes need to be created.
- Table 3 is a link information table obtained by collecting link information between the data centers DC-A, DC-B, and DC-C in FIG. 6.
- after receiving the deployment suggestion request message sent by the processing node, which indicates that two new computing nodes need to be created, the controller proceeds based on the link information stored in the link information database, for example, the link information table shown in Table 3.
- the data centers are traversed to determine the data center with the largest sum of the remaining bandwidth of its egress links and the remaining bandwidth of its ingress links, and the determined data center is used as the data center for deploying the new computing nodes.
- Specifically, please refer to Table 3.
- the remaining bandwidth of the egress links of DC-A is 10G (L-AB is 5G, L-AC is 5G), the remaining bandwidth of its ingress links is 10G (L-BA is 5G, L-CA is 5G), and its total remaining bandwidth is 20G; the remaining bandwidth of the egress links of DC-B is 7G (L-BA is 5G, L-BC is 2G), the remaining bandwidth of its ingress links is 5.2G (L-AB is 5G, L-CB is 200M), and its total remaining bandwidth is 12.2G; the remaining bandwidth of the egress links of DC-C is 5.2G (L-CA is 5G, L-CB is 200M), the remaining bandwidth of its ingress links is 7G (L-AC is 5G, L-BC is 2G), and its total remaining bandwidth is 12.2G.
- it can be seen that DC-A has the largest total remaining bandwidth.
- deploying in DC-A can therefore better meet the cross-data-center communication requirements of the computing nodes and improve communication quality, so the controller proposes to deploy the two computing nodes in DC-A, that is, it determines a deployment plan in which both computing nodes are deployed in DC-A.
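- A minimal sketch of this traversal, using the remaining-bandwidth figures quoted above (the data structure itself is an illustrative assumption):

```python
# Illustrative traversal: pick the data center with the largest sum of remaining
# egress and ingress bandwidth (numbers taken from the example above, in Gbit/s).
remaining_bw = {  # (source DC, destination DC) -> remaining bandwidth
    ("DC-A", "DC-B"): 5.0, ("DC-A", "DC-C"): 5.0,
    ("DC-B", "DC-A"): 5.0, ("DC-B", "DC-C"): 2.0,
    ("DC-C", "DC-A"): 5.0, ("DC-C", "DC-B"): 0.2,
}

def total_remaining(dc):
    egress = sum(bw for (src, _), bw in remaining_bw.items() if src == dc)
    ingress = sum(bw for (_, dst), bw in remaining_bw.items() if dst == dc)
    return egress + ingress

dcs = {src for src, _ in remaining_bw} | {dst for _, dst in remaining_bw}
best = max(dcs, key=total_remaining)
print(best, total_remaining(best))   # DC-A, 20.0
```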
- in another example, the deployment suggestion request message sent by the processing node to the controller not only carries the description information of the computing nodes to be deployed, but also carries deployment requirement information constraining the relative locations of the computing nodes to be deployed; for example, the description information indicates that two new computing nodes need to be created, and the deployment requirement information indicates that the two new computing nodes must be in different data centers.
- Table 4 is a second link information table obtained by collecting link information between the data centers DC-A, DC-B, and DC-C in FIG. 6.
- after receiving the deployment suggestion request message sent by the processing node, which indicates that two new computing nodes need to be created and that the two computing nodes must be in different data centers, the controller, based on the link information stored in the link information database at this time, for example, the link information table shown in Table 4, traverses the table to determine the two data centers with the largest remaining bandwidth of the link between them, and uses these two data centers as the data centers for deploying the new computing nodes.
- specifically, the remaining bandwidth of the link between DC-A and DC-B is 10G (L-AB is 5G, L-BA is 5G); the remaining bandwidth of the link between DC-A and DC-C is 5.2G (L-AC is 5G, L-CA is 200M); and the remaining bandwidth of the link between DC-B and DC-C is 2.2G (L-BC is 2G, L-CB is 200M).
- the controller proposes to deploy one compute node in each of DC-A and DC-B, that is, to determine the deployment scheme of deploying one compute node in each of DC-A and DC-B.
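- A short sketch of this pair selection, using the Table 4 figures quoted above (the data structure is an illustrative assumption):

```python
# Illustrative sketch: pick the pair of data centers with the largest remaining
# bandwidth on the link between them (both directions summed, values from Table 4, Gbit/s).
pair_remaining_gbps = {
    ("DC-A", "DC-B"): 5.0 + 5.0,    # L-AB + L-BA
    ("DC-A", "DC-C"): 5.0 + 0.2,    # L-AC + L-CA
    ("DC-B", "DC-C"): 2.0 + 0.2,    # L-BC + L-CB
}
best_pair = max(pair_remaining_gbps, key=pair_remaining_gbps.get)
print(best_pair)   # ('DC-A', 'DC-B')
```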
- the deployment requirement information may also indicate the communication quality requirement between the new computing nodes.
- for example, the deployment suggestion request message carries description information and deployment requirement information of the computing nodes to be deployed: the description information indicates that two new computing nodes need to be created, and the deployment requirement information indicates that the two new computing nodes must be in different data centers and that the jitter of communication between them must be less than 30 ms and greater than 15 ms.
- Table 5 is a third link information table obtained by collecting link information between the data centers DC-A, DC-B, and DC-C in FIG. 6.
- after the controller receives the deployment suggestion request message sent by the processing node, which indicates that two new computing nodes need to be created, that the two computing nodes must be in different data centers, and that the jitter of communication between the two new computing nodes must be less than 30 ms and greater than 15 ms, the controller traverses the table to determine two data centers whose inter-data-center communication jitter meets the requirement, and uses these two data centers as the data centers for deploying the new computing nodes.
- specifically, please refer to Table 5: the jitter of communication between DC-A and DC-B is 20 ms; the jitter of communication between DC-A and DC-C is 10 ms; and the jitter of communication between DC-B and DC-C is 40 ms. As a result, only the jitter of communication between DC-A and DC-B satisfies the condition in the deployment requirement information. Therefore, the controller proposes to deploy one computing node in each of DC-A and DC-B, that is, it determines a deployment plan in which one computing node is deployed in each of DC-A and DC-B.
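- A sketch of this jitter filter, using the Table 5 values quoted above (the data structure is an illustrative assumption):

```python
# Illustrative sketch: select data-center pairs whose inter-DC jitter satisfies the
# deployment requirement (here: greater than 15 ms and less than 30 ms).
jitter_ms = {           # example values quoted from Table 5 in the text
    ("DC-A", "DC-B"): 20,
    ("DC-A", "DC-C"): 10,
    ("DC-B", "DC-C"): 40,
}

def pairs_meeting_jitter(low_ms, high_ms):
    return [pair for pair, j in jitter_ms.items() if low_ms < j < high_ms]

print(pairs_meeting_jitter(15, 30))   # [('DC-A', 'DC-B')]
```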
- Based on the above embodiment, optionally, the deployment requirement information may also indicate communication quality requirements between the new compute nodes, and between a new compute node and an already-deployed compute node.
- For example, the deployment suggestion request message carries the description information and deployment requirement information of the compute nodes to be deployed. The description information indicates that two new compute nodes, VM1 and VM2, need to be created; the deployment requirement information indicates that VM1 and VM2 must be located in different data centers, that VM1 and the deployed compute node VMX must be located in different data centers, and that the jitter of communication between VM1 and VMX must be less than 15 ms.
- VMX was previously deployed in DC-A.
- Table 6 is the fourth link information table obtained by collecting link information between the data centers DC-A, DC-B, and DC-C in Figure 6.
- After the controller receives the deployment suggestion request message sent by the processing node, which indicates that two new compute nodes VM1 and VM2 need to be created, and whose deployment requirement information indicates that VM1 and VM2 must be in different data centers, that VM1 and the deployed compute node VMX must be in different data centers, and that the jitter of communication between VM1 and VMX must be less than 15 ms, the controller consults the link information currently stored in the link information database, for example the link information table shown in Table 6, and traverses the table to determine data centers whose inter-data-center communication jitter meets the requirement; these serve as the data centers for deploying the new compute nodes.
- Specifically, referring to Table 6, the jitter of communication between DC-A and DC-B is 20 ms, the jitter between DC-A and DC-C is 10 ms, and the jitter between DC-B and DC-C is 40 ms. It follows that only the jitter of communication between DC-A and DC-C satisfies the condition in the deployment requirement information, and VMX was previously deployed in DC-A. Therefore, the controller recommends deploying compute node VM1 in DC-C and compute node VM2 in DC-A or DC-B, that is, it determines a deployment scheme in which VM1 is deployed in DC-C and VM2 is deployed in DC-A or DC-B.
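The placement with a constraint relative to an already-deployed node can be sketched in the same style. This is illustrative only: the helper name and data layout are assumptions, the jitter values are the ones quoted above for Table 6, and VMX is taken to sit in DC-A as stated in the text.

```python
# Jitter between data-center pairs in ms (values quoted above for Table 6) and the
# location of the already-deployed node VMX.
jitter_ms = {
    frozenset({"DC-A", "DC-B"}): 20,
    frozenset({"DC-A", "DC-C"}): 10,
    frozenset({"DC-B", "DC-C"}): 40,
}
vmx_dc = "DC-A"
all_dcs = {"DC-A", "DC-B", "DC-C"}

def place_vm1_vm2(jitter, vmx_dc, dcs, max_jitter_ms=15):
    """Pick a data center for VM1 (different DC than VMX, jitter to VMX below the
    threshold) and list the data centers still allowed for VM2 (different DC than VM1)."""
    candidates = [
        dc for dc in dcs - {vmx_dc}
        if jitter[frozenset({dc, vmx_dc})] < max_jitter_ms
    ]
    if not candidates:
        return None
    vm1_dc = candidates[0]            # DC-C in the example above
    vm2_dcs = sorted(dcs - {vm1_dc})  # VM2 may go to DC-A or DC-B
    return vm1_dc, vm2_dcs

print(place_vm1_vm2(jitter_ms, vmx_dc, all_dcs))   # ('DC-C', ['DC-A', 'DC-B'])
```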
- Based on the above embodiment, optionally, the compute nodes to be deployed are compute nodes already deployed in the service system, for example at least one compute node belonging to a certain subnet. The deployment suggestion request message sent by the processing node to the controller carries the description information of the compute nodes to be deployed.
- In addition to the number of compute nodes to be deployed, the description information may also include a subnet identifier; the deployment requirement information may indicate that the communication traffic across data centers of the compute nodes of the subnet should be the lowest.
- For example, the compute nodes are virtual machines and the subnet identifier is 192.168.10.X/24. Four compute nodes in total, VM-1, VM-2, VM-3, and VM-4 (not shown), belong to this subnet; currently VM-1 and VM-2 are deployed in DC-C and VM-3 and VM-4 in DC-A.
- Table 7A is the first communication traffic table obtained by collecting communication traffic between VM-1, VM-2, VM-3, and VM-4 in Figure 6. For simplicity, only the communication traffic between the compute nodes is listed here, in MB; in other feasible implementations, the communication traffic table may also include the average bandwidth, burst duration, burst bandwidth, and so on between the compute nodes.
- After the controller receives the deployment suggestion request message sent by the processing node, which indicates that the total communication traffic across data centers of the virtual machines under the subnet identifier 192.168.10.X/24 should be the lowest and that two virtual machines should be deployed in each data center, the controller partitions the deployment locations of the virtual machines under the subnet. The partitioning requirement is to divide the four virtual machines between the data centers DC-C and DC-A belonging to subnet 192.168.10.X/24, with two virtual machines assigned to each data center. The possible partition results are shown in Table 7B.
- Table 7B is a schematic table of the partition results derived from Table 7A; each row lists a sequence number, the two virtual machines assigned to DC-C, the two assigned to DC-A, and the resulting total inter-data-center communication traffic: 1) VM-1, VM-2 | VM-3, VM-4 | 240 MB; 2) VM-1, VM-3 | VM-2, VM-4 | 240 MB; 3) VM-1, VM-4 | VM-2, VM-3 | 160 MB; 4) VM-3, VM-4 | VM-1, VM-2 | 240 MB; 5) VM-2, VM-4 | VM-1, VM-3 | 240 MB; 6) VM-2, VM-3 | VM-1, VM-4 | 160 MB.
- Taking the row with sequence number 1 in Table 7B as an example: this row indicates that the controller assigns VM-1 and VM-2 to DC-C. In this case VM-1 and VM-2 are located in the same data center, so the traffic from VM-1 to VM-2 and from VM-2 to VM-1 is internal traffic within DC-C and can be excluded from the communication traffic between the data centers, that is, between DC-C and DC-A, as shown by the shaded area in the upper-left corner of Table 7A. Similarly, VM-3 and VM-4 are assigned to DC-A, so the traffic from VM-3 to VM-4 and from VM-4 to VM-3 is internal traffic within DC-A and can likewise be excluded, as shown by the shaded area in the lower-right corner of Table 7A. Therefore, if VM-1, VM-2, VM-3, and VM-4 are deployed according to the partition with sequence number 1, then, excluding the intra-data-center traffic of the upper-left and lower-right corners of Table 7A, the total communication traffic between DC-A and DC-C is 240 MB.
- Similarly, the total communication traffic between DC-A and DC-C under the other partitions, with sequence numbers 2 to 6, can be obtained, as shown in Table 7B. The controller traverses Table 7B and finds that when VM-2 and VM-3 are in the same data center, as in the rows with sequence number 3 or 6, the total communication traffic between DC-A and DC-C is the lowest, 160 MB, whereas when VM-2 and VM-3 are in different data centers the total communication traffic between DC-A and DC-C is 240 MB, which is relatively large. Therefore, the controller recommends redeploying VM-1, VM-2, VM-3, and VM-4: deploying VM-2 and VM-3 in DC-A and VM-1 and VM-4 in DC-C, or deploying VM-2 and VM-3 in DC-C and VM-1 and VM-4 in DC-A.
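The exhaustive search over Table 7B amounts to enumerating every way of placing two of the four VMs in DC-C and summing the flows that cross the data-center boundary. The sketch below is illustrative only: Table 7A is not reproduced in this text, so the traffic values are made-up placeholders, and the helper names are assumptions.

```python
from itertools import combinations

# Directional traffic between VMs in MB. Illustrative values only;
# Table 7A of the original text is not reproduced here.
traffic_mb = {
    ("VM-1", "VM-2"): 40, ("VM-2", "VM-1"): 40,
    ("VM-1", "VM-4"): 30, ("VM-4", "VM-1"): 30,
    ("VM-2", "VM-3"): 50, ("VM-3", "VM-2"): 50,
    ("VM-3", "VM-4"): 20, ("VM-4", "VM-3"): 20,
}
vms = ["VM-1", "VM-2", "VM-3", "VM-4"]

def cross_dc_traffic(group_a, group_b, traffic):
    """Traffic that actually crosses the data-center boundary: every flow whose
    endpoints end up in different groups (both directions are counted)."""
    return sum(mb for (src, dst), mb in traffic.items()
               if (src in group_a) != (dst in group_a))

def best_partition(vms, traffic):
    """Enumerate every way of placing two of the four VMs in DC-C and the other
    two in DC-A (the rows of Table 7B) and keep the partition with the lowest
    cross-data-center traffic."""
    best = None
    for dc_c in combinations(vms, 2):
        dc_a = tuple(v for v in vms if v not in dc_c)
        total = cross_dc_traffic(set(dc_c), set(dc_a), traffic)
        if best is None or total < best[0]:
            best = (total, dc_c, dc_a)
    return best

print(best_partition(vms, traffic_mb))
# (120, ('VM-1', 'VM-4'), ('VM-2', 'VM-3')) with these sample numbers
```

With the real values of Table 7A, the same enumeration reproduces the 160 MB and 240 MB sums listed in Table 7B.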
- Based on the above embodiment, optionally, the compute nodes to be deployed are compute nodes already deployed in the service system, for example at least one compute node belonging to a certain tenant. The deployment suggestion request message sent by the processing node to the controller carries the description information of the compute nodes to be deployed.
- In addition to the number of compute nodes to be deployed, the description information may also include a tenant identifier; the deployment requirement information may indicate that the communication traffic across data centers of the tenant's compute nodes should be the lowest and that two compute nodes should be deployed in each data center.
- For example, the compute nodes are virtual machines and the tenant identifier is CP-1234. Six compute nodes in total, VM-1, VM-2, VM-3, VM-4, VM-5, and VM-6 (not shown), belong to this tenant; currently VM-1 and VM-2 are deployed in DC-C, VM-3 and VM-4 in DC-A, and VM-5 and VM-6 in DC-B.
- Table 8A is the second communication traffic table obtained by collecting communication traffic between VM-1, VM-2, VM-3, VM-4, VM-5, and VM-6 in Figure 6. For simplicity, only the communication traffic between the compute nodes is listed here, in MB; in other feasible implementations, the communication traffic table may also include the average bandwidth, burst duration, burst bandwidth, and so on between the compute nodes.
- After the controller receives the deployment suggestion request message sent by the processing node, which indicates that the communication traffic across data centers of the tenant's virtual machines should be the lowest and that two virtual machines should be deployed in each data center, the controller partitions the deployment locations of the virtual machines under the tenant. The partitioning requirement is to divide the six virtual machines among the data centers DC-C, DC-A, and DC-B belonging to tenant CP-1234, with two virtual machines assigned to each data center. The possible partition results are shown in Table 8B, which is a schematic table of the partition results derived from Table 8A (for example, the rows with sequence numbers 11 and 14 each yield 780 MB of inter-data-center traffic).
- Taking the row with sequence number 3 in Table 8B as an example: this row indicates that the controller assigns VM-1 and VM-4 to DC-C. In this case VM-1 and VM-4 are located in the same data center, so the traffic from VM-1 to VM-4 and from VM-4 to VM-1 is internal traffic within DC-C and can be excluded from the communication traffic between the data centers. Similarly, VM-2 and VM-3 are assigned to DC-A, so the traffic between VM-2 and VM-3 in both directions is internal traffic within DC-A and can likewise be excluded; and VM-5 and VM-6 are assigned to DC-B, so the traffic between VM-5 and VM-6 in both directions is internal traffic within DC-B and can likewise be excluded. The portions that are not counted toward the inter-data-center communication traffic under this partition are shown in the shaded parts of Table 8A. Therefore, if VM-1, VM-2, VM-3, VM-4, VM-5, and VM-6 are deployed according to the partition with sequence number 3, then, excluding the shaded parts of Table 8A, the total communication traffic between DC-A, DC-B, and DC-C is 480 MB.
- Similarly, the total inter-data-center communication traffic under the partitions shown by the other sequence numbers can be obtained, as shown in Table 8B. The controller traverses Table 8B and finds that when VM-1 and VM-4 are in the same data center, VM-2 and VM-3 are in the same data center, and VM-5 and VM-6 are in the same data center, the total communication traffic between the data centers is 480 MB, which is relatively small. Therefore, the controller recommends redeploying VM-1, VM-2, VM-3, VM-4, VM-5, and VM-6 so that VM-1 and VM-4 are deployed in the same data center, VM-2 and VM-3 are deployed in the same data center, and VM-5 and VM-6 are deployed in the same data center, that is, according to the partition shown by sequence number 3 or 10 in Table 8B.
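The same idea extends to three data centers with two VMs each, which is what Table 8B enumerates (there are 15 such pairings for six VMs). As before, this is only a sketch: the traffic values are illustrative placeholders, not the contents of Table 8A, and the helper names are assumptions.

```python
def pairings(vms):
    """Yield every way of splitting the VMs into unordered groups of two,
    which is how Table 8B enumerates the candidate placements."""
    if not vms:
        yield []
        return
    first, rest = vms[0], vms[1:]
    for partner in rest:
        remaining = [v for v in rest if v != partner]
        for tail in pairings(remaining):
            yield [(first, partner)] + tail

def cross_dc_traffic(groups, traffic):
    """Sum all flows whose two endpoints land in different groups (data centers)."""
    location = {vm: i for i, group in enumerate(groups) for vm in group}
    return sum(mb for (src, dst), mb in traffic.items()
               if location[src] != location[dst])

# Illustrative directional traffic in MB; the real values of Table 8A are not
# reproduced in this text.
traffic_mb = {("VM-1", "VM-4"): 90, ("VM-4", "VM-1"): 80,
              ("VM-2", "VM-3"): 70, ("VM-3", "VM-2"): 60,
              ("VM-5", "VM-6"): 50, ("VM-6", "VM-5"): 40,
              ("VM-1", "VM-5"): 30, ("VM-5", "VM-1"): 20}

vms = ["VM-1", "VM-2", "VM-3", "VM-4", "VM-5", "VM-6"]
best = min(pairings(vms), key=lambda groups: cross_dc_traffic(groups, traffic_mb))
print(best, cross_dc_traffic(best, traffic_mb))
# [('VM-1', 'VM-4'), ('VM-2', 'VM-3'), ('VM-5', 'VM-6')] 50 with these sample numbers
```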
- In the seventh embodiment of the compute node deployment method of the present invention, for compute nodes already deployed in the service system, the deployment suggestion request message sent by the processing node to the controller carries, in addition to the description information of the compute nodes to be deployed, traffic request information for the compute nodes to be deployed, requesting the traffic status of the deployed compute nodes.
- For example, taking virtual machines as the compute nodes and referring to Figure 6, there are nine virtual machines in the service system: VM-1, VM-2, and VM-3 are deployed in DC-A; VM-4, VM-5, and VM-6 are deployed in DC-B; and VM-7, VM-8, and VM-9 are deployed in DC-C. The individual virtual machines are not shown in the figure.
- Table 9A is the third communication traffic table obtained by collecting communication traffic between VM-1, VM-2, VM-3, VM-4, VM-5, VM-6, VM-7, VM-8, and VM-9 in Figure 4. For simplicity, only the communication traffic between the compute nodes is listed here, in MB; in other feasible implementations, the communication traffic table may also include the average bandwidth, burst duration, burst bandwidth, and so on between the compute nodes.
- If the traffic request information carried in the deployment suggestion request message received by the controller concerns VM-1, the controller traverses Table 9A, identifies the other virtual machines that communicate with VM-1, and extracts the corresponding traffic. The extracted communication traffic information is shown in Table 9B, which is a schematic table of the result of extracting the communication traffic information for VM-1 from Table 9A.
- Taking the column for VM-2 as an example: when VM-1 communicates with VM-2, the communication traffic from VM-1 to VM-2, that is, the outgoing traffic, is 100 MB, and the communication traffic from VM-2 to VM-1, that is, the incoming traffic, is 90 MB.
- After extracting the communication traffic information of VM-1, the controller sends it to the processing node, so that the processing node can itself determine a deployment scheme for VM-1, VM-2, VM-3, VM-4, VM-5, VM-6, VM-7, VM-8, and VM-9 according to this traffic information.
- Specifically, the controller may carry the identifier of VM-1 and the content shown in Table 9B in the deployment suggestion response message sent to the processing node, or it may first calculate the total communication traffic with each communication peer and then send the totals to the processing node. Taking the communication peer VM-2 as an example, the outgoing traffic is 100 MB and the incoming traffic is 90 MB, so the total communication traffic is 190 MB; in that case the deployment suggestion response message sent by the controller to the processing node carries the identifier of VM-1 and indicates that the total communication traffic between VM-1 and VM-2 is 190 MB.
- In an actual scenario, the controller may choose to send the processing node the traffic information between VM-1 and all of its communication peers, or only the traffic information for some of the communication peers.
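The per-peer extraction that produces Table 9B can be sketched as follows. Only the VM-1/VM-2 figures (100 MB out, 90 MB in) come from the text; the remaining traffic values and the helper name `traffic_of` are assumptions made for the example.

```python
# Directional traffic in MB between deployed VMs; illustrative values, with the
# VM-1/VM-2 figures taken from the example in the text.
traffic_mb = {("VM-1", "VM-2"): 100, ("VM-2", "VM-1"): 90,
              ("VM-1", "VM-5"): 40,  ("VM-5", "VM-1"): 10,
              ("VM-3", "VM-7"): 25,  ("VM-7", "VM-3"): 35}

def traffic_of(vm, traffic):
    """Build the per-peer view of Table 9B: outgoing, incoming and total traffic
    for every VM that exchanges traffic with the requested VM."""
    peers = {}
    for (src, dst), mb in traffic.items():
        if src == vm:
            peers.setdefault(dst, {"out": 0, "in": 0})["out"] += mb
        elif dst == vm:
            peers.setdefault(src, {"out": 0, "in": 0})["in"] += mb
    for stats in peers.values():
        stats["total"] = stats["out"] + stats["in"]
    return peers

# The controller could put either the full per-peer table or only the totals
# into the deployment suggestion response message.
print(traffic_of("VM-1", traffic_mb))
# {'VM-2': {'out': 100, 'in': 90, 'total': 190}, 'VM-5': {'out': 40, 'in': 10, 'total': 50}}
```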
- In each of the foregoing embodiments, the processing node and the controller communicate point-to-point: the processing node directly sends the deployment suggestion request message to the controller and receives the deployment suggestion response message sent by the controller, while the controller directly receives the deployment suggestion request message sent by the processing node and, after determining the deployment scheme, directly sends the deployment suggestion response message to the processing node.
- However, in an actual scenario, the processing node and the controller may also communicate indirectly.
- For example, in the eighth embodiment of the compute node deployment method of the present invention, the controller may receive, through a proxy, the deployment suggestion request message sent by the processing node and, after determining the deployment scheme, send the deployment suggestion response message to the processing node through the proxy; similarly, the processing node may send the deployment suggestion request message to the controller through the proxy and receive, through the proxy, the deployment suggestion response message sent by the controller. Because the proxy can convert message formats and contents and decouple the processing node from the controller, this approach has fewer limitations and a wider range of use. Therefore, in an actual application scenario, the processing node may communicate with the controller directly or, through a proxy, indirectly, as required.
- In addition, in each of the foregoing embodiments the link information consists of link information between the data centers managed by the processing node; however, the link information may also be link information between the data centers managed by the processing node and data centers not managed by the processing node. For example, referring to Figure 1, if a new compute node needs to be deployed in DC-B or DC-C, and the new compute node to be deployed communicates with a compute node in DC-D, then the link information between DC-B and DC-C, between DC-B and DC-D, and between DC-C and DC-D needs to be considered.
- FIG. 7 is a schematic structural diagram of Embodiment 1 of a controller according to the present invention.
- the controller 100 provided in this embodiment may implement various steps of the method applied to the controller in any embodiment of the present invention, and the specific implementation process is not described herein again.
- the controller 100 provided in this embodiment may include:
- the receiving module 11 is configured to receive a deployment suggestion request message sent by the processing node, where the deployment suggestion request message carries description information of the compute nodes to be deployed;
- the determining module 12 is configured to determine a deployment scheme according to link information and/or traffic information, where the link information includes link information between the data centers managed by the processing node, and/or link information between the data centers managed by the processing node and data centers not managed by the processing node; the traffic information is traffic information between the compute nodes to be deployed and the compute nodes related to the compute nodes to be deployed, where a compute node related to a compute node to be deployed is a compute node that has communication requirements with the compute node to be deployed;
- the sending module 13 is configured to send a deployment suggestion response message containing the deployment scheme to the processing node.
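Purely as an illustration of how the three modules could fit together in software, the following sketch outlines a controller skeleton. The class and message shapes are assumptions, not the patent's definitions; the determining logic is left as a stub where the selection rules of the embodiments above would go.

```python
from dataclasses import dataclass, field

@dataclass
class DeploymentSuggestionRequest:
    """Deployment suggestion request message (shape assumed for illustration)."""
    description: dict                                  # e.g. node count, subnet or tenant identifier
    requirements: dict = field(default_factory=dict)   # optional deployment requirement info

@dataclass
class Controller:
    link_info: dict      # inter-DC link state: bandwidth, delay, jitter, packet loss, ...
    traffic_info: dict   # traffic between deployed compute nodes

    def on_request(self, request: DeploymentSuggestionRequest) -> dict:
        """Receiving module: accept the deployment suggestion request message."""
        plan = self.determine_plan(request)
        return self.build_response(plan)

    def determine_plan(self, request: DeploymentSuggestionRequest) -> dict:
        """Determining module: pick a deployment scheme from link and/or traffic
        information. A real implementation would apply the selection rules of the
        embodiments above (remaining bandwidth, jitter bounds, traffic partitioning)."""
        return {"placement": {}, "basis": ["link_info", "traffic_info"]}

    def build_response(self, plan: dict) -> dict:
        """Sending module: wrap the scheme in a deployment suggestion response message."""
        return {"type": "deployment_suggestion_response", "plan": plan}
```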
- With the controller provided by this embodiment of the present invention, based on the link information between the data centers in which the compute nodes to be deployed in the service system can be deployed, or between the data centers in which the compute nodes to be deployed can be deployed and the data centers in which they cannot be deployed, and on the traffic information between the compute nodes related to the compute nodes to be deployed, a recommendation for a preferred deployment location can be determined for new compute nodes, and a redeployment location can be suggested for compute nodes already deployed in the service system. By deploying compute nodes with a large amount of network communication in the same data center, communication traffic between data centers is transformed into traffic within a data center, thereby improving the communication quality between compute nodes and reducing the communication traffic between data centers.
- the deployment suggestion request message received by the receiving module 11 further carries the deployment requirement information of the computing node to be deployed;
- the determining module 12 is further configured to determine a deployment plan that satisfies the deployment requirement information based on the link information and/or the traffic information.
- the deployment requirement information includes one or a combination of: relative location information between the compute nodes to be deployed, relative location information between the compute nodes to be deployed and deployed compute nodes, communication quality requirement information between the compute nodes to be deployed, communication quality requirement information between the compute nodes to be deployed and deployed compute nodes, and total cross-data-center communication traffic requirement information for the compute nodes to be deployed.
- the description information includes the identification information of the computing node to be deployed, the quantity information of the computing node to be deployed, or the tenant identification information to which the computing node to be deployed belongs.
- the receiving module 11 is further configured to receive, by using a proxy, a deployment suggestion request message sent by the processing node;
- the sending module 13 is further configured to send, by the proxy, a deployment suggestion response message including a deployment scenario to the processing node.
- the compute nodes to be deployed include new compute nodes or deployed compute nodes.
- FIG. 8 is a schematic structural diagram of Embodiment 2 of a controller according to the present invention.
- the controller 200 provided in this embodiment can implement the steps of the method applied to the controller provided by any embodiment of the present invention; the specific implementation process is not described herein again.
- the controller 200 provided in this embodiment may include:
- the receiving module 21 is configured to receive a deployment information request message sent by the processing node, where the deployment information request message carries description information of the computing node to be deployed;
- the obtaining module 22 is configured to obtain link information and/or traffic information, where the link information includes link information between the data centers managed by the processing node, and/or link information between the data centers managed by the processing node and data centers not managed by the processing node; the traffic information is traffic information between the compute nodes to be deployed and the compute nodes related to the compute nodes to be deployed, where a compute node related to a compute node to be deployed is a compute node that has communication requirements with the compute node to be deployed;
- the sending module 23 is configured to send, to the processing node, a deployment information response message that includes link information and/or traffic information.
- The controller provided by this embodiment of the present invention obtains the link information, the traffic information between the compute nodes related to the compute nodes to be deployed, and so on, and sends this information to the processing node, so that the processing node can suggest redeployment locations for compute nodes already deployed in the service system. By deploying compute nodes with a large amount of network communication in the same data center, communication traffic between data centers is transformed into traffic within a data center, thereby improving the communication quality between compute nodes and reducing the communication traffic between data centers.
- the description information includes the identification information of the computing node to be deployed, the quantity information of the computing node to be deployed, or the tenant identification information to which the computing node to be deployed belongs.
- the receiving module 21 is further configured to receive, by using a proxy, a deployment information request message sent by the processing node, where the deployment information request message carries description information of the computing node to be deployed;
- the sending module 23 is further configured to send, by the proxy, a deployment information response message including link information and/or traffic information to the processing node, so that the processing node determines the deployment scheme according to the link information and/or the traffic information.
- the compute nodes to be deployed include new compute nodes or deployed compute nodes.
- FIG. 9 is a schematic structural diagram of Embodiment 1 of a processing node according to the present invention.
- the processing node 300 provided in this embodiment may implement various steps of the method for processing a node provided by any embodiment of the present invention, and the specific implementation process is not described herein again.
- the processing node 300 may include: a sending module 31, configured to send a deployment suggestion request message to the controller, where the deployment suggestion request message carries description information of the compute nodes to be deployed, so that the controller determines a deployment scheme according to link information and/or traffic information, where the link information includes link information between the data centers managed by the processing node, and/or link information between the data centers managed by the processing node and data centers not managed by the processing node; the traffic information is traffic information between the compute nodes to be deployed and the compute nodes related to the compute nodes to be deployed, where a compute node related to a compute node to be deployed is a compute node that has communication requirements with the compute node to be deployed;
- the receiving module 32 is configured to receive a deployment suggestion response message that is sent by the controller and includes a deployment scenario.
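A sketch of the processing-node side of this exchange is shown below. The patent does not fix a message encoding or endpoint, so the JSON-over-HTTP format, the URL, and the field names here are assumptions made only to illustrate the request/response flow.

```python
import json
from urllib import request

# Purely illustrative wire format and endpoint; both are assumptions for this sketch.
CONTROLLER_URL = "http://controller.example:8080/deployment-suggestion"

def request_deployment_suggestion(description, requirements=None):
    """Processing-node side: send a deployment suggestion request message carrying
    the description (and optional requirement) information, and return the
    controller's deployment suggestion response message."""
    body = json.dumps({
        "description": description,          # e.g. {"count": 2, "subnet": "192.168.10.X/24"}
        "requirements": requirements or {},   # e.g. {"different_dc": True, "max_jitter_ms": 15}
    }).encode("utf-8")
    req = request.Request(CONTROLLER_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# The processing node may then create, freeze or migrate compute nodes according
# to the plan contained in the response, or ignore the suggestion entirely.
```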
- With the processing node provided by this embodiment of the present invention, for a new compute node the processing node creates the compute node according to the preferred deployment location recommended by the controller; for compute nodes already deployed in the service system, it adjusts their locations according to the redeployment suggestions given by the controller, for example by freezing one or more of the deployed compute nodes and re-creating them in other data centers. In this way, compute nodes with a large amount of network communication are deployed in the same data center, or in several data centers with better link quality, and communication traffic between data centers is transformed into traffic within a data center, thereby improving the communication quality between compute nodes and reducing the communication traffic between data centers.
- the sending module 31 is further configured to send a deployment suggestion request message carrying the deployment requirement information of the computing node to be deployed to the controller, so that the controller determines, according to the link information and/or the traffic information, the deployment that meets the deployment requirement information. Program.
- the deployment requirement information includes one or a combination of: relative location information between the compute nodes to be deployed, relative location information between the compute nodes to be deployed and deployed compute nodes, communication quality requirement information between the compute nodes to be deployed, communication quality requirement information between the compute nodes to be deployed and deployed compute nodes, and total cross-data-center communication traffic requirement information for the compute nodes to be deployed.
- the description information includes the identification information of the computing node to be deployed, the quantity information of the computing node to be deployed, or the tenant identification information to which the computing node to be deployed belongs.
- the sending module 31 is further configured to send a deployment suggestion request message to the controller through the proxy;
- the receiving module 32 is further configured to receive, by the proxy, a deployment suggestion response message that is sent by the controller and includes a deployment solution.
- the compute nodes to be deployed include new compute nodes or deployed compute nodes.
- FIG. 10 is a schematic structural diagram of Embodiment 2 of a processing node according to the present invention.
- the processing node 400 provided in this embodiment may implement various steps of the method applied to the processing node provided by any embodiment of the present invention, and the specific implementation process is not described herein again.
- the processing node 400 provided in this embodiment may include: a sending module 41, configured to send a deployment information request message to the controller, where the deployment information request message carries description information of the compute nodes to be deployed, so that the controller obtains link information and/or traffic information, where the link information includes link information between the data centers managed by the processing node, and/or link information between the data centers managed by the processing node and data centers not managed by the processing node;
- the traffic information is traffic information between the compute nodes to be deployed and the compute nodes related to the compute nodes to be deployed, where a compute node related to a compute node to be deployed is a compute node that has communication requirements with the compute node to be deployed;
- the receiving module 42 is configured to receive a deployment information response message that is sent by the controller and includes link information and/or traffic information.
- the determining module 43 is configured to determine a deployment scenario based on link information and/or traffic information.
- With the processing node provided by this embodiment of the present invention, based on the link information between the data centers related to the data centers in which the compute nodes are to be deployed in the service system and the traffic information between the compute nodes related to the compute nodes to be deployed, the processing node determines redeployment locations for compute nodes already deployed in the service system and adjusts their locations, for example by freezing one or more of the deployed compute nodes and re-creating them in other data centers.
- In this way, compute nodes with a large amount of network communication are deployed in the same data center, or in several data centers with better link quality, and communication traffic between data centers is transformed into traffic within a data center, thereby improving the communication quality between compute nodes and reducing the communication traffic between data centers.
- the description information includes the identification information of the computing node to be deployed, the number of computing nodes to be deployed, or the tenant identification information to which the computing node to be deployed belongs.
- the sending module 41 is further configured to send a deployment information request message to the controller by using a proxy; the receiving module 42 is further configured to receive, by the proxy, a deployment information response message that is sent by the controller and includes link information and/or traffic information.
- FIG. 11 is a schematic structural diagram of Embodiment 3 of a controller according to the present invention.
- the controller 500 provided in this embodiment includes a processor 51 and a memory 52.
- the controller 500 can also include a transmitter 53, a receiver 54. Transmitter 53 and receiver 54 can be coupled to processor 51.
- the memory 52 stores execution instructions. When the controller 500 is running, the processor 51 communicates with the memory 52, and the processor 51 invokes the execution instructions in the memory 52 to execute the method embodiment shown in FIG. 2; the implementation principle and technical effect are similar and are not described here again.
- FIG. 12 is a schematic structural diagram of Embodiment 4 of a controller according to the present invention.
- the controller 600 provided in this embodiment includes a processor 61 and a memory 62.
- the controller 600 can also include a transmitter 63 and a receiver 64. Transmitter 63 and receiver 64 can be coupled to processor 61.
- the memory 62 stores execution instructions. When the controller 600 is running, the processor 61 communicates with the memory 62, and the processor 61 invokes the execution instructions in the memory 62 to execute the corresponding method embodiment applied to the controller; the implementation principle and technical effect are similar and are not described here again.
- FIG. 13 is a schematic structural diagram of Embodiment 3 of a processing node according to the present invention.
- the processing node 700 is provided with a processor 71 and a memory 72.
- Processing node 700 may also include a transmitter 73, a receiver 74. Transmitter 73 and receiver 74 can be coupled to processor 71.
- the memory 72 stores execution instructions. When the processing node 700 is running, the processor 71 communicates with the memory 72, and the processor 71 invokes the execution instructions in the memory 72 to execute the method embodiment shown in FIG. 4; the implementation principle and technical effect are similar and are not described here again.
- FIG. 14 is a schematic structural diagram of Embodiment 4 of a processing node according to the present invention.
- the processing node 800 provided in this embodiment includes a processor 81 and a memory 82.
- Processing node 800 can also include a transmitter 83, a receiver 84. Transmitter 83 and receiver 84 can be coupled to processor 81.
- the memory 82 stores execution instructions. When the processing node 800 is running, the processor 81 communicates with the memory 82, and the processor 81 invokes the execution instructions in the memory 82 to execute the method embodiment shown in FIG. 5; the implementation principle and technical effect are similar and are not described here again.
- An embodiment of the present invention further provides a service system, which may include the controller shown in FIG. 7 or FIG. 11 and the processing node shown in FIG. 9 or FIG. 13; or it may include the controller shown in FIG. 8 or FIG. 12 and the processing node shown in FIG. 10 or FIG. 14.
- the disclosed system, apparatus, and method may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- For example, the division into units is merely a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical, mechanical or otherwise.
- the units described as separate components may or may not be physically separate, and the components displayed as the units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solution of the embodiment.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
本发明实施例提供一种计算节点部署方法、处理节点、控制器及系统。该方法包括:接收处理节点发送的部署建议请求消息,根据链路信息和/或流量信息,确定部署方案;向处理节点发送包含部署方案的部署建议响应消息,以使得处理节点根据部署方案部署待部署计算节点。本发明实施例提供的计算节点部署方法,控制器根据业务系统中各个数据中心之间的链路信息、各计算节点间的流量信息等,对于新的计算节点,可以给出较佳的部署位置的部署方案;对于业务系统中已部署计算节点,可以给出重新部署位置的部署方案,使得处理节点可根据部署方案对该待部署计算节点进行部署,从而提高计算节点之间的通信质量、降低数据中心之间的通信流量。
Description
计算节点部署方法、 处理节点、 控制器及系统 技术领域
本发明实施例涉及通信领域, 尤其涉及一种计算节点部署方法、 处理节点、 控制器及系统。 背景技术
虚拟技术的出现为业务部署提供了更多的方式, 如云计算中大量使用虚拟 机( Virtual Machine, 以下简称 VM ), 当内容提供商( Content Provider, 以下简 称 CP) 提供的内容量比较大, 需要更多的计算资源时, 可以向数据中心 (Data Center, 以下简称 DC ) 申请部署更多的虚拟机; 当 CP的业务量下降时, 可以 减少虚拟机的数量。
现有技术中, 部署虚拟机的时候, 通过判断 DC中服务器的剩余资源, 如内 存、中央处理单元(Central Processing Unit, 以下简称 CPU)是否满足计算需求, 从而确定是否需要在该服务器上部署虚拟机。 例如, 若接收到虚拟机部署请求 为 CPU个数为两个, 内存为 1024M, 硬盘读写次数为 50次, 则从资源池中选 择一个服务器, 判断选中的服务器的剩余资源: CPU 的个数是否至少为两个, 内存是否至少为 1024M, 硬盘读写次数是否至少为 50次, 若其中一项不符合要 求, 则标记该服务器不符合本次选择的要求, 重新到资源池中选择新的服务器。
然而, 上述虚拟机部署方法, 当需要部署多个虚拟机的时候, 每次从资源 池中选择一个服务器, 判断该服务器的剩余资源是否满足需求进而部署虚拟机, 各个虚拟机可能部署在不同的数据中心的服务器上。 若虚拟机之间存在大量的 网络通信, 参与通信的虚拟机位于不同的数据中心, 则会导致数据中心之间的 网络流量增大、 数据拥塞, 通信质量差。 发明内容
本发明实施例提供一种计算节点部署方法、 处理节点、 控制器及系统, 控 制器根据链路信息、 流量信息等对待部署计算节点给出合理的部署方案, 使得 处理节点可根据部署方案对该待部署计算节点进行部署从而降低数据中心之间 的通信流量, 提高通信质量。
第一个方面, 本发明实施例提供一种计算节点部署方法, 包括:
接收处理节点发送的部署建议请求消息, 所述部署建议请求消息携带待部 署计算节点的描述信息;
根据链路信息和 /或流量信息, 确定部署方案, 其中, 所述链路信息包括所 述处理节点所管理的各数据中心之间的链路信息, 和 /或, 所述处理节点所管理 的各数据中心与不属于所述处理节点所管理的各数据中心之间的链路信息; 所 述流量信息为所述待部署计算节点与所述待部署计算节点相关的计算节点之间 的流量信息, 其中, 与所述待部署计算节点相关的计算节点为与所述待部署计 算节点有通信需求的计算节点;
向所述处理节点发送包含所述部署方案的部署建议响应消息。
在第一个方面的第一种可能的实现方式中, 所述部署建议请求消息中还携 带所述待部署计算节点的部署要求信息;
所述根据链路信息和 /或流量信息, 确定部署方案, 包括:
根据所述链路信息和 /或流量信息,确定满足所述部署要求信息的部署方案。 结合第一个方面的第一种可能的实现方式, 在第一个方面的第二种可能的 实现方式中, 所述部署要求信息包括:
所述待部署计算节点之间的相对位置信息、 所述待部署计算节点与已部署 计算节点之间的相对位置信息、 所述待部署计算节点之间的通信质量要求信息、 所述待部署计算节点与已部署计算节点之间的通信质量要求信息、 所述待部署 计算节点跨数据中心通信总流量要求信息中的一种信息或其组合。
结合第一个方面、 第一个方面的第一种或第二种可能的实现方式, 在第一 个方面的第三种可能的实现方式中, 所述描述信息包括待部署计算节点的标识 信息、 所述待部署计算节点的数量信息或所述待部署计算节点所属的租户标识
信息。
结合第一个方面、 第一个方面的第一种、 第二种或第三种可能的实现方式, 在第一个方面的第四种可能的实现方式中, 所述接收处理节点发送的部署建议
请求消息, 所述部署建议请求消息携带待部署计算节点的描述信息, 包括: 通过代理接收所述处理节点发送的部署建议请求消息;
所述向所述处理节点发送包含所述部署方案的部署建议响应消息, 以使得 所述处理节点根据所述部署方案部署所述待部署计算节点, 包括:
通过所述代理向所述处理节点发送包含所述部署方案的部署建议响应消 息, 以使得所述处理节点根据所述部署方案部署所述待部署计算节点。
结合第一个方面、 第一个方面的第一种至第四种可能的实现方式中的任一 中实现方式, 在第一个方面的第五种可能的实现方式中, 所述待部署计算节点 包括新增计算节点或已部署计算节点。
第二个方面, 本发明实施例提供一种计算节点部署方法, 包括:
接收处理节点发送的部署信息请求消息, 所述部署信息请求消息携带待部 署计算节点的描述信息;
获取链路信息和 /或流量信息, 其中, 所述链路信息包括所述处理节点所管 理的各数据中心之间的链路信息, 和 /或, 所述处理节点所管理的各数据中心与 不属于所述处理节点所管理的各数据中心之间的链路信息, 所述流量信息为所 述待部署计算节点与所述待部署计算节点相关的计算节点之间的流量信息, 其 中, 所述待部署计算节点相关的计算节点为与待部署计算节点有通信需求的计 算节点;
向所述处理节点发送包含所述链路信息和 /或流量信息的部署信息响应消 息。
在第二个方面的第一种可能的实现方式中, 所述描述信息包括待部署计算 节点的标识信息、 所述待部署计算节点的数量信息或所述待部署计算节点所属 的租户标识信息。
结合第二个方面或第二个方面的第一种可能的实现方式, 在第第二个方面 的第二种可能的实现方式中, 所述接收处理节点发送的部署信息请求消息, 所 述部署信息请求消息携带待部署计算节点的描述信息, 包括:
通过代理接收所述处理节点发送的部署信息请求消息, 所述部署信息请求 消息携带待部署计算节点的描述信息;
所述向所述处理节点发送包含所述链路信息和 /或流量信息的部署信息响应 消息, 以使所述处理节点根据所述链路信息和 /或流量信息确定部署方案,包括:
通过所述代理向所述处理节点发送包含所述链路信息和 /或流量信息的部署 信息响应消息, 以使所述处理节点根据所述链路信息和 /或流量信息确定部署方 案。
结合第二个方面、 第二个方面的第一种或第二种可能的实现方式, 在第二 个方面的第三种可能的实现方式中, 所述待部署计算节点包括新增计算节点或 已部署计算节点。
第三个方面, 本发明实施例提供一种计算节点部署方法, 包括:
向控制器发送部署建议请求消息, 所述部署建议请求消息携带待部署计算 节点的描述信息, 以使得所述控制器根据链路信息和 /或流量信息, 确定部署方 案, 其中, 所述链路信息包括所述处理节点所管理的各数据中心之间的链路信 息, 和 /或, 所述处理节点所管理的各数据中心与不属于所述处理节点所管理的 各数据中心之间的链路信息; 所述流量信息为所述待部署计算节点与所述待部 署计算节点相关的计算节点之间的流量信息, 其中, 与所述待部署计算节点相 关的计算节点为与所述待部署计算节点有通信需求的计算节点;
接收所述控制器发送的包含所述部署方案的部署建议响应消息。
在第三个方面的第一种可能的实现方式中, 向所述控制器发送的所述部署 建议请求消息中还携带所述待部署计算节点的部署要求信息, 以使得所述控制 器根据所述链路信息和 /或流量信息, 确定满足所述部署要求信息的部署方案。
结合第三个方面的第一种可能的实现方式, 在第三个方面的第二种可能的 实现方式中, 所述部署要求信息包括: 所述待部署计算节点之间的相对位置信 息、 所述待部署计算节点与已部署计算节点之间的相对位置信息、 所述待部署 计算节点之间的通信质量要求信息、 所述待部署计算节点与已部署计算节点之 间的通信质量要求信息、 所述待部署计算节点跨数据中心通信总流量要求信息 中的一种信息或其组合。
结合第三个方面、 第三个方面的第一种或第二种可能的实现方式, 在第三 个方面的第三种可能的实现方式中, 所述描述信息包括待部署计算节点的标识 信息、 所述待部署计算节点的数量信息或所述待部署计算节点所属的租户标识
信息。
结合第三个方面、 第三个方面的第一种、 第二种或第三种可能的实现方式, 在第三个方面的第四种可能的实现方式中, 所述向控制器发送部署建议请求消
息, 包括: 通过代理向控制器发送部署建议请求消息;
所述接收所述控制器发送的包含所述部署方案的部署建议响应消息, 包括: 通过所述代理接收所述控制器发送的包含所述部署方案的部署建议响应消息。
结合第三个方面的、 第三个方面的第一种、 第二种或第三种可能的实现方 式, 在第三个方面的第四种可能的实现方式中, 所述待部署计算节点包括新增 计算节点或已部署计算节点。
第四个方面, 本发明实施例提供一种计算节点部署方法, 包括:
向控制器发送部署信息请求消息, 所述部署信息请求消息携带待部署计算 节点的描述信息, 以使得所述控制器获取链路信息和 /或流量信息, 其中, 所述 链路信息包括所述处理节点所管理的各数据中心之间的链路信息, 和 /或, 所述 处理节点所管理的各数据中心与不属于所述处理节点所管理的各数据中心之间 的链路信息, 所述流量信息为所述待部署计算节点与所述待部署计算节点相关 的计算节点之间的流量信息, 其中, 与所述待部署计算节点相关的计算节点为 与所述待部署计算节点有通信需求的计算节点;
接收所述控制器发送的包含所述链路信息和 /或流量信息的部署信息响应消 息;
根据所述链路信息和 /或流量信息确定部署方案。
在第四个方面的第一种可能的实现方式中, 所述描述信息包括待部署计算 节点的标识信息、 所述待部署计算节点的数量信息或所述待部署计算节点所属 的租户标识信息。
结合第四个方面或第四个方面的第一种可能的实现方式, 在第四个方面的 第二种可能的实现方式中, 所述向控制器发送部署信息请求消息, 包括: 通过代理向控制器发送部署信息请求消息;
所述接收所述控制器发送的包含所述链路信息和 /或流量信息的响应消息, 包括; 通过所述代理接收所述控制器发送的包含所述链路信息和 /或流量信息的 部署信息响应消息。
结合第四个方面、 第四个方面的第一种或第二种可能的实现方式, 在第四 个方面的第三种可能的实现方式中, 所述待部署计算节点包括新增计算节点或 已部署计算节点。
第五个方面, 本发明实施例提供一种控制器, 包括:
接收模块, 用于接收处理节点发送的部署建议请求消息, 所述部署建议请 求消息携带待部署计算节点的描述信息;
确定模块, 用于根据链路信息和 /或流量信息, 确定部署方案, 其中, 所述 链路信息包括所述处理节点所管理的各数据中心之间的链路信息, 和 /或, 所述 处理节点所管理的各数据中心与不属于所述处理节点所管理的各数据中心之间 的链路信息; 所述流量信息为所述待部署计算节点与所述待部署计算节点相关 的计算节点之间的流量信息, 其中, 与所述待部署计算节点相关的计算节点为 与所述待部署计算节点有通信需求的计算节点;
发送模块, 用于向所述处理节点发送包含所述部署方案的部署建议响应消 息。
在第五个方面的第一种可能的实现方式中, 所述接收模块接收到的部署建 议请求消息中还携带所述待部署计算节点的部署要求信息;
所述确定模块还用于根据所述链路信息和 /或流量信息, 确定满足所述部署 要求信息的部署方案。
结合第五个方面的第一种可能的实现方式, 在第五个方面的第二种可能的 实现方式中, 所述部署要求信息包括:
所述待部署计算节点之间的相对位置信息、 所述待部署计算节点与已部署 计算节点之间的相对位置信息、 所述待部署计算节点之间的通信质量要求信息、 所述待部署计算节点与已部署计算节点之间的通信质量要求信息、 所述待部署 计算节点跨数据中心通信总流量要求信息中的一种信息或其组合。
结合第五个方面、 第五个方面的第一种或第二种可能的实现方式, 在第五 个方面的第三种可能的实现方式中, 所述描述信息包括待部署计算节点的标识 信息、 所述待部署计算节点的数量信息或所述待部署计算节点所属的租户标识
信息。
结合第五个方面、 第五个方面的第一种、 第二种或第三种可能的实现方式, 在第五个方面的第四种可能的实现方式中, 所述接收模块还用于通过代理接收 所述处理节点发送的部署建议请求消息;
所述发送模块还用于通过所述代理向所述处理节点发送包含所述部署方案 的部署建议响应消息。
结合第五个方面、 第五个方面的第一种、 第二种、 第三种或第四可能的实
现方式, 在第五个方面的第五种可能的实现方式中, 所述待部署计算节点包括 新增计算节点或已部署计算节点。
第六个方面, 本发明实施例提供一种控制器, 包括:
接收模块, 用于接收处理节点发送的部署信息请求消息, 所述部署信息请 求消息携带待部署计算节点的描述信息;
获取模块, 用于获取链路信息和 /或流量信息, 其中, 所述链路信息包括所 述处理节点所管理的各数据中心之间的链路信息, 和 /或, 所述处理节点所管理 的各数据中心与不属于所述处理节点所管理的各数据中心之间的链路信息, 所 述流量信息为所述待部署计算节点与所述待部署计算节点相关的计算节点之间 的流量信息, 其中, 所述待部署计算节点相关的计算节点为与待部署计算节点 有通信需求的计算节点;
发送模块, 用于向所述处理节点发送包含所述链路信息和 /或流量信息的部 署信息响应消息。
在第六个方面的第一种可能的实现方式中, 所述描述信息包括待部署计算 节点的标识信息、 所述待部署计算节点的数量信息或所述待部署计算节点所属 的租户标识信息。
结合第六个方面或第六个方面的第一种可能的实现方式, 在第六个方面的 第二种可能的实现方式中, 所述接收模块还用于通过代理接收所述处理节点发 送的部署信息请求消息, 所述部署信息请求消息携带待部署计算节点的描述信 息;
所述发送模块还用于通过所述代理向所述处理节点发送包含所述链路信息 和 /或流量信息的部署信息响应消息, 以使所述处理节点根据所述链路信息和 /或 流量信息确定部署方案。
结合第六个方面、 第六个方面的第一种或第二种可能的实现方式, 在第六 个方面的第三种可能的实现方式中, 所述待部署计算节点包括新增计算节点或 已部署计算节点。
第七个方面, 本发明实施例提供一种处理节点, 包括:
发送模块, 用于向控制器发送部署建议请求消息, 所述部署建议请求消息 携带待部署计算节点的描述信息, 以使得所述控制器根据链路信息和 /或流量信 息, 确定部署方案, 其中, 所述链路信息包括所述处理节点所管理的各数据中
心之间的链路信息, 和 /或, 所述处理节点所管理的各数据中心与不属于所述处 理节点所管理的各数据中心之间的链路信息; 所述流量信息为所述待部署计算 节点与所述待部署计算节点相关的计算节点之间的流量信息, 其中, 与所述待 部署计算节点相关的计算节点为与所述待部署计算节点有通信需求的计算节 点;
接收模块, 用于接收所述控制器发送的包含所述部署方案的部署建议响应 消息。
在第七个方面的第一种可能的实现方式中, 所述发送模块还用于向所述控 制器发送携带所述待部署计算节点的部署要求信息的所述部署建议请求消息, 以使得所述控制器根据链路信息和 /或流量信息, 确定满足所述部署要求信息的 部署方案。
结合第七个方面的第一种可能的实现方式, 在第七个方面的第二种可能的 实现方式中, 所述部署要求信息包括:
所述待部署计算节点之间的相对位置信息、 所述待部署计算节点与已部署 计算节点之间的相对位置信息、 所述待部署计算节点之间的通信质量要求信息、 所述待部署计算节点与已部署计算节点之间的通信质量要求信息、 所述待部署 计算节点跨数据中心通信总流量要求信息中的一种信息或其组合。
结合第七个方面、 第七个方面的第一种或第二种可能的实现方式, 在第七 个方面的第三种可能的实现方式中, 所述描述信息包括待部署计算节点的标识 信息、 所述待部署计算节点的数量信息或所述待部署计算节点所属的租户标识
信息。
结合第七个方面、 第七个方面的第一种、 第二种或第三种可能的实现方式, 在第七个方面的第四种可能的实现方式中, 所述发送模块还用于通过代理向控 制器发送部署建议请求消息;
所述接收模块还用于通过所述代理接收所述控制器发送的包含所述部署方 案的部署建议响应消息。
结合第七个方面、 第七个方面的第一种、 第二种、 第三种或第四种可能的 实现方式, 在第七个方面的第五种可能的实现方式中, 所述待部署计算节点包 括新增计算节点或已部署计算节点。
第八个方面, 本发明实施例一种处理节点, 包括:
发送模块, 用于向控制器发送部署信息请求消息, 所述部署信息请求消息 携带待部署计算节点的描述信息, 以使得所述控制器获取链路信息和 /或流量信 息, 其中, 所述链路信息包括所述处理节点所管理的各数据中心之间的链路信 息, 和 /或, 所述处理节点所管理的各数据中心与不属于所述处理节点所管理的 各数据中心之间的链路信息, 所述流量信息为所述待部署计算节点与所述待部 署计算节点相关的计算节点之间的流量信息, 其中, 与所述待部署计算节点相 关的计算节点为与所述待部署计算节点有通信需求的计算节点;
接收模块, 用于接收所述控制器发送的包含所述链路信息和 /或流量信息的 部署信息响应消息;
确定模块, 用于根据所述链路信息和 /或流量信息确定部署方案。
在第八个方面的第一种可能的实现方式中, 所述描述信息包括待部署计算 节点的节点标识信息、 所述待部署计算节点的数量信息、 所述待部署计算节点 所属的租户标识信息, 或者, 所述待部署计算节点所属的特征信息。
结合第八个方面或第八个方面的第一种可能的实现方式, 在第八个方面的 第二种可能的实现方式中, 所述发送模块还用于通过代理向控制器发送部署信 息请求消息;
所述接收模块还用于通过所述代理接收所述控制器发送的包含所述链路信 息和 /或流量信息的部署信息响应消息。
结合第八个方面、 第八个方面的第一种或第二种可能的实现方式, 在第八 个方面的第三种可能的实现方式中, 所述待部署计算节点包括新增计算节点或 已部署计算节点。
第九个方面, 本发明实施例提供一种业务系统, 包括如上第五个方面所述 的控制器及如上第七个方面所述的处理节点。
第十个方面, 本发明实施例提供一种业务系统, 包括如上第六个方面所述 的控制器及如上第八个方面所述的处理节点。
本发明实施例提供的计算节点管理方法、 处理节点、 控制器及系统, 控制 器根据业务系统中与待部署计算节点可部署的各数据中心之间的链路信息或待 部署计算节点可部署的数据中心与待部署计算节点不可部署的各数据中心之间 的链路信息、 与待部署计算节点相关的各计算节点间的流量信息等, 对于新的 计算节点, 可以确定出较佳的部署位置的部署方案, 使得处理节点可根据部署
方案对新的计算节点进行部署; 对于业务系统中已部署的计算节点, 可以给出 重新部署位置的部署方案, 使得处理节点可根据部署方案对该已部署计算节点 进行位置调整, 将数据中心之间的通信流量转变为数据中心内的流量, 从而提 高计算节点之间的通信质量、 降低数据中心之间的通信流量。 附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案, 下面将对实施 例或现有技术描述中所需要使用的附图作一简单地介绍, 显而易见地, 下面描 述中的附图是本发明的一些实施例, 对于本领域普通技术人员来讲, 在不付出 创造性劳动性的前提下, 还可以根据这些附图获得其他的附图。
图 1为本发明计算节点部署方法所适用的第一业务系统架构示意图; 图 2为本发明计算节点部署方法实施例一的流程图;
图 3为本发明计算节点部署方法实施例二的流程图;
图 4为本发明计算节点部署方法实施例三的流程图;
图 5为本发明计算节点部署方法实施例四的流程图;
图 6为本发明计算节点部署方法所适用的第二业务系统架构示意图; 图 7为本发明控制器实施例一的结构示意图;
图 8为本发明控制器实施例二的结构示意图;
图 9为本发明处理节点实施例一的结构示意图;
图 10为本发明处理节点实施例二的结构示意图;
图 11为本发明控制器实施例三的结构示意图;
图 12为本发明控制器实施例四的结构示意图;
图 13为本发明处理节点实施例三的结构示意图;
图 14为本发明处理节点实施例四的结构示意图。 具体实施方式
为使本发明实施例的目的、 技术方案和优点更加清楚, 下面将结合本发明 实施例中的附图, 对本发明实施例中的技术方案进行清楚、 完整地描述, 显然, 所描述的实施例是本发明一部分实施例, 而不是全部的实施例。 基于本发明中 的实施例, 本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其
他实施例, 都属于本发明保护的范围。
图 1为本发明计算节点部署方法所适用的第一业务系统架构示意图。 请参照 图 1, 本实施例中, 处理节点管理数据中心 DC— A、 DC— B、 DC— C, 如图中黑色 实线所示, 数据中心 DC— D不属于处理节点管理, 且数据中心 DC— A、 DC— B、 DC— C之间存在通信, 数据中心 DC— B与 DC— D之间存在通信, 各个数据中心上可 以部署、 或者已经部署了计算节点 (图中未示出) 。 其中, 计算节点指各种具 有计算能力的、 可以进行数据处理的功能模块, 如虚拟机(Virtual Machine, 以 下简称 VM) 、 计算容器 (Linux Container, 以下简称 LXC) 和物理服务器等通 过软件或硬件实现的功能模块, 其中, 计算容器例如是在操作系统层次上为进 程提供虚拟执行环境的、具有特定比例的 CPU分配时间、输入输出(Input Output, 以下简称 10) 、 限制可以使用的内存大小等的容器
处理节点可以通过硬件或软件实现, 例如, 其可以包括硬件实现的计算节 点管理中心或者可以为软件实现的第三方应用程序、 租户等, 本发明并不以此 为限制。 当处理节点具体为计算节点管理中心时, 还负责业务系统中各个计算 节点的管理, 如图中虚线所示, 其可对各个数据中心的计算节点进行统一管理, 或者根据需求管理部分计算节点。 另外, 也可以将处理节点针对性的划分成子 管理中心, 例如划分为虚拟机管理子中心、 容器管理子中心等, 虚拟机管理子 中心负责虚拟机的管理, 如虚拟机的创建、 启动、 删除、 冻结和迁移等; 容器 管理子中心负责计算容器的管理, 如计算容器的创建、 启动、 删除、 冻结和迁 移等。 具体的, 处理节点通过和各数据中心的服务器进行交互来完成计算节点 的管理。
如图 1中点划线所示, 控制器与处理节点可以基于超文本传输协议 (Hyper Text Transport Protocol,以下简称 HTTP)或是传输控制协议(Transmission Control Protocol, 以下简称 TCP) 进行通信; 或者, 也可以基于用户数据报协议 (User Datagram Protocol, 以下简称 UDP) 进行通信。 控制器通过与处理节点的连接, 接收处理节点发送的部署建议请求消息, 根据链路信息和 /或流量信息确定出待 部署计算节点的部署方案, 将部署方案携带在部署建议响应消息中发送给处理 节点; 或者, 控制器通过与处理节点的连接, 接收处理节点发送的部署信息请 求消息, 获取链路信息和 /或流量信息, 并将获取到的链路信息和 /或流量信息携 带在部署信息响应消息中发送给处理节点, 使得处理节点确定出待部署计算节
点的部署方案。 其中, 链路信息包括: 处理节点所管理的各数据中心之间的链 路信息, 和 /或, 处理节点所管理的各数据中心与不属于该处理节点所管理的各 数据中心之间的链路信息; 流量信息为待部署计算节点与待部署计算节点相关 的计算节点之间的流量信息, 其中, 与待部署计算节点相关的计算节点为与待 部署计算节点有通信需求的计算节点。 请参照图 1, 处理节点所管理的数据中心 为 DC— A、 DC— B、 DC— C, 则控制器根据 DC— A、 DC— B、 DC— C之间的链路信息 确定部署方案; 或者还可以根据 DC— A、 DC— B、 DC— C与 DC— D之间的链路信息 确定部署方案。 另外, 除了考虑数据中心间的链路信息外, 还可以考虑待部署 计算节点间的流量信息从而确定部署方案。 例如, 对于新增计算节点, 可以根 据链路信息、 流量信息或部署要求信息等确定出较佳的部署位置; 对于业务系 统中已部署计算节点, 根据计算节点间的流量信息、 数据中心间的链路信息或 部署要求信息等给出重新部署的位置的建议, 或者, 给出已部署计算节点的流 量信息的流量列表或流量矩阵。
其中, 链路信息指业务系统中各数据中心之间的链路状态信息, 包括链路 的总带宽、 空闲带宽、 时延、 抖动、 丢包率等。 一般来说, 数据中心间的链路 是双向的, 双向的状态可能不一样。 另外, 链路信息不是一成不变的, 而是随 着时间变化的, 可以通过控制器或其他应用对链路信息进行实时的、 事件触发 性的或是周期性的采集, 将采集到的链路信息存放到数据库中形成链路信息数 据库。 表 1为对图 1中数据中心 DC— A、 DC— B、 DC— C之间进行链路信息采集而得 出的链路信息表。以第一行为例, L—AB表示 DC— A向 DC— B发送数据时的总带宽、 剩余带宽、 时延、 抖动、 丢包率等信息, 由表 1可知, L— AB与 L— BA是不一样 的。
计算节点的流量信息是指计算节点之间通信的流量状态信息, 包括某一时 间段的总的通信流量、 平均带宽、 突发时间长度、 突发带宽等信息。 一般来说,
计算节点间的流量信息不是一成不变的, 而是随着时间的变化而变化, 可以通 过控制器或其他应用对流量信息进行实时的、 事件触发性的或是周期性的采集, 将采集到的流量信息存放到数据库中形成流量信息数据库。 以计算节点为虚拟 机为例, 请参照图 1, 假设数据中心 DC— A中已部署两个虚拟机 VM—1和 VM— 2, DC— B中已部署两个虚拟机 VM— 3和 VM— 4, VM— 1、 VM— 2、 VM— 3、 VM— 4相互 之间存在通信, 某一时间段对 VM— 1、 VM— 2、 VM— 3、 VM— 4之间进行流量信息 采集即可得到各计算节点之间通信的流量状态信息。表 2为 VM—1、VM— 2、 VM— 3、 VM— 4之间的流量信息表。 其中, 以 T12为例, T12可以表示 VM— 1与 VM— 2之间 的总的通信流量、 平均带宽、 突发时间 、 突发带宽等信息。
需要说明的是, 图 1中是以处理节点和控制器独立部署为例对本发明进行详 细阐述。 然而, 本发明并不以此为限制, 在实际的应用场景中, 通过软件或硬 件方式实现的将处理节点和控制器也可以部署在业务系统中的同一个服务器 上, 或者, 分别部署在不同的服务器上。
图 2为本发明计算节点部署方法实施例一的流程图。 本实施例可适用于业务 系统中已部署计算节点不满足业务需求需要部署新的计算节点的场景, 或者业 务系统中已部署计算节点需要调整部署位置的场景, 本实施例以控制器为执行 主体对本发明实施例进行详细阐述。 具体的, 本实施例包括以下歩骤:
101、 接收处理节点发送的部署建议请求消息, 部署建议请求消息携带待部 署计算节点的描述信息。
业务运行初期, 内容提供商 (Content Provider, 以下简称 CP) 需要的计算 节点的数量不多, 但是, 随着业务估摸的不断发展, CP需要的计算节点的数量 也相应的越来越多, 需要在业务系统中的某些数据中心上部署新的计算节点; 或者, 由于处理同一个业务的、 已部署的多个计算节点被部署在不同的数据中 心中, 根据需求, 需要对该些已部署的计算节点进行位置调整。 此时, 处理节
点向控制器发送部署建议请求消息, 控制器接收该携带待部署计算节点的描述
信息。
其中, 描述信息包括待部署计算节点的标识信息、 待部署计算节点的数量、 待部署计算节点所属的租户标识等, 其中, 租户标识例如为租户的名称或与名 称对应的身份识别号码(IDentity,以下简称 ID),如腾讯、百度、 3456123、 7890123 ( 3456123、 7890123是腾讯和百度在控制器下对应的数字标识) 等, 使用租户 标识作为待部署计算节点的描述信息时, 表示该租户下的所有的计算节点; 一 个或多个计算节点的标识, 如 {VM— a}表示一个虚拟机 a, { VM a , VM b , VM— c}表示一组虚拟机 、 b、 c; 另外, 也可以将某一类能够标识待部署计算节 点的特征信息作为待部署计算节点的标识信息,如互联网协议(Internet Protocol, 以下简称 IP) 地址信息、 网段地址信息或介质访问控制 (media access control, 以下检测 MAC) 地址信息、 IP地址范围信息等等, 其中, 192.168.3.X/24表示子 网标识 192.168.3.X下的、 互联网协议(Internet Protocol, 以下简称 IP) 地址前 24 位为 192.168.1的所有计算节点。
102、 根据链路信息和 /或流量信息, 确定部署方案, 其中, 链路信息包括处 理节点所管理的各数据中心之间的链路信息, 和 /或, 处理节点所管理的各数据 中心与不属于处理节点所管理的各数据中心之间的链路信息; 流量信息为待部 署计算节点与待部署计算节点相关的计算节点之间的流量信息。 其中, 待部署 计算节点相关的计算节点为与待部署计算节点有通信需求的计算节点。
在接收到处理节点发送的部署建议请求消息后, 控制器根据处理节点所管 理的各数据中心之间的链路信息、 处理节点所管理的各数据中心与不属于处理 节点所管理的各数据中心之间的链路信息、 与待部署计算节点相关的计算节点 之间的流量信息中的至少一种信息, 确定待部署计算节点的部署方案。 例如, 对于新的计算节点, 可以确定出较佳的部署位置; 对于业务系统中已部署的计 算节点, 根据计算节点间的流量信息、 链路信息等给出重新部署位置的建议。
进一歩的, 部署建议请求消息中还可以携带待部署计算节点的部署要求信 息, 控制器根据链路信息和 /或流量信息, 确定满足部署要求信息的部署方案。 具体的, 若业务系统中的各个数据中心上未部署任何计算节点, 此时可以仅考 虑处理节点所管辖的各数据中心间的链路信息, 根据链路信息确定满足部署要 求信息的部署方案。 部署要求信息可以是对待部署计算节点之间或待部署计算
节点与已部署计算节点之间的部署相对位置、 通信质量要求或待部署计算节点 跨数据中心通信总流量要求的约束条件。
103、 向处理节点发送包含部署方案的部署建议响应消息。
当控制器为待部署计算节点确定出部署方案后向处理节点发送包含部署方 案的部署建议响应消息, 使得处理节点根据部署方案部署待部署计算节点。 例 如, 处理节点可以根据部署建议响应消息, 将待部署计算节点部署在链路剩余 带宽大、 时延小、 丢包率低的数据中心; 或者, 将两个存在大量网络通信的计 算节点部署在同一个数据中心。 部署方案可以是控制器根据链路信息或待部署 计算节点与与其相关的计算节点之间的流量信息、 部署要求信息等信息中的至 少一种信息确定出的; 或者, 处理节点也可以不采纳部署建议请求消息。
本发明实施例提供的计算节点部署方法, 控制器根据业务系统中处理节点 所管理的各数据中心之间的链路信息或处理节点所管理的各数据中心与不属于 处理节点所管理的各数据中心之间的链路信息、 与待部署计算节点相关的各计 算节点间的流量信息等, 对于新的计算节点, 可以确定出较佳的部署位置的建 议; 对于业务系统中已部署的计算节点, 可以给出重新部署位置的建议, 通过 将存在大量网络通信的计算节点部署在同一个数据中心中, 将数据中心之间的 通信流量转变为数据中心内的流量, 从而提高计算节点之间的通信质量、 降低 数据中心之间的通信流量。
图 3为本发明计算节点部署方法实施例二的流程图。 本实施例以控制器为执 行主体对本发明实施例进行详细阐述。 具体的, 本实施例包括以下歩骤:
201、 接收处理节点发送的部署信息请求消息, 部署信息请求消息携带待部 署计算节点的描述信息。
202、 获取链路信息和 /或流量信息, 其中, 链路信息包括处理节点所管理的 各数据中心之间的链路信息, 和 /或, 处理节点所管理的各数据中心与不属于处 理节点所管理的各数据中心之间的链路信息, 流量信息为待部署计算节点与待 部署计算节点相关的计算节点之间的流量信息, 其中, 与待部署计算节点相关 的计算节点为与待部署计算节点有通信需求的计算节点。
控制器接收到处理节点发送的部署信息请求消息, 获取处理节点所管理的 各数据中心之间的链路信息, 和 /或, 待处理节点所管理的各数据中心与不属于 处理节点所管理的各数据中心之间的链路信息, 流量信息为待部署计算节点与
待部署计算节点相关的计算节点之间的流量信息, 例如, 可以给出该已部署的 需要进行位置调整的计算节点的流量列表或流量矩阵。 控制器可以在接收到处 理节点发送的部署信息请求消息后, 立即获取相关的链路信息或流量信息; 或 者, 也可以仅在部署建议请求消息中携带流量请求消息后获取待部署计算节点 的流量信息等, 为满足需求, 可以设计具体的部署信息请求消息的具体格式, 本发明并不以此为限制。
203、 向处理节点发送包含链路信息和 /或流量信息的部署信息响应消息。 本实施例与上述实施例一的差异之处在于, 本实施例中, 控制器并未如实 施例一歩骤 102中确定出部署方案后再发送给处理节点, 而是将获取到的链路信 息和 /或流量信息的响应消息直接发送给处理节点, 由处理节点自行根据该些信 息确定部署方案。 具体的, 处理节点可以根据链路信息、 流量信息或是根据需 求确定出的部署要求信息中的至少一种信息确定出部署方案。
本发明实施例提供的计算节点部署方法, 控制器获取链路信息、 与待部署 计算节点相关的各计算节点间的流量信息等并发送给处理节点, 使得处理节点 对于业务系统中已部署的计算节点, 可以给出重新部署位置的建议, 通过将存 在大量网络通信的计算节点部署在同一个数据中心中, 将数据中心之间的通信 流量转变为数据中心内的流量, 从而提高计算节点之间的通信质量、 降低数据 中心之间的通信流量。
图 4为本发明计算节点部署方法实施例三的流程图。 本实施例可适用于业务 系统中已部署的计算节点不满足业务需求需要部署新的计算节点的场景, 或者 业务系统中已部署的计算节点需要调整部署位置的场景, 本实施例以处理节点 为执行主体对本发明实施例进行详细阐述。 具体的, 本实施例包括以下歩骤:
301、 向控制器发送部署建议请求消息, 部署建议请求消息携带待部署计算 节点的描述信息, 以使得控制器根据链路信息, 和 /或流量信息, 确定部署方案, 其中, 链路信息包括处理节点所管理的各数据中心之间的链路信息, 和 /或, 处 理节点所管理的各数据中心与不属于处理节点所管理的各数据中心之间的链路 信息; 流量信息为待部署计算节点与待部署计算节点相关的计算节点之间的流 量信息, 其中, 与待部署计算节点相关的计算节点为与待部署计算节点有通信 需求的计算节点。
本实施例中关于链路信息和流量信息的描述可参见图 2所示实施例, 在此不
再赘述。
302、 接收控制器发送的包含部署方案的部署建议响应消息。
当控制器接收到处理节点发送的部署建议请求消息并确定部署方案后, 向 处理节点发送包含部署方案的部署建议响应消息, 处理节点接收该部署建议响 应消息。
在接收到控制器发送的部署建议响应消息后, 处理节点可以选择根据该部 署建议响应消息包含的部署方案, 通过与所管辖的数据中心的服务器进行交互 来完成待部署计算节点的部署, 例如, 新的计算节点的创建、 启动、 已部署的 计算节点的删除、 冻结和迁移等; 或者, 处理节点也可以不采纳部署建议响应 消息返回的部署方案。
本发明实施例提供的计算节点部署方法, 对于新的计算节点, 处理节点根 据控制器给出的较佳部署位置的建议, 进行计算节点的创建等, 对于业务系统 中已部署的计算节点, 根据控制器给出的重新部署位置的建议, 进行位置调整, 例如, 冻结某个或某些已部署的计算节点, 在其他的数据中心重新创建这几个 计算节点, 从而将存在大量网络通信的计算节点部署在同一个数据中心中, 将 数据中心之间的通信流量转变为数据中心内的流量, 或者链路质量比较好的多 个数据中心, 从而提高计算节点之间的通信质量、 降低数据中心之间的通信流 图 5为本发明计算节点部署方法实施例四的流程图。 本实施例以处理节点为 执行主体对本发明实施例进行详细阐述。 具体的, 本实施例包括以下歩骤:
401、 向控制器发送部署信息请求消息, 部署信息请求消息携带待部署计算 节点的描述信息, 以使得控制器获取链路信息和 /或流量信息, 其中, 链路信息 包括处理节点所管理的各数据中心之间的链路信息, 和 /或, 处理节点所管理的 各数据中心与不属于处理节点所管理的各数据中心之间的链路信息, 流量信息 为待部署计算节点与待部署计算节点相关的计算节点之间的流量信息, 其中, 与待部署计算节点相关的计算节点为与待部署计算节点有通信需求的计算节 点。
本实施例中关于链路信息和流量信息的描述可参见图 2所示实施例, 在此不 再赘述。
402、 接收控制器发送的包含链路信息和 /或流量信息的部署信息响应消息。
控制器可以在接收到处理节点发送的部署信息请求消息后, 立即获取相关 的链路信息或流量信息; 或者, 也可以仅在部署建议请求消息中携带流量请求 消息后获取待部署计算节点的流量信息等, 为满足需求, 可以设计具体的部署 信息请求消息的具体格式, 本发明并不以此为限制。 处理节点接收控制器发送 的包含链路信息和 /或流量信息的部署信息响应消息。
403、 根据链路信息和 /或流量信息确定部署方案。
处理节点自行根据链路信息和 /或流量信息确定部署方案。 具体的, 处理节 点可以根据链路信息、 流量信息或是根据需求确定出的部署要求信息中的至少 一种信息确定出部署方案。
本发明实施例提供的计算节点部署方法, 处理节点根据业务系统中与待部 署计算节点所在的数据中心相关的各个数据中心之间的链路信息、 与待部署计 算节点相关的各计算节点间的流量信息等, 对于业务系统中已部署的计算节点, 确定出重新部署位置的建议, 进行位置调整, 例如, 冻结某个或某些已部署的 计算节点, 在其他的数据中心重新创建这几个计算节点, 从而将存在大量网络 通信的计算节点部署在同一个数据中心中, 将数据中心之间的通信流量转变为 数据中心内的流量, 或者链路质量比较好的多个数据中心, 从而提高计算节点 之间的通信质量、 降低数据中心之间的通信流量。
上述实施例一、 实施例二、 实施例三及实施例四分别从控制器和处理节点 为执行主体的角度对本发明进行了阐述, 其中, 实施例一与实施例三中, 由控 制器确定出待部署计算节点的部署方案, 实施例二与实施例四中, 处理节点接 收到根据控制器获取并发送的与待部署计算节点相关的链路信息及流量信息 后, 由处理节点确定出待部署计算节点的部署方案。 下面, 通过不同的实施例 来对本发明进行详细阐述。
图 6为本发明计算节点部署方法所适用的第二业务系统架构示意图。 请参照 图 6, 本实施例中, 是以业务系统中, 处理节点管理 3个数据中心 DC— A、 DC— B、 DC— C为例对本发明进行详细阐述的, 每个数据中心的服务器上可以部署、 或者 已经部署了计算节点 (图中未示出) 。 另外, 图中未示出处理节点和控制器。
在本发明计算节点部署方法实施例五中, 处理节点向控制器发送的部署建 议请求消息携带待部署计算节点的描述信息, 例如, 该描述消息中指示需要创 建两个新的计算节点。 表 3为图 6中数据中心 DC A、 DC B、 DC C之间进行链路
链路信息表。
控制器接收到处理节点发送的指示需要创建两个新的计算节点的部署建议 请求消息后, 基于此时链路信息数据库中存储的链路信息, 例如为表 3所示的链 路信息表, 遍历该表, 确定出各个数据中心的出口链路剩余带宽和入口链路剩 余带宽的总和最大的数据中心, 将该确定出的数据中心作为部署新的计算节点 的数据中心。 具体的, 请参照表 3, DC— A的出口链路剩余带宽为 10G (L— AB为 5G, L— AC为 5G) , 入口链路剩余带宽为 10G (L— BA为 5G, L— CA为 5G) , 总 剩余带宽为 20G; DC— B的出口链路剩余带宽为 7G (L— BA为 5G, L— BC为 2G) , 入口链路剩余带宽为 5.2G (L— AB为 5G, L— CB为 200M) , 总剩余带宽为 12.2G; DC— C的出口链路剩余带宽为 5.2G (L— CA为 5G, L— CB为 200M) , 入口链路剩 余带宽为 7G (L— AC为 5G, L— BC为 2G, 总剩余带宽为 7G) 。 由此可得, DC— A 的总剩余带宽最大, 总剩余带宽大, 一般来说, 更能满足计算节点跨数据中心 的通信要求, 提高通信质量。 因此, 控制器建议在 DC— A中部署这 2个计算节点, 即确定在 DC— A能够部署该 2个计算节点的部署方案。
在本发明计算节点部署方法实施例六中, 处理节点向控制器发送的部署建 议请求消息除了携带待部署计算节点的描述信息外, 还携带待部署计算节点的 部署要求信息以限制待部署计算节点之间的相对位置信息, 例如, 该描述消息 中指示需要创建两个新的计算节点, 部署要求信息指示该两个新的计算节点处 于不同的数据中心。 表 4为图 4中数据中心 DC— A、 DC— B、 DC— C之间进行链路信 息采集而得出的第二链路信息表。
L BA 10G 5G 50ms 20ms 20ppm
L CA 1G 200M 50ms 10ms 50ppm
L CB 1G 200M 50ms 40ms lOppm
控制器接收到处理节点发送的指示需要创建两个新的计算节点、 且该两个 计算节点需要处于不同的数据中心的部署建议请求消息后, 基于此时链路信息 数据库中存储的链路信息, 例如为表 4所示的链路信息表, 控制器遍历该表, 确 定出数据中心之间的链路剩余带宽最大的两个数据中心, 将该两个数据中心作 为部署新的计算节点的数据中心。 具体的, 请参照表 4, DC— A与 DC— B之间的链 路剩余带宽为 10G (L— AB为 5G, L— BA为 5G) ; DC— A与 DC— C之间的链路剩余 带宽为 5.2G (L— AC为 5G, L— CA为 200M ) ; DC— B与 DC— C之间的链路剩余带宽 为 2.2G (L— BC为 2G, L— CB为 200M) 。 由此可得, DC— A与 DC— B之间的链路剩 余带宽最大。 因此, 控制器建议在 DC— A与 DC— B中各部署一个计算节点, 即确 定出在 DC— A与 DC— B中各部署一个计算节点的部署方案。 此时, 由于每个计算 节点的部署位置有两个, 因此, 可以根据需求确定某一个计算节点的优先部署 数据中心, 建议在确定出的优先级别比较高的部署数据中心中部署该计算节点, 将剩下的计算节点部署在其他数据中心中; 或者, 若无优先级要求, 则建议将 两个计算节点随机的部署在 DC— A与 DC— B中。
基于上述实施例六, 可选的, 部署要求信息还可以对新的计算节点之间的 通信质量要求进行指示, 例如, 部署建议请求消息中携带待部署计算节点的描 述信息与部署要求信息, 描述消息中指示需要创建两个新的计算节点, 部署要 求信息指示该两个新的计算节点处于不同的数据中心, 且该两个新的计算节点 之间通信的抖动小于 30ms大于 15ms。表 5为图 6中数据中心 DC— A、 DC— B、 DC— C 之间进行链路信息采集而得出的第三链 息表。
30ms大于 15ms的部署建议请求消息后, 基于此时链路信息数据库中存储的链路 信息, 例如为表 5所示的链路信息表, 控制器遍历该表, 确定出数据中心之间通 信的抖动满足需求的两个数据中心, 将该两个数据中心作为部署新的计算节点 的数据中心。具体的,请参照表 5, DC— A与 DC— B之间通信的抖动为 20ms; DC— A 与 DC— C之间通信的抖动为 10ms; DC— B与 DC— C之间通信的抖动为 40ms。 由此 可得, 仅有 DC— A与 DC— B之间通信的抖动满足部署要求信息中的条件。 因此, 控制器建议在 DC— A与 DC— B中各部署一个计算节点, 即确定出在 DC— A与 DC— B 中各部署一个计算节点的部署方案。
基于上述实施例六, 可选的, 部署要求信息还可以对新的计算节点之间、 新的计算节点与已部署计算节点之间的通信质量要求进行指示。 例如, 部署建 议请求消息中携带待部署计算节点的描述信息与部署要求信息, 描述消息中指 示需要创建两个新的计算节点 VM1与 VM2, 部署要求信息指示 VM1与 VM2处于 不同的数据中心, VM1与已部署计算节点 VMX部署在不同的数据中心且 VM1与 已部署计算节点 VMX之间通信的抖动小于 15ms。 请参照图 4, VMX事先被部署 在 DC— A中。 表 6为图 6中数据中心 DC— A、 DC— B、 DC— C之间进行链路信息采集 而得出的第四链路信息表。
控制器接收到处理节点发送的指示需要创建两个新的计算节点 VM1与 VM2, 部署要求信息指示 VM1与 VM2处于不同的数据中心, VM1与已部署计算 节点 VMX部署在不同的数据中心且 VM1与已部署计算节点 VMX之间通信的抖 动小于 15ms的部署建议请求消息后, 基于此时链路信息数据库中存储的链路信 息, 例如为表 6所示的链路信息表, 控制器遍历该表, 确定出数据中心之间通信 的抖动满足需求的两个数据中心, 将该两个数据中心作为部署新的计算节点的
数据中心。 具体的, 请参照表 6, DC— A与 DC— B之间通信的抖动为 20ms; DC— A 与 DC— C之间通信的抖动为 10ms; DC— B与 DC— C之间通信的抖动为 40ms。 由此 可得, 仅有 DC— A与 DC— B之间通信的抖动满足部署要求信息中的条件, VMX事 先被部署在 DC— A中。 因此, 控制器建议在 DC— C中部署计算节点 VM1 , 在 DC— A 或 DC— B中部署计算节点 VM2,即确定出在 DC— C中部署计算节点 VM1,在 DC— A 或 DC— B中部署计算节点 VM2的部署方案。
基于上述实施例六, 可选的, 待部署计算节点为业务系统中以部署计算节 点, 如属于某一子网的至少一个计算节点, 处理节点向控制器发送的部署建议 请求消息携带待部署计算节点的描述信息, 描述信息除了包括待部署计算节点 的数量, 还可以包括子网标识; 部署要求信息还可以为指示该子网的计算节点 跨数据中心的通信流量最低。 例如, 计算节点为虚拟机, 子网标识为 192.168.10.X/24 , 共有 4个计算节点: VM— 1、 VM— 2、 VM— 3与 VM— 4 (图中未示 出) 属于该子网, 当前的部署位置为前两个处于 DC— C, 后两个处于 DC— A。 表 7A为图 6中 VM— 1、 VM— 2、 VM— 3与 VM— 4之间进行通信流量采集而得出的第一 通信流量表。 为了简单说明起见, 这里仅是列出了各个计算节点之间的通信流 量, 以 MB计算, 在其他可行的实施方式中, 该通信流量表也可以包括各计算节 点间的平均带宽、 突发时间长度、 突发带宽等信息。
表 7A
控制器接收到处理节点发送的指示子网标识为 192.168.10.X/24下的各虚拟 机跨数据中心的通信总流量最低、 且每个数据中心部署 2个虚拟机的部署建议请 求消息后, 控制器对该子网下的各虚拟机的部署位置进行划分, 划分的要求是 将该 4个虚拟机划分到属于子网 192.168.10.X/24的数据中心 DC— C与 DC— A, 每个 数据中心划分两个虚拟机, 则可能的划分结果如表 7B所示, 表 7B为表 7A中划分 结果示意表。
表 7B
划分 通信流
DC— C DC— A
序号 量总和
1 VM— 1 VM— 2 VM— 3 VM— 4 240MB
2 VM— 1 VM— 3 VM— 2 VM— 4 240MB
3 VM— 1 VM— 4 VM— 2 VM— 3 160MB
4 VM— 3 VM— 4 VM— 1 VM— 2 240MB
5 VM— 2 VM— 4 VM— 1 VM— 3 240MB
6 VM— 2 VM— 3 VM— 1 VM— 4 160MB 以表 7B中划分序号为 1的一行为例,该行表示控制器将 VM— 1与 VM— 2划分到 DC— C中, 此时 VM— 1与 VM— 2位于同一个数据中心, VM— 1到 VM— 2的通信流量 及 VM— 2到 VM— 1的通信流量相当于 DC— C中的内部流量,因此可以不计入数据中 心之间, gPDC— C与 DC— A之间的通信流量, 如表 7A中左上角阴影部分所示; 同 理, VM— 3与 VM— 4被划分到 DC— A中, 此时 VM— 3与 VM— 4位于同一个数据中, VM— 3到 VM— 4的通信流量及 VM— 4到 VM— 3的通信流量相当于 DC— A的内部流 量, 因此可以不计入数据中心之间, gPDC— C与 DC— A之间的通信流量, 如表 7A 中右下角阴影部分所示。 因此, 若以划分序号为 1所示的划分方法部署 VM— 1、 VM— 2、 VM— 3与 VM— 4, 则根据表 7A, 不考虑左上角与右下角部分的数据中心 内部的流量, 贝 ijDC— A与 DC— C之间的通信流量总和为 240MB。
同理,可得出划分序号为 2~6的其他方式划分下的 DC— A与 DC— C之间的通信 流量总和, 具体的, 如表 7B所示。
控制器遍历表 7B , 发现当 VM— 2与 VM— 3如划分序号 3或划分序号 6所示处于 同一个数据中心的时候, DC— A与 DC— C之间的通信流量总和最低, 为 160MB; 发现当 VM— 2与 VM— 3处于不同的数据中心的时候, DC— A与 DC— C之间的通信流 量总和为 240MB, 比较大。 因此, 控制器建议对 VM— 1、 VM— 2、 VM— 3与 VM— 4 重新进行部署, 将 VM— 2与 VM— 3部署在 DC— A中, 将 VM— 1与 VM— 4部署在 DC— C 中,或者,将 VM— 2与 VM— 3部署在 DC— C 中,将 VM— 1与 VM— 4部署在 DC— A 中。
基于上述实施例六, 可选的, 待部署计算节点为业务系统中已部署计算节 点, 如属于某一租户的至少一个计算节点, 处理节点向控制器发送的部署建议 请求消息携带待部署计算节点的描述信息, 描述信息除了包括待部署计算节点 的数量, 还可以包括租户标识; 部署要求信息还可以为指示该子网的计算节点 跨数据中心的通信流量最低、 且每个数据中心部署 2个计算节点。 例如, 计算节 点为虚拟机, 租户标识为 CP— 1234, 共有 6个计算节点: VM— 1、 VM— 2、 VM— 3、
VM— 4、 VM— 5与 VM— 6 (图中未示出) 属于该租户, 当前的部署位置为 VM— 1、 VM— 2处于 DC— C, VM— 3、 VM— 4处于 DC— A, VM— 5与 VM— 6处于 DC— B。 表 8A 为图 6中 VM— 1、 VM— 2、 VM— 3、 VM— 4、 VM— 5与 VM— 6之间进行通信流量采集 而得出的第二通信流量表。 为了简单说明起见, 这里仅是列出了各个计算节点 之间的通信流量, 以 MB计算, 在其他可行的实施方式中, 该通信流量表也可以 包括各计算节点间的平均带宽、 突发时间长度、 突发带宽等信息。
表 8A
据中心的通信流量最低、 且每个数据中心部署 2个虚拟机的部署建议请求消息 后, 控制器对该租户下的各虚拟机的部署位置进行划分, 划分的要求是将该 6个 虚拟机划分到属于租户 CP— 1234的数据中心 DC— C、 DC— A与 DC— B中, 每个数据 中心划分两个虚拟机, 则可能的划分结果如表 8B所示, 表 8B为表 8A中划分结果 表 8B
11 VM— 3 VM— 4 VM— 1 VM— 5 VM— 2 VM— 6 780MB
12 VM— 3 VM— 4 VM— 1 VM— 6 VM— 2 VM— 5 780MB
13 VM— 3 VM— 5 VM— 1 VM— 2 VM— 4 VM— 6 780MB
14 VM— 3 VM— 6 VM— 1 VM— 2 VM— 4 VM— 5 780MB 以表 8B中划分序号为 3的一行为例,该行表示控制器将 VM—1与 VM— 4划分到 DC— C中, 此时 VM— 1与 VM— 4位于同一个数据中心, VM— 1到 VM— 4的通信流量 及 VM— 4到 VM— 1的通信流量相当于 DC— C中的内部流量,因此可以不计入数据中 心 A之间的通信流量; 同理, VM— 2与 VM— 3被划分到数据 DC— A中, 此时 VM— 2 与 VM— 3的通信流量及 VM— 3到 VM— 2的通信流量相当于 DC— A中的内部流量, 因 此, 可以不计入数据中心之间的通信流量; 同理, VM— 5与 VM— 6被划分到数据 DC— B中, 此时 VM— 5与 VM— 6的通信流量及 VM— 6到 VM— 5的通信流量相当于 DC— B中的内部流量, 因此, 可以不计入数据中心之间的通信流量, 具体的, 该 种划分方式下不计入数据中心之间通信流量的部分如表 8A中阴影部分所示。 因 此,若以划分需要为 3所示的划分方法部署 VM—1、 VM— 2、 VM— 3、 VM— 4、 VM— 5 与 VM— 6, 则根据表 8A, 不考虑表 8A中所示阴影部分, 贝 ijDC— A、 DC— B与 DC— C 之间的通信流量总和为 480MB。
同理, 可得出其他划分序号所示的划分方法下的各数据中心之间的通信流 量总和, 具体的, 如表 8B所示。 需要说明的是, 不同的计算节点个数以及不同 的划分要求下, 划分方式也是不同的, 此处未一一列举。 在实际的场景中, 可 以根据需求选择计算节点个数以及划分方式。
控制器遍历表 8B , 发现当 VM— 1与 VM— 4处于同一个数据中心、 VM— 2与 VM— 3处于同一个数据中心、 VM— 5与 VM— 6处于同一个数据中心时,各个数据中 心之间的通信流量总和为 480MB, 比较小。 因此, 控制器建议对 VM— 1、 VM— 2、 VM— 3、 VM— 4、 VM— 5与 VM— 6重新进行部署, 将 VM— 1与 VM— 4部署在同一个数 据中心中, 将 VM— 2与 VM— 3部署于同一个数据中心, 将 VM— 5与 VM— 6部署于同 一个数据中心。具体的,如表 8B中阴影部分, 即划分序号 3或 10所示的划分方法。
在本发明计算节点部署方法实施例七中, 对于业务系统中已部署计算节点, 处理节点向控制器发送的部署建议请求消息除了携带待部署计算节点的描述信 息外, 还携带待部署计算节点的流量请求信息, 请求获知已部署计算节点的流 量状态。 例如, 以计算节点具体为虚拟机为例, 请参照图 6, 业务系统中共有 9
个虚拟机, VM— 1、 VM— 2、 VM— 3 部署在 DC— A中, VM— 4、 VM— 5、 VM— 6 部 署在 DC— B中, VM— 7、 VM— 8、 VM— 9 部署在 DC— C中, 图中未示出具体的虚拟 机。表 9A为图 4中 VM— 1、 VM— 2、 VM— 3、 VM— 4、 VM— 5、 VM— 6、 VM— 7、 VM— 8、
VM— 9之间进行通信流量采集而得出的第三通信流量表。 为了简单说明起见, 这 里仅是列出了各个计算节点之间的通信流量, 以 MB计算, 在其他可行的实施方 式中, 该通信流量表也可以包括各计算节点间的平均带宽、 突发时间长度、 突 发带宽等信息。
表 9A
若控制器接收到部署建议请求消息携带的是 VMl的流量请求消息, 则遍历 表 9A,提取出于 VM1之间存在通信的其他虚拟机,然后将对应的流量提取出来, 提取出的通信流量信息如表 9B示, 表 9B为表 9A中对 VM进行通信流量信息提取 的结果示意表。
表 9B
以 VM— 2所在的列为例, 表示 VM— 1与 VM— 2进行通信时, VM— 1到 VM— 2的 通信流量, 即出流量为 100MB; VM— 2到 VM— 1的通信流量, 即入流量为 90MB。 控制器在提取出 VMl的通信流量信息, 将该通信流量信息发送给处理节点, 以 使得处理节点可以根据该流量信息自行确定 VM—1、 VM— 2、 VM— 3、 VM— 4、 VM— 5、 VM— 6、 VM— 7、 VM— 8、 VM— 9的部署方案。 具体的, 控制器可以将 VMl 的标识及表 9B所示的内容携带在部署建议响应消息中发送给处理节点, 也可以
算出每个通信对象的通信流量总和后再发送给处理节点, 例如, 以通信对象为
VM— 2为例, 出流量为 100MB, 如流量为 90MB, 则通信流量总和为 190MB, 控 制器向计算节点管理发送的部署建议响应消息为携带 VM—1的标识、 且 VM— 1与 VM— 2的通信流量总和为 190MB。 在实际的场景中, 控制器可以选择将 VM1与 所有的通信对象的通信流量信息或部分通信对象的流量信息发送给处理节点。
上述各个实施例中, 均是处理节点与控制器点对点通信, 处理节点直接向 控制器发送部署建议请求消息, 并接收控制器发送的部署建议响应消息; 控制 器直接接收处理节点发送的部署建议请求消息, 确定出部署方案后直接向处理 节点发送部署建议响应消息。 然而, 在实际的场景中, 处理节点与控制器也可 以间接的通信。 例如, 在本发明计算节点部署方式实施例八中, 控制器可以通 过代理接收处理节点发出的部署建议请求消息, 确定出部署方案后通过代理向 处理节点发送部署建议响应消息; 同理, 处理节点也可以通过代理向控制器发 送部署建议请求消息; 通过代理接收控制器发送的部署建议响应消息。 由于代 理可以实现消息格式的转换和内容的转换及处理节点与控制器间的解耦, 局限 性小, 使用范围更为广泛。 因此, 在实际的应用场景中, 可以根据需求选择处 理节点与控制器直接通信, 或者, 选择处理节点与控制器之间通过代理从而间 接通信。
In addition, in each of the foregoing embodiments the link information consists of the link information between the data centers managed by the processing node; however, the link information may also be the link information between the data centers managed by the processing node and data centers not managed by the processing node. For example, referring to Figure 1, if a new computing node needs to be deployed in DC_B or DC_C and the new computing node to be deployed communicates with a computing node in DC_D, then the link information between DC_B and DC_C, between DC_B and DC_D, and between DC_C and DC_D needs to be considered.
Figure 7 is a schematic structural diagram of Embodiment 1 of the controller of the present invention. The controller 100 provided in this embodiment can implement the steps of the method applied to a controller in any embodiment of the present invention; the specific implementation process is not described again here. Specifically, the controller 100 provided in this embodiment may include:

a receiving module 11, configured to receive a deployment suggestion request message sent by a processing node, the deployment suggestion request message carrying description information of computing nodes to be deployed;

a determining module 12, configured to determine a deployment scheme according to link information and/or traffic information, where the link information includes link information between the data centers managed by the processing node and/or link information between the data centers managed by the processing node and data centers not managed by the processing node, and the traffic information is traffic information between the computing nodes to be deployed and computing nodes related to the computing nodes to be deployed, the computing nodes related to the computing nodes to be deployed being computing nodes that have a communication requirement with the computing nodes to be deployed; and
a sending module 13, configured to send a deployment suggestion response message containing the deployment scheme to the processing node.

According to the link information between the data centers in the service system in which the computing nodes to be deployed can be deployed, or the link information between the data centers in which the computing nodes to be deployed can be deployed and the data centers in which they cannot be deployed, together with the traffic information between the computing nodes related to the computing nodes to be deployed, and so on, the controller provided in this embodiment of the present invention can suggest a preferable deployment location for a new computing node and suggest a redeployment location for a computing node already deployed in the service system. By deploying computing nodes that exchange a large amount of network communication in the same data center, communication traffic between data centers is transformed into traffic within a data center, thereby improving the communication quality between computing nodes and reducing the communication traffic between data centers.
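The three-module split of controller 100 (receive, determine, send) can be pictured with the following sketch. It is illustrative only: the message fields and the injected `choose_plan` strategy are assumptions of this sketch rather than anything fixed by the embodiments.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class SuggestionRequest:
    description: dict                  # e.g. {"count": 2} or {"tenant": "CP_1234"}
    requirements: Optional[dict] = None

@dataclass
class SuggestionResponse:
    plan: Dict[str, str]               # computing node -> data center

class Controller:
    """Mirrors receiving module 11, determining module 12 and sending module 13."""

    def __init__(self, link_info: dict, traffic_info: dict,
                 choose_plan: Callable[..., Dict[str, str]],
                 send: Callable[[SuggestionResponse], None]):
        self.link_info = link_info
        self.traffic_info = traffic_info
        self.choose_plan = choose_plan   # decision logic, e.g. the enumeration sketched earlier
        self.send = send                 # transport back to the processing node

    def on_request(self, request: SuggestionRequest) -> None:
        plan = self.choose_plan(request, self.link_info, self.traffic_info)
        self.send(SuggestionResponse(plan=plan))
```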
Further, the deployment suggestion request message received by the receiving module 11 also carries deployment requirement information of the computing nodes to be deployed, and the determining module 12 is further configured to determine, according to the link information and/or traffic information, a deployment scheme that satisfies the deployment requirement information.
Further, the deployment requirement information includes one or a combination of: relative location information between the computing nodes to be deployed, relative location information between the computing nodes to be deployed and already-deployed computing nodes, communication quality requirement information between the computing nodes to be deployed, communication quality requirement information between the computing nodes to be deployed and already-deployed computing nodes, and requirement information on the total cross-data-center communication traffic of the computing nodes to be deployed.
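As a purely illustrative example (the field names are assumptions, not defined by the embodiments), deployment requirement information of this kind could be carried as a small structured object:

```python
# Hypothetical encoding of deployment requirement information.
deployment_requirements = {
    "relative_location": {"VM_new_1": "same_dc_as:VM_new_2"},  # between nodes to be deployed
    "co_locate_with_existing": {"VM_new_1": "VM_7"},           # relative to deployed nodes
    "communication_quality": {"max_latency_ms": 5},            # quality requirement
    "max_cross_dc_traffic_mb": 200,                            # total cross-DC traffic requirement
}
```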
Further, the description information includes identifier information of the computing nodes to be deployed, information on the number of the computing nodes to be deployed, or identifier information of the tenant to which the computing nodes to be deployed belong.

Further, the receiving module 11 is further configured to receive, through a proxy, the deployment suggestion request message sent by the processing node, and the sending module 13 is further configured to send, through the proxy, the deployment suggestion response message containing the deployment scheme to the processing node.

Further, the computing nodes to be deployed include newly added computing nodes or already-deployed computing nodes.
Figure 8 is a schematic structural diagram of Embodiment 2 of the controller of the present invention. The controller 200 provided in this embodiment can implement the steps of the method applied to a controller provided in any embodiment of the present invention; the specific implementation process is not described again here. Specifically, the controller 200 provided in this embodiment may include:

a receiving module 21, configured to receive a deployment information request message sent by a processing node, the deployment information request message carrying description information of computing nodes to be deployed;

an obtaining module 22, configured to obtain link information and/or traffic information, where the link information includes link information between the data centers managed by the processing node and/or link information between the data centers managed by the processing node and data centers not managed by the processing node, and the traffic information is traffic information between the computing nodes to be deployed and computing nodes related to the computing nodes to be deployed, the computing nodes related to the computing nodes to be deployed being computing nodes that have a communication requirement with the computing nodes to be deployed; and

a sending module 23, configured to send a deployment information response message containing the link information and/or traffic information to the processing node.
By obtaining the link information, the traffic information between the computing nodes related to the computing nodes to be deployed, and so on, and sending them to the processing node, the controller provided in this embodiment of the present invention enables the processing node to give redeployment suggestions for computing nodes already deployed in the service system. By deploying computing nodes that exchange a large amount of network communication in the same data center, communication traffic between data centers is transformed into traffic within a data center, thereby improving the communication quality between computing nodes and reducing the communication traffic between data centers.

Further, the description information includes identifier information of the computing nodes to be deployed, information on the number of the computing nodes to be deployed, or identifier information of the tenant to which the computing nodes to be deployed belong.

Further, the receiving module 21 is further configured to receive, through a proxy, the deployment information request message sent by the processing node, the deployment information request message carrying the description information of the computing nodes to be deployed, and the sending module 23 is further configured to send, through the proxy, the deployment information response message containing the link information and/or traffic information to the processing node, so that the processing node determines a deployment scheme according to the link information and/or traffic information.

Further, the computing nodes to be deployed include newly added computing nodes or already-deployed computing nodes.
Figure 9 is a schematic structural diagram of Embodiment 1 of the processing node of the present invention. The processing node 300 provided in this embodiment can implement the steps of the method applied to a processing node provided in any embodiment of the present invention; the specific implementation process is not described again here. Specifically, the processing node 300 provided in this embodiment may include:

a sending module 31, configured to send a deployment suggestion request message to a controller, the deployment suggestion request message carrying description information of computing nodes to be deployed, so that the controller determines a deployment scheme according to link information and/or traffic information, where the link information includes link information between the data centers managed by the processing node and/or link information between the data centers managed by the processing node and data centers not managed by the processing node, and the traffic information is traffic information between the computing nodes to be deployed and computing nodes related to the computing nodes to be deployed, the computing nodes related to the computing nodes to be deployed being computing nodes that have a communication requirement with the computing nodes to be deployed; and
a receiving module 32, configured to receive a deployment suggestion response message containing the deployment scheme sent by the controller.

With the processing node provided in this embodiment of the present invention, a new computing node is created according to the preferable deployment location suggested by the controller, and a computing node already deployed in the service system has its location adjusted according to the redeployment suggestion given by the controller, for example by freezing one or more already-deployed computing nodes and re-creating them in other data centers. Computing nodes that exchange a large amount of network communication are thereby deployed in the same data center, or in several data centers with good link quality, so that communication traffic between data centers is transformed into traffic within a data center, the communication quality between computing nodes is improved, and the communication traffic between data centers is reduced.
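The location adjustment described here (freeze an already-deployed node and re-create it elsewhere) might look like the following sketch; `freeze` and `recreate` stand for calls into the virtualization platform and are assumptions of this illustration, not APIs named by the embodiments.

```python
def apply_suggestion(current, suggested, freeze, recreate):
    """Move only the computing nodes whose suggested data center differs
    from their current one; nodes already in place are left untouched."""
    for vm, target_dc in suggested.items():
        if current.get(vm) != target_dc:
            freeze(vm)                 # e.g. pause or snapshot the node
            recreate(vm, target_dc)    # re-create it in the suggested data center
            current[vm] = target_dc
    return current
```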
Further, the sending module 31 is further configured to send, to the controller, a deployment suggestion request message carrying deployment requirement information of the computing nodes to be deployed, so that the controller determines, according to the link information and/or traffic information, a deployment scheme that satisfies the deployment requirement information.

Further, the deployment requirement information includes one or a combination of: relative location information between the computing nodes to be deployed, relative location information between the computing nodes to be deployed and already-deployed computing nodes, communication quality requirement information between the computing nodes to be deployed, communication quality requirement information between the computing nodes to be deployed and already-deployed computing nodes, and requirement information on the total cross-data-center communication traffic of the computing nodes to be deployed.

Further, the description information includes identifier information of the computing nodes to be deployed, information on the number of the computing nodes to be deployed, or identifier information of the tenant to which the computing nodes to be deployed belong.

Further, the sending module 31 is further configured to send the deployment suggestion request message to the controller through a proxy, and the receiving module 32 is further configured to receive, through the proxy, the deployment suggestion response message containing the deployment scheme sent by the controller.

Further, the computing nodes to be deployed include newly added computing nodes or already-deployed computing nodes.
Figure 10 is a schematic structural diagram of Embodiment 2 of the processing node of the present invention. The processing node 400 provided in this embodiment can implement the steps of the method applied to a processing node provided in any embodiment of the present invention; the specific implementation process is not described again here. Specifically, the processing node 400 provided in this embodiment may include:

a sending module 41, configured to send a deployment information request message to a controller, the deployment information request message carrying description information of computing nodes to be deployed, so that the controller obtains link information and/or traffic information, where the link information includes link information between the data centers managed by the processing node and/or link information between the data centers managed by the processing node and data centers not managed by the processing node, and the traffic information is traffic information between the computing nodes to be deployed and computing nodes related to the computing nodes to be deployed, the computing nodes related to the computing nodes to be deployed being computing nodes that have a communication requirement with the computing nodes to be deployed;

a receiving module 42, configured to receive a deployment information response message containing the link information and/or traffic information sent by the controller; and

a determining module 43, configured to determine a deployment scheme according to the link information and/or traffic information.
According to the link information between the data centers related to the data centers in which the computing nodes to be deployed are located in the service system, the traffic information between the computing nodes related to the computing nodes to be deployed, and so on, the processing node provided in this embodiment of the present invention determines redeployment suggestions for computing nodes already deployed in the service system and adjusts their locations, for example by freezing one or more already-deployed computing nodes and re-creating them in other data centers. Computing nodes that exchange a large amount of network communication are thereby deployed in the same data center, or in several data centers with good link quality, so that communication traffic between data centers is transformed into traffic within a data center, the communication quality between computing nodes is improved, and the communication traffic between data centers is reduced.

Further, the description information includes identifier information of the computing nodes to be deployed, information on the number of the computing nodes to be deployed, or identifier information of the tenant to which the computing nodes to be deployed belong.

Further, the sending module 41 is further configured to send the deployment information request message to the controller through a proxy, and the receiving module 42 is further configured to receive, through the proxy, the deployment information response message containing the link information and/or traffic information sent by the controller.
Figure 11 is a schematic structural diagram of Embodiment 3 of the controller of the present invention. As shown in Figure 11, the controller 500 provided in this embodiment includes a processor 51 and a memory 52, and may further include a transmitter 53 and a receiver 54, both of which may be connected to the processor 51. The memory 52 stores execution instructions; when the controller 500 runs, the processor 51 communicates with the memory 52 and invokes the execution instructions in the memory 52 to perform the method embodiment shown in Figure 2. The implementation principles and technical effects are similar and are not described again here.

Figure 12 is a schematic structural diagram of Embodiment 4 of the controller of the present invention. As shown in Figure 12, the controller 600 provided in this embodiment includes a processor 61 and a memory 62, and may further include a transmitter 63 and a receiver 64, both of which may be connected to the processor 61. The memory 62 stores execution instructions; when the controller 600 runs, the processor 61 communicates with the memory 62 and invokes the execution instructions in the memory 62 to perform the method embodiment shown in Figure 3. The implementation principles and technical effects are similar and are not described again here.
Figure 13 is a schematic structural diagram of Embodiment 3 of the processing node of the present invention. As shown in Figure 13, the processing node 700 provided in this embodiment includes a processor 71 and a memory 72, and may further include a transmitter 73 and a receiver 74, both of which may be connected to the processor 71. The memory 72 stores execution instructions; when the processing node 700 runs, the processor 71 communicates with the memory 72 and invokes the execution instructions in the memory 72 to perform the method embodiment shown in Figure 4. The implementation principles and technical effects are similar and are not described again here.

Figure 14 is a schematic structural diagram of Embodiment 4 of the processing node of the present invention. As shown in Figure 14, the processing node 800 provided in this embodiment includes a processor 81 and a memory 82, and may further include a transmitter 83 and a receiver 84, both of which may be connected to the processor 81. The memory 82 stores execution instructions; when the processing node 800 runs, the processor 81 communicates with the memory 82 and invokes the execution instructions in the memory 82 to perform the method embodiment shown in Figure 5. The implementation principles and technical effects are similar and are not described again here.
Based on the embodiments of the methods and apparatuses described above, the present invention further provides a service system, which may include the controller shown in Figure 7 or Figure 11 and the processing node shown in Figure 9 or Figure 13, or may include the controller shown in Figure 8 or Figure 12 and the processing node shown in Figure 10 or Figure 14. For the specific working principles, refer to the foregoing method embodiments; they are not described again here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into units is merely a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.

The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected as actually required to achieve the objectives of the solutions of the embodiments.

Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some or all of the technical features thereof, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims
1. A computing node deployment method, comprising:

receiving a deployment suggestion request message sent by a processing node, the deployment suggestion request message carrying description information of computing nodes to be deployed;

determining a deployment scheme according to link information and/or traffic information, wherein the link information comprises link information between the data centers managed by the processing node and/or link information between the data centers managed by the processing node and data centers not managed by the processing node, and the traffic information is traffic information between the computing nodes to be deployed and computing nodes related to the computing nodes to be deployed, the computing nodes related to the computing nodes to be deployed being computing nodes that have a communication requirement with the computing nodes to be deployed; and

sending a deployment suggestion response message containing the deployment scheme to the processing node.

2. The method according to claim 1, wherein the deployment suggestion request message further carries deployment requirement information of the computing nodes to be deployed; and

the determining a deployment scheme according to link information and/or traffic information comprises:

determining, according to the link information and/or traffic information, a deployment scheme that satisfies the deployment requirement information.

3. The method according to claim 2, wherein the deployment requirement information comprises one or a combination of: relative location information between the computing nodes to be deployed, relative location information between the computing nodes to be deployed and already-deployed computing nodes, communication quality requirement information between the computing nodes to be deployed, communication quality requirement information between the computing nodes to be deployed and already-deployed computing nodes, and requirement information on the total cross-data-center communication traffic of the computing nodes to be deployed.

4. The method according to any one of claims 1 to 3, wherein the description information comprises identifier information of the computing nodes to be deployed, information on the number of the computing nodes to be deployed, or identifier information of the tenant to which the computing nodes to be deployed belong.

5. The method according to any one of claims 1 to 4, wherein the receiving a deployment suggestion request message sent by a processing node comprises:

receiving, through a proxy, the deployment suggestion request message sent by the processing node; and

the sending a deployment suggestion response message containing the deployment scheme to the processing node, so that the processing node deploys the computing nodes to be deployed according to the deployment scheme, comprises:

sending, through the proxy, the deployment suggestion response message containing the deployment scheme to the processing node, so that the processing node deploys the computing nodes to be deployed according to the deployment scheme.

6. The method according to any one of claims 1 to 5, wherein the computing nodes to be deployed comprise newly added computing nodes or already-deployed computing nodes.
7. A computing node deployment method, comprising:

receiving a deployment information request message sent by a processing node, the deployment information request message carrying description information of computing nodes to be deployed;

obtaining link information and/or traffic information, wherein the link information comprises link information between the data centers managed by the processing node and/or link information between the data centers managed by the processing node and data centers not managed by the processing node, and the traffic information is traffic information between the computing nodes to be deployed and computing nodes related to the computing nodes to be deployed, the computing nodes related to the computing nodes to be deployed being computing nodes that have a communication requirement with the computing nodes to be deployed; and

sending a deployment information response message containing the link information and/or traffic information to the processing node, so that the processing node determines a deployment scheme according to the link information and/or traffic information.

8. The method according to claim 7, wherein the description information comprises identifier information of the computing nodes to be deployed, information on the number of the computing nodes to be deployed, or identifier information of the tenant to which the computing nodes to be deployed belong.

9. The method according to claim 7 or 8, wherein the receiving a deployment information request message sent by a processing node, the deployment information request message carrying description information of computing nodes to be deployed, comprises:

receiving, through a proxy, the deployment information request message sent by the processing node, the deployment information request message carrying the description information of the computing nodes to be deployed; and

the sending a deployment information response message containing the link information and/or traffic information to the processing node, so that the processing node determines a deployment scheme according to the link information and/or traffic information, comprises:

sending, through the proxy, the deployment information response message containing the link information and/or traffic information to the processing node, so that the processing node determines the deployment scheme according to the link information and/or traffic information.

10. The method according to any one of claims 7 to 9, wherein the computing nodes to be deployed comprise newly added computing nodes or already-deployed computing nodes.
11. A computing node deployment method, comprising:

sending a deployment suggestion request message to a controller, the deployment suggestion request message carrying description information of computing nodes to be deployed, so that the controller determines a deployment scheme according to link information and/or traffic information, wherein the link information comprises link information between the data centers managed by the processing node and/or link information between the data centers managed by the processing node and data centers not managed by the processing node, and the traffic information is traffic information between the computing nodes to be deployed and computing nodes related to the computing nodes to be deployed, the computing nodes related to the computing nodes to be deployed being computing nodes that have a communication requirement with the computing nodes to be deployed; and

receiving a deployment suggestion response message containing the deployment scheme sent by the controller.

12. The method according to claim 11, wherein the deployment suggestion request message sent to the controller further carries deployment requirement information of the computing nodes to be deployed, so that the controller determines, according to the link information and/or traffic information, a deployment scheme that satisfies the deployment requirement information.

13. The method according to claim 12, wherein the deployment requirement information comprises one or a combination of: relative location information between the computing nodes to be deployed, relative location information between the computing nodes to be deployed and already-deployed computing nodes, communication quality requirement information between the computing nodes to be deployed, communication quality requirement information between the computing nodes to be deployed and already-deployed computing nodes, and requirement information on the total cross-data-center communication traffic of the computing nodes to be deployed.

14. The method according to any one of claims 11 to 13, wherein the description information comprises identifier information of the computing nodes to be deployed, information on the number of the computing nodes to be deployed, or identifier information of the tenant to which the computing nodes to be deployed belong.

15. The method according to any one of claims 11 to 14, wherein the sending a deployment suggestion request message to a controller comprises:

sending the deployment suggestion request message to the controller through a proxy; and

the receiving a deployment suggestion response message containing the deployment scheme sent by the controller comprises:

receiving, through the proxy, the deployment suggestion response message containing the deployment scheme sent by the controller.

16. The method according to any one of claims 11 to 15, wherein the computing nodes to be deployed comprise newly added computing nodes or already-deployed computing nodes.
17. A computing node deployment method, comprising:

sending a deployment information request message to a controller, the deployment information request message carrying description information of computing nodes to be deployed, so that the controller obtains link information and/or traffic information, wherein the link information comprises link information between the data centers managed by the processing node and/or link information between the data centers managed by the processing node and data centers not managed by the processing node, and the traffic information is traffic information between the computing nodes to be deployed and computing nodes related to the computing nodes to be deployed, the computing nodes related to the computing nodes to be deployed being computing nodes that have a communication requirement with the computing nodes to be deployed;

receiving a deployment information response message containing the link information and/or traffic information sent by the controller; and

determining a deployment scheme according to the link information and/or traffic information.

18. The method according to claim 17, wherein the description information comprises identifier information of the computing nodes to be deployed, information on the number of the computing nodes to be deployed, or identifier information of the tenant to which the computing nodes to be deployed belong.

19. The method according to claim 17 or 18, wherein the sending a deployment information request message to a controller comprises:

sending the deployment information request message to the controller through a proxy; and

the receiving a response message containing the link information and/or traffic information sent by the controller comprises:

receiving, through the proxy, the response message containing the link information and/or traffic information sent by the controller.

20. The method according to any one of claims 17 to 19, wherein the computing nodes to be deployed comprise newly added computing nodes or already-deployed computing nodes.
21. A controller, comprising:

a receiving module, configured to receive a deployment suggestion request message sent by a processing node, the deployment suggestion request message carrying description information of computing nodes to be deployed;

a determining module, configured to determine a deployment scheme according to link information and/or traffic information, wherein the link information comprises link information between the data centers managed by the processing node and/or link information between the data centers managed by the processing node and data centers not managed by the processing node, and the traffic information is traffic information between the computing nodes to be deployed and computing nodes related to the computing nodes to be deployed, the computing nodes related to the computing nodes to be deployed being computing nodes that have a communication requirement with the computing nodes to be deployed; and

a sending module, configured to send a deployment suggestion response message containing the deployment scheme to the processing node.

22. The controller according to claim 21, wherein the deployment suggestion request message received by the receiving module further carries deployment requirement information of the computing nodes to be deployed; and

the determining module is further configured to determine, according to the link information and/or traffic information, a deployment scheme that satisfies the deployment requirement information.

23. The controller according to claim 22, wherein the deployment requirement information comprises one or a combination of: relative location information between the computing nodes to be deployed, relative location information between the computing nodes to be deployed and already-deployed computing nodes, communication quality requirement information between the computing nodes to be deployed, communication quality requirement information between the computing nodes to be deployed and already-deployed computing nodes, and requirement information on the total cross-data-center communication traffic of the computing nodes to be deployed.

24. The controller according to any one of claims 21 to 23, wherein the description information comprises identifier information of the computing nodes to be deployed, information on the number of the computing nodes to be deployed, or identifier information of the tenant to which the computing nodes to be deployed belong.

25. The controller according to any one of claims 21 to 24, wherein the receiving module is further configured to receive, through a proxy, the deployment suggestion request message sent by the processing node; and

the sending module is further configured to send, through the proxy, the deployment suggestion response message containing the deployment scheme to the processing node.

26. The controller according to any one of claims 21 to 25, wherein the computing nodes to be deployed comprise newly added computing nodes or already-deployed computing nodes.
27. A controller, comprising:

a receiving module, configured to receive a deployment information request message sent by a processing node, the deployment information request message carrying description information of computing nodes to be deployed;

an obtaining module, configured to obtain link information and/or traffic information, wherein the link information comprises link information between the data centers managed by the processing node and/or link information between the data centers managed by the processing node and data centers not managed by the processing node, and the traffic information is traffic information between the computing nodes to be deployed and computing nodes related to the computing nodes to be deployed, the computing nodes related to the computing nodes to be deployed being computing nodes that have a communication requirement with the computing nodes to be deployed; and

a sending module, configured to send a deployment information response message containing the link information and/or traffic information to the processing node.

28. The controller according to claim 27, wherein the description information comprises identifier information of the computing nodes to be deployed, information on the number of the computing nodes to be deployed, or identifier information of the tenant to which the computing nodes to be deployed belong.

29. The controller according to claim 27 or 28, wherein the receiving module is further configured to receive, through a proxy, the deployment information request message sent by the processing node, the deployment information request message carrying the description information of the computing nodes to be deployed; and

the sending module is further configured to send, through the proxy, the deployment information response message containing the link information and/or traffic information to the processing node, so that the processing node determines a deployment scheme according to the link information and/or traffic information.

30. The controller according to any one of claims 27 to 29, wherein the computing nodes to be deployed comprise newly added computing nodes or already-deployed computing nodes.
31. A processing node, comprising:

a sending module, configured to send a deployment suggestion request message to a controller, the deployment suggestion request message carrying description information of computing nodes to be deployed, so that the controller determines a deployment scheme according to link information and/or traffic information, wherein the link information comprises link information between the data centers managed by the processing node and/or link information between the data centers managed by the processing node and data centers not managed by the processing node, and the traffic information is traffic information between the computing nodes to be deployed and computing nodes related to the computing nodes to be deployed, the computing nodes related to the computing nodes to be deployed being computing nodes that have a communication requirement with the computing nodes to be deployed; and

a receiving module, configured to receive a deployment suggestion response message containing the deployment scheme sent by the controller.

32. The processing node according to claim 31, wherein the sending module is further configured to send, to the controller, the deployment suggestion request message carrying deployment requirement information of the computing nodes to be deployed, so that the controller determines, according to the link information and/or traffic information, a deployment scheme that satisfies the deployment requirement information.

33. The processing node according to claim 32, wherein the deployment requirement information comprises one or a combination of: relative location information between the computing nodes to be deployed, relative location information between the computing nodes to be deployed and already-deployed computing nodes, communication quality requirement information between the computing nodes to be deployed, communication quality requirement information between the computing nodes to be deployed and already-deployed computing nodes, and requirement information on the total cross-data-center communication traffic of the computing nodes to be deployed.

34. The processing node according to any one of claims 31 to 33, wherein the description information comprises identifier information of the computing nodes to be deployed, information on the number of the computing nodes to be deployed, or identifier information of the tenant to which the computing nodes to be deployed belong.

35. The processing node according to any one of claims 31 to 34, wherein the sending module is further configured to send the deployment suggestion request message to the controller through a proxy; and

the receiving module is further configured to receive, through the proxy, the deployment suggestion response message containing the deployment scheme sent by the controller.

36. The processing node according to any one of claims 31 to 35, wherein the computing nodes to be deployed comprise newly added computing nodes or already-deployed computing nodes.
37. A processing node, comprising:

a sending module, configured to send a deployment information request message to a controller, the deployment information request message carrying description information of computing nodes to be deployed, so that the controller obtains link information and/or traffic information, wherein the link information comprises link information between the data centers managed by the processing node and/or link information between the data centers managed by the processing node and data centers not managed by the processing node, and the traffic information is traffic information between the computing nodes to be deployed and computing nodes related to the computing nodes to be deployed, the computing nodes related to the computing nodes to be deployed being computing nodes that have a communication requirement with the computing nodes to be deployed;

a receiving module, configured to receive a deployment information response message containing the link information and/or traffic information sent by the controller; and

a determining module, configured to determine a deployment scheme according to the link information and/or traffic information.

38. The processing node according to claim 37, wherein the description information comprises identifier information of the computing nodes to be deployed, information on the number of the computing nodes to be deployed, or identifier information of the tenant to which the computing nodes to be deployed belong.

39. The processing node according to claim 37 or 38, wherein the sending module is further configured to send the deployment information request message to the controller through a proxy; and

the receiving module is further configured to receive, through the proxy, the deployment information response message containing the link information and/or traffic information sent by the controller.

40. The processing node according to any one of claims 37 to 39, wherein the computing nodes to be deployed comprise newly added computing nodes or already-deployed computing nodes.

41. A service system, comprising the controller according to any one of claims 21 to 26 and the processing node according to any one of claims 31 to 36.

42. A service system, comprising the controller according to any one of claims 27 to 30 and the processing node according to any one of claims 37 to 40.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310174927.2 | 2013-05-13 | ||
CN201310174927.2A CN104158675B (zh) | 2013-05-13 | 2013-05-13 | 计算节点部署方法、处理节点、控制器及系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014183574A1 true WO2014183574A1 (zh) | 2014-11-20 |
Family
ID=51884088
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/076828 WO2014183574A1 (zh) | 2013-05-13 | 2014-05-06 | 计算节点部署方法、处理节点、控制器及系统 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN104158675B (zh) |
WO (1) | WO2014183574A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10333851B2 (en) * | 2016-10-18 | 2019-06-25 | Huawei Technologies Co., Ltd. | Systems and methods for customizing layer-2 protocol |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105677454B (zh) * | 2014-11-20 | 2019-08-27 | 华为技术有限公司 | 计算资源的整合方法、装置和系统 |
CN105656662B (zh) * | 2014-12-08 | 2019-02-12 | 华为技术有限公司 | 一种故障定位方法及装置 |
CN110474960B (zh) * | 2014-12-23 | 2021-07-09 | 华为技术有限公司 | 一种虚拟化网络中业务部署的方法和装置 |
CN110275756B (zh) * | 2018-03-13 | 2023-04-18 | 华为技术有限公司 | 虚拟化网元的部署方法以及装置 |
CN109889370B (zh) * | 2019-01-10 | 2021-12-21 | 中国移动通信集团海南有限公司 | 一种网络设备位置确定方法、装置及计算机可读存储介质 |
CN113344152A (zh) * | 2021-04-30 | 2021-09-03 | 华中农业大学 | 一种奶品全链条生产信息智能检测上传系统及方法 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102082692A (zh) * | 2011-01-24 | 2011-06-01 | 华为技术有限公司 | 基于网络数据流向的虚拟机迁移方法、设备和集群系统 |
CN102112981A (zh) * | 2008-07-31 | 2011-06-29 | 思科技术公司 | 通信网络中的虚拟机的动态分布 |
CN103023799A (zh) * | 2011-09-27 | 2013-04-03 | 日电(中国)有限公司 | 用于虚拟机迁移的中央控制器和虚拟机迁移方法 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8145760B2 (en) * | 2006-07-24 | 2012-03-27 | Northwestern University | Methods and systems for automatic inference and adaptation of virtualized computing environments |
- 2013-05-13: CN CN201310174927.2A patent/CN104158675B/zh active Active
- 2014-05-06: WO PCT/CN2014/076828 patent/WO2014183574A1/zh active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102112981A (zh) * | 2008-07-31 | 2011-06-29 | 思科技术公司 | 通信网络中的虚拟机的动态分布 |
CN102082692A (zh) * | 2011-01-24 | 2011-06-01 | 华为技术有限公司 | 基于网络数据流向的虚拟机迁移方法、设备和集群系统 |
CN103023799A (zh) * | 2011-09-27 | 2013-04-03 | 日电(中国)有限公司 | 用于虚拟机迁移的中央控制器和虚拟机迁移方法 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10333851B2 (en) * | 2016-10-18 | 2019-06-25 | Huawei Technologies Co., Ltd. | Systems and methods for customizing layer-2 protocol |
Also Published As
Publication number | Publication date |
---|---|
CN104158675A (zh) | 2014-11-19 |
CN104158675B (zh) | 2018-09-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014183574A1 (zh) | 计算节点部署方法、处理节点、控制器及系统 | |
CN109618002B (zh) | 一种微服务网关优化方法、装置及存储介质 | |
US9999030B2 (en) | Resource provisioning method | |
US11736402B2 (en) | Fast data center congestion response based on QoS of VL | |
CN112187612B (zh) | 高性能、可扩展和无掉话的数据中心交换结构 | |
US9887959B2 (en) | Methods and system for allocating an IP address for an instance in a network function virtualization (NFV) system | |
US9692696B2 (en) | Managing data flows in overlay networks | |
EP2865147B1 (en) | Guarantee of predictable and quantifiable network performance | |
JP5976942B2 (ja) | ポリシーベースのデータセンタネットワーク自動化を提供するシステムおよび方法 | |
TWI538453B (zh) | 網路介面控制器、積體電路微晶片、系統及方法 | |
US9288135B2 (en) | Managing data flows in software-defined network using network interface card | |
JP6200497B2 (ja) | 仮想マシンのフローの物理的なキューへのオフロード | |
CN110602156A (zh) | 一种负载均衡调度方法及装置 | |
WO2012100544A1 (zh) | 基于网络数据流向的虚拟机迁移方法、设备和集群系统 | |
EP2256640A1 (en) | Managing traffic on virtualized lanes between a network switch and a virtual machine | |
US11729108B2 (en) | Queue management in a forwarder | |
CN109510878B (zh) | 一种长连接会话保持方法和装置 | |
EP3310011A1 (en) | Load sharing method and related apparatus | |
CN110830574B (zh) | 一种基于docker容器实现内网负载均衡的方法 | |
US10243799B2 (en) | Method, apparatus and system for virtualizing a policy and charging rules function | |
CN110798412A (zh) | 组播业务处理方法、装置、云平台、设备及可读存储介质 | |
US11316916B2 (en) | Packet processing method, related device, and computer storage medium | |
WO2023186046A1 (zh) | 一种发送报文的方法和装置 | |
JP2011203810A (ja) | サーバ、計算機システム及び仮想計算機管理方法 | |
CN107249038A (zh) | 业务数据转发方法及系统 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14797725; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 14797725; Country of ref document: EP; Kind code of ref document: A1