WO2017107215A1 - Method, apparatus and controller for deploying service function entities in a data center - Google Patents

Method, apparatus and controller for deploying service function entities in a data center (一种数据中心内服务功能体部署方法、装置及控制器)

Info

Publication number
WO2017107215A1
WO2017107215A1 (PCT/CN2015/099075, CN 2015099075 W)
Authority
WO
WIPO (PCT)
Prior art keywords
service function
service
function body
combination
data flow
Prior art date
Application number
PCT/CN2015/099075
Other languages
English (en)
French (fr)
Inventor
Peilin Hong (洪佩琳)
Hong Zhang (张泓)
Wei Zhou (周伟)
Ke Yang (杨柯)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2015/099075
Publication of WO2017107215A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40: Support for services or applications

Definitions

  • The present invention relates to the field of communications technologies, and in particular to a method, an apparatus, and a controller for deploying service function entities in a data center.
  • The service functions (SFs) deployed in a data center can be divided into two categories according to the object they serve.
  • In the first category, the data center is the entity providing a service: the SF serves external data streams, which enter the data center to use the service it provides, for example an SF that provides data caching.
  • In the second category, the data center is the entity enjoying the service: for example, a firewall that filters traffic for security detection can be placed at the entrance of the data center.
  • The service functions deployed in a data center can be relocated.
  • An SF that serves the data streams accessing the data center may run as multiple instances, and several SFs can be composed into a service function chain.
  • Different data flows correspond to different service function chains; that is, the SFs a data stream must traverse differ from flow to flow, as does the order in which they are executed.
  • Existing SF deployment methods consider only the total delay of the overall data traffic.
  • The traffic overhead they ignore cannot be neglected.
  • Suppose a service function chain requires its SFs to be executed in the order SF1, SF2, SF3, SF4,
  • but the SFs are deployed across racks such that consecutive SFs in the chain reside on different racks. To pass through all the SFs in chain order and reach SF4, the data stream must be transmitted between racks repeatedly, which imposes a heavy burden on the data center's transport network.
  • The present application discloses a method, a device, and a controller for deploying service function entities in a data center, which can reduce the traffic between service functions and avoid the redundant paths and repeated transmissions of data streams of the service function chain that would overburden the data center's transport network.
  • A first aspect of the present application discloses a method for deploying service function entities in a data center. The method obtains the service function chains, and the data-flow traffic between the service function entities in each chain, from the data streams monitored by a network data flow monitor.
  • It then generates an undirected graph from the service function chains and the traffic between the service function entities in each chain: each node of the graph is a service function entity,
  • and the weight of each edge is the degree of association between two service function entities, computed from the ratio of the data-flow traffic between them to the sum of the data-flow traffic on all service function chains. The service function entities in the undirected graph are merged into service function combinations by a maximum-association minimum-ring algorithm and a maximum-weight merging algorithm.
  • The maximum-association minimum-ring algorithm merges into one combination the service function entities contained in the target ring that maximizes the sum of the association degrees of its edges while containing the fewest service function entities;
  • the maximum-weight merging algorithm uses breadth-first search to merge service function entities whose association degree exceeds a preset threshold into one combination. Finally, each service function combination is deployed to a server cluster whose physical machine resources match those required by the combination.
  • The method reduces the traffic between service functions and avoids the redundant paths and repeated transmissions of data streams of the service function chain in the transport network that would overburden the data center's transport network.
  • The weight of an edge in the undirected graph is the degree of association between two service function entities, computed from the ratio of the data-flow traffic between them to the sum of the data-flow traffic on all service function chains. Specifically:
  • the degree of association between service function entity i and service function entity j is the probability that the previous hop or the next hop of entity i is entity j; it is a metric of how closely entity i and entity j are associated;
  • p(i, j) is the ratio of the data-flow traffic between entity i and entity j to the sum of the data-flow traffic on all service function chains; and k1, ..., km denote the service function entities lying between entity i and entity j, which weaken the association.
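The ratio p(i, j) defined above can be computed directly from the chain data. The following sketch aggregates the traffic on each adjacent pair of entities and normalizes by the total traffic; the chain contents and traffic values are illustrative, not taken from the patent:

```python
from collections import defaultdict

def pairwise_flow_ratio(chains):
    """p(i, j): ratio of the data-flow traffic between adjacent service
    function entities i and j to the total traffic on all chains."""
    total = sum(traffic for _, traffic in chains)
    flow = defaultdict(float)
    for seq, traffic in chains:
        for a, b in zip(seq, seq[1:]):               # adjacent hops on the chain
            flow[tuple(sorted((a, b)))] += traffic   # undirected pair
    return {pair: f / total for pair, f in flow.items()}

# Illustrative chains (entity names and traffic values are placeholders)
chains = [(["FW", "IDS", "Opt"], 30.0),
          (["DPI", "IDS", "Cache"], 20.0),
          (["FW", "Cache", "IDS"], 50.0)]
p = pairwise_flow_ratio(chains)
```

Each edge's ratio is taken against the total chain traffic, so the ratios of all edges need not sum to one.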
  • In one implementation, after the maximum-association minimum-ring algorithm and the maximum-weight merging algorithm
  • have merged the service function entities of the undirected graph into service function combinations,
  • the Fiduccia-Mattheyses (FM) algorithm can further be used to adjust the service function entities included in each combination.
  • The main idea of the FM algorithm is to move one node at a time from one side of a dividing line to the other.
  • This implementation can adjust the combinations obtained by the two merging algorithms toward an optimal partition, reducing both loops and the traffic between service functions.
  • When the maximum-association minimum-ring algorithm and the maximum-weight merging algorithm merge the service function entities of the undirected graph into combinations,
  • each service function combination must satisfy two constraints: a physical machine resource constraint and an affinity constraint. The physical machine resource constraint means that
  • the physical machine resources required by a combination must not exceed the maximum resources the physical machine allows;
  • the affinity constraint means that the computation-intensive and storage-intensive service function entities in a combination have different preferences for the physical machine's resources (computing resources versus storage resources), and combining them exploits this affinity to make full use of those resources.
  • The method may further determine optional deployment locations for a service function combination according to the server clusters where the data requested by the monitored data flows resides; separately calculate the cost of migrating the service function entities in the combination to each optional deployment location; and determine, as the target deployment location, the server cluster holding the data of the largest-traffic data flow among the locations whose migration cost does not exceed a preset threshold, where the physical machine resources of that server cluster satisfy those required by the combination. Correspondingly, deploying each combination to a server cluster with matching physical machine resources means deploying it to its target deployment location.
  • This implementation not only reduces the traffic between service functions by merging them into combinations, but also, by determining target deployment locations, further avoids the redundant paths and repeated transmissions of data streams of the service function chain in the transport network.
  • A second aspect discloses a device for deploying service function entities in a data center.
  • The device may include units for performing the deployment method disclosed in the first aspect of the present application.
  • Specifically, the device may include: an obtaining module, configured to obtain the service function chains, and the data-flow traffic between the service function entities in each chain, from the data streams monitored by the network data flow monitor; a generating module, configured to generate an undirected graph from the chains and the traffic, where each node of the graph is a service function entity
  • and the weight of each edge is the degree of association between two service function entities, computed from the ratio of the data-flow traffic between them to the sum of the data-flow traffic on all service function chains;
  • a merging module, configured to merge the service function entities of the undirected graph into combinations by the maximum-association minimum-ring algorithm and the maximum-weight merging algorithm,
  • where the minimum-ring algorithm merges into one combination the entities of the target ring with the largest sum of association degrees and the fewest entities, and the maximum-weight merging algorithm
  • uses breadth-first search to merge entities whose association degree exceeds a preset threshold into one combination;
  • and a deployment module, configured to deploy each service function combination
  • to a server cluster whose physical machine resources match those required by the combination.
  • A third aspect discloses a controller comprising a processor, a memory, and a communication interface. The processor is configured to obtain the service function chains, and the data-flow traffic between the service function entities in each chain, from the data streams monitored by the network data flow monitor;
  • the memory is configured to store the service function chains and the inter-entity traffic acquired by the processor;
  • the processor is further configured to generate an undirected graph from the chains and the traffic, where each node of the graph is a service function entity
  • and the weight of each edge is the degree of association between two service function entities, computed from the ratio of the data-flow traffic between them to the sum of the data-flow traffic on all service function chains;
  • the processor is further configured to merge the service function entities of the undirected graph into service function combinations by the maximum-association minimum-ring algorithm and the maximum-weight merging algorithm,
  • where the minimum-ring algorithm merges into one combination the service function entities contained in the target ring with the largest sum of association degrees and the fewest entities;
  • and the processor is further configured to deploy, through the communication interface, each service function combination to a server cluster whose physical machine resources match those required by the combination.
  • The processor of the controller may also perform any one or more of the operations of the deployment method disclosed in the first aspect.
  • In summary, the deployment method of this application obtains the service function chains and the data-flow traffic between the service function entities in each chain from the data streams monitored by the network data flow monitor; generates an undirected graph from the chains
  • and the inter-entity traffic; merges the service function entities of the graph into service function combinations by the maximum-association minimum-ring algorithm and the maximum-weight merging algorithm;
  • and deploys each combination to a server cluster whose physical machine resources match those required by the combination. This reduces the traffic between service functions and avoids the redundant paths and repeated transmissions of data streams of the service function chain in the transport network that would overburden the data center's transport network.
  • FIG. 1 is a schematic flowchart of a method for deploying service function entities in a data center according to an embodiment of the present invention;
  • FIG. 2 to FIG. 6 are schematic diagrams of using an undirected graph to merge multiple service function entities into service function combinations according to an embodiment of the present invention;
  • FIG. 7 is a schematic flowchart of another method for deploying service function entities in a data center according to an embodiment of the present invention;
  • FIG. 8 is an undirected graph for multi-instance service function entities according to an embodiment of the present invention;
  • FIG. 9 is a schematic structural diagram of a device for deploying service function entities in a data center according to an embodiment of the present invention;
  • FIG. 10 is a schematic structural diagram of a controller according to an embodiment of the present invention.
  • The embodiments of the invention disclose a method, a device, and a controller for deploying service function entities in a data center, which reduce the traffic between service functions and avoid the redundant paths and repeated transmissions of data streams of the service function chain in the transport network that would overburden the data center's transport network. The details are described below.
  • FIG. 1 is a schematic flowchart of a method for deploying service function entities in a data center according to an embodiment of the present invention. As shown in FIG. 1, the method may include the following steps:
  • S101: In the network initialization and preprocessing stage, the network controller obtains the service function chains, and the data-flow traffic between the service function entities in each chain, from the data streams monitored by the network data flow monitor. Optionally, the network controller may also obtain topology statistics of the network from the monitored data streams.
  • The topology information may include the physical machine resources of each server cluster and the bandwidth between server clusters, and the service function combinations may be deployed according to this topology information.
  • S102: Generate an undirected graph according to the service function chains and the data-flow traffic between the service function entities in each chain, where each node of the graph is a service function entity
  • and the weight of each edge is the degree of association between two service function entities, computed from the ratio of the data-flow traffic between them to the sum of the data-flow traffic on all service function chains.
  • For example, given the correspondence between service function chains and data-flow traffic shown in Table 1, the undirected graph shown in FIG. 2 can be generated.
  • The nodes of the graph are the service function entities FW, IDS, Opt, Cache, and DPI.
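The graph-construction step can be sketched with plain adjacency maps; since Table 1 is not reproduced in this text, the chains and traffic values below are placeholders in its spirit:

```python
from collections import defaultdict

def build_undirected_graph(chains):
    """Build an undirected graph whose nodes are service function entities
    and whose edge weights are the data-flow traffic between adjacent
    entities, aggregated over all service function chains."""
    graph = defaultdict(lambda: defaultdict(float))
    for seq, traffic in chains:
        for a, b in zip(seq, seq[1:]):
            graph[a][b] += traffic
            graph[b][a] += traffic
    return graph

# Illustrative chains in the spirit of Table 1 (values are placeholders)
chains = [(["FW", "IDS", "Opt"], 30.0),
          (["DPI", "IDS", "Cache"], 20.0),
          (["FW", "Cache", "IDS"], 50.0)]
g = build_undirected_graph(chains)
# Nodes: FW, IDS, Opt, Cache, DPI, matching the entities named in the text
```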
  • To let the edge weights of the undirected graph capture both the size of the traffic between service function entities and their execution order, the weights are corrected using the concept of degree of association, which is calculated by a label layering method.
  • The degree of association between service function entity i and service function entity j is the probability that the previous hop or the next hop of entity i is entity j; it is a metric of how closely entity i and entity j are associated.
  • p(i, j) is the ratio of the data-flow traffic between entity i and entity j to the sum of the data-flow traffic on all service function chains, and k1, ..., km denote the service function entities lying between entity i and entity j.
  • The undirected graph shown in FIG. 2 can be converted by formula (1) into the undirected graph shown in FIG. 3; the difference between them is that
  • the edge weights in FIG. 3 are the degrees of association between service function entities.
  • The definition of the degree of association strengthens the relationship between two directly connected service function entities with high mutual traffic. If entity i and entity j are only directly connected, their degree of association is determined solely by their mutual traffic. If other service function entities lie between entity i and entity j, those intermediate entities weaken the relationship between i and j; the degree of weakening is expressed in formula (1).
  • The larger the degree of association between two service function entities, the more they should be placed in the same server cluster, saving traffic between service functions in the network. Conversely, if the degree of association between two service function entities is 0, placing them in the same server cluster, or in directly connected server clusters, easily leads to loops or network congestion.
  • The degree of association between two service function entities is calculated hierarchically by the labeling method, which can be divided into two stages.
  • In the first stage, taking any service function entity as the starting point,
  • the position (i.e., the label) of every other service function entity relative to it in the service function chain is calculated according to the chain's requirements.
  • In the second stage, the degree of association between two service function entities is calculated hierarchically from the labeling result.
  • The level of association between two service function entities is related to the number of service function entities separating them.
  • S103: Merge the service function entities of the undirected graph into service function combinations by the maximum-association minimum-ring algorithm and the maximum-weight merging algorithm.
  • The maximum-association minimum-ring algorithm merges into one service function combination the service function entities contained in the target ring that maximizes the sum of the association degrees of its edges while containing the fewest service function entities.
  • This embodiment can merge three or more service function entities with heavy mutual traffic into one combination and place the combination in a single server cluster; compared with distributing these service functions across different clusters, this reduces the traffic between server clusters in the data center.
  • To find such a ring, the edge weights of the undirected graph are first modified by taking the reciprocal of each degree of association, which
  • converts the undirected graph of FIG. 3 into that of FIG. 4. The problem of finding the ring with the largest sum of edge weights and the fewest nodes is thereby converted into finding the minimum loop in the undirected graph of FIG. 4.
  • During the shortest-path computation, p(i, j) records the path from node i to node j as the previous-hop node of j: if p(i, j) = h, the shortest path from i to j is i -> ... -> h -> j, that is, h is the last node before j on that path.
  • When dis(i, j) > dis(i, k) + dis(k, j), the shortest path from i to j is updated to i -> ... -> k -> ... -> j.
  • Since dis(k, j), and hence the path k -> ... -> j, is already known, the previous-hop node of j on that path (i.e., p(k, j)) is also known. Because the path from i to j now ends with k -> ... -> j, the previous hop of j is exactly p(k, j).
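The predecessor bookkeeping described above is the standard Floyd-style all-pairs update. A minimal sketch follows (the minimum-ring search built on top of it is not shown; the example weights are illustrative):

```python
INF = float("inf")

def floyd_with_predecessor(weights):
    """All-pairs shortest paths over an edge-weight dict.
    dis[i][j] is the shortest distance from i to j; p[i][j] is the
    previous-hop node of j on that path.  Mirrors the rule in the text:
    when dis[i][j] > dis[i][k] + dis[k][j], the new previous hop of j
    is exactly p[k][j]."""
    nodes = list(weights)
    dis = {i: {j: (0 if i == j else weights[i].get(j, INF)) for j in nodes}
           for i in nodes}
    p = {i: {j: (i if j in weights[i] else None) for j in nodes}
         for i in nodes}
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dis[i][j] > dis[i][k] + dis[k][j]:
                    dis[i][j] = dis[i][k] + dis[k][j]
                    p[i][j] = p[k][j]  # previous hop now comes from the k -> j path
    return dis, p

def path(p, i, j):
    """Reconstruct i -> ... -> j by walking previous hops back from j."""
    out = [j]
    while out[-1] != i:
        out.append(p[i][out[-1]])
    return out[::-1]

# Illustrative symmetric weights (e.g. reciprocal association degrees)
w = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 1}, "C": {"A": 5, "B": 1}}
dis, p = floyd_with_predecessor(w)
```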
  • The service function entities contained in the maximum-association minimum ring determined from the undirected graph can be merged into one combination only if they also satisfy the physical machine resource constraint and the affinity constraint. The physical machine resource constraint means that
  • the physical machine resources required by the combination must not exceed the maximum resources the physical machine allows;
  • the affinity constraint means that the computation-intensive and storage-intensive service function entities in a combination have different preferences for the physical machine's computing and storage resources, so combining them makes full use of those resources. If merging the entities of the determined ring would violate either constraint,
  • the search continues for other minimum loops in the undirected graph; if no minimum loop satisfying the constraints remains, the minimum-ring merging stops.
  • The maximum-weight merging algorithm merges service function entities whose degree of association exceeds a preset threshold into one combination by breadth-first search. It can be summarized as follows: select a vertex v0 (generally an edge node of the undirected graph) and add it to a merged region; starting from v0, traverse the undirected graph by breadth-first search, adding each traversed vertex to the current merged region while the two constraints above remain satisfied, so that the sum of edge weights within the merged region is maximized. When either constraint would be violated, or no unmerged service function node remains, the merging ends.
  • The preset threshold may be set according to the maximum bandwidth in the network topology information.
  • In essence, the maximum-weight merging algorithm places service function entities with heavy mutual traffic (strong association, e.g. above the preset threshold) into one service function combination, without exceeding the physical machine resource limit and while satisfying the affinity constraint.
  • The vertex selected as the starting point strongly affects the complexity of the algorithm, so an edge node of the undirected graph is generally chosen. To find a node close to the edge, one can randomly select a vertex, traverse and label the nodes of the graph by breadth-first search, and take the node with the largest label as a node close to the edge.
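A minimal sketch of the breadth-first merging, assuming unit resource costs and omitting the affinity constraint for brevity (the helper names and example association degrees are ours, not the patent's):

```python
from collections import deque

def max_weight_merge(graph, threshold, capacity, cost):
    """Breadth-first merging sketch: starting from an edge node of the
    undirected graph, pull a neighbour into the current combination when
    its association degree exceeds `threshold` and the combination still
    fits within the physical-machine `capacity`."""
    def farthest_from(v):
        # Breadth-first labelling: the last node visited is a node
        # close to the edge of the graph, as the text suggests.
        seen, queue, last = {v}, deque([v]), v
        while queue:
            last = queue.popleft()
            for n in graph[last]:
                if n not in seen:
                    seen.add(n)
                    queue.append(n)
        return last

    start = farthest_from(next(iter(graph)))   # edge node as starting point
    combo, used = {start}, cost[start]
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for n, w in graph[v].items():
            if n not in combo and w > threshold and used + cost[n] <= capacity:
                combo.add(n)
                used += cost[n]
                queue.append(n)
    return combo

# Association-degree graph (illustrative values)
g = {"FW": {"Cache": 0.7, "IDS": 0.3},
     "Cache": {"FW": 0.7, "IDS": 0.2},
     "IDS": {"FW": 0.3, "Cache": 0.2, "Opt": 0.6},
     "Opt": {"IDS": 0.6}}
combo = max_weight_merge(g, threshold=0.25, capacity=3, cost={n: 1 for n in g})
```

The merge stops here because adding a fourth entity would exceed the capacity of 3, matching the stopping condition in the text.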
  • After merging, an undirected graph like the one shown in FIG. 5 is obtained.
  • In this example, the service function entities FW and Cache are merged into one service function combination, and the service function entities IDS, Opt, and DPI are merged into another.
  • Optionally, the service function entities included in each combination may further be adjusted using the Fiduccia-Mattheyses (FM) algorithm.
  • As shown in FIG. 6, the service function entities on the two sides of the dividing line
  • are adjusted: a service function entity connected to an edge crossed by the dividing line is tentatively moved to the neighboring combination, and it is checked whether the adjustment reduces loops in the service function chains or reduces the traffic between the combinations.
  • If so, the entity is moved to the neighboring combination; otherwise, the partition obtained in step S103 is kept as the final result.
  • The main idea of the FM algorithm is to move one node at a time from one side of the dividing line to the other. If the move improves the balance of the undirected graph, the partition after the move is retained; if it does not improve the balance or removes no loop, the partition before the move is retained.
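One FM-style pass as just described might be sketched as follows; this simplified version moves nodes in one direction only and uses the cut traffic between the two combinations as the improvement criterion (the graph and capacity are illustrative):

```python
def cut_weight(graph, part_a, part_b):
    """Total edge weight crossing the dividing line between two
    service-function combinations."""
    return sum(w for u in part_a for v, w in graph[u].items() if v in part_b)

def fm_single_pass(graph, part_a, part_b, capacity):
    """One FM-style pass (sketch): try moving each boundary node to the
    other side; keep the move only if it lowers the cut, i.e. the
    traffic between the two combinations."""
    improved = True
    while improved:
        improved = False
        for node in sorted(part_a):
            if len(part_a) <= 1 or len(part_b) + 1 > capacity:
                continue
            before = cut_weight(graph, part_a, part_b)
            part_a.discard(node); part_b.add(node)        # tentative move
            if cut_weight(graph, part_a, part_b) < before:
                improved = True                            # keep better partition
            else:
                part_b.discard(node); part_a.add(node)     # revert
    return part_a, part_b

# Illustrative graph: A-B and C-D carry heavy traffic, B-C light traffic
g2 = {"A": {"B": 5.0}, "B": {"A": 5.0, "C": 1.0},
      "C": {"B": 1.0, "D": 5.0}, "D": {"C": 5.0}}
part_a, part_b = fm_single_pass(g2, {"A", "C"}, {"B", "D"}, capacity=3)
```

Moving A next to B co-locates the heavy A-B edge, lowering the cut from 11 to 6.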
  • S104: Deploy each service function combination to a server cluster whose physical machine resources match those required by the combination. The physical machine resources may specifically be the computing and storage resources of the server cluster; deploying the combinations that satisfy the physical machine resource and affinity constraints to the corresponding server clusters reduces the traffic between service functions and avoids overburdening the data center's transport network.
  • When a service function has multiple instances, the data flows need to be allocated among the instances according to the destination addresses of the data streams and the load of each instance. For example, among the service function chains shown in Table 1,
  • the chains containing the service function IDS are FW->IDS->Opt, DPI->IDS->Cache, and FW->Cache->IDS.
  • If the IDS data streams are allocated to instances IDS1 and IDS2, the traffic may be carried on FW->IDS1->Opt, DPI->IDS2->Cache, and FW->Cache->IDS2, as shown in Table 2.
  • Each instance of a service function is then treated as an individual node to generate an undirected graph, as shown in FIG. 8.
  • FIG. 8 is an undirected graph constructed for multi-instance service functions. Finally, using this multi-instance undirected graph,
  • the service function instances it contains are merged into service function combinations by the maximum-association minimum-ring algorithm and the maximum-weight merging algorithm, and each combination is deployed.
  • In this way, service function entities with heavy mutual data-flow traffic are merged into one combination and placed in one server cluster, reducing the traffic between service functions and the burden on the data center's transport network.
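The instance-allocation step above could follow many policies; the sketch below uses an illustrative least-accumulated-load rule (the patent also mentions the flow's destination address, which is not modelled here; flow and instance names are placeholders):

```python
def assign_flows_to_instances(flows, instances):
    """Assign each data flow of a multi-instance service function to the
    instance with the least accumulated load (illustrative policy)."""
    load = {inst: 0.0 for inst in instances}
    assignment = {}
    # Place the heaviest flows first so loads stay balanced
    for flow_id, traffic in sorted(flows.items(), key=lambda kv: -kv[1]):
        target = min(load, key=load.get)
        assignment[flow_id] = target
        load[target] += traffic
    return assignment, load

flows = {"FW->IDS->Opt": 30.0, "DPI->IDS->Cache": 20.0, "FW->Cache->IDS": 50.0}
assignment, load = assign_flows_to_instances(flows, ["IDS1", "IDS2"])
```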
  • The deployment method shown in FIG. 1 obtains the service function chains and the data-flow traffic between the service function entities in each chain from the data streams monitored by the network data flow monitor; generates an undirected graph from the chains
  • and the inter-entity traffic; merges the service function entities of the graph into service function combinations by the maximum-association minimum-ring algorithm and the maximum-weight merging algorithm;
  • and deploys each combination to a server cluster whose physical machine resources match those required by the combination. This reduces the traffic between service functions and avoids the redundant paths and repeated transmissions of data streams of the service function chain in the transport network that would overburden the data center's transport network.
  • FIG. 7 is a schematic flowchart of another method for deploying service function entities in a data center according to an embodiment of the present invention. Compared with the method of FIG. 1, the method of FIG. 7 may further perform the following steps after step S103 and before step S104:
  • S105: Determine the optional deployment locations of each service function combination according to the server clusters where the data requested by the data streams resides.
  • S107: Determine, as the target deployment location of the service function combination, the server cluster holding the data of the largest-traffic data flow among the optional deployment locations whose migration cost does not exceed a preset threshold, where the physical machine resources of that server cluster satisfy those required by the combination.
  • Correspondingly, in step S104, deploying each service function combination to a server cluster whose physical machine resources match those required by the combination may include: deploying each combination to its target deployment location.
  • The deployment location of each service function body combination can be obtained from two considerations. On one hand, the combination is deployed, as far as possible, to the server cluster where the data requested by the data flows is located, which reduces the traffic between server clusters, reduces the encroachment of data flows on the network, shortens the service function path, and also avoids loops. On the other hand, the deployment locations of combinations with large data volumes are determined preferentially, so that such combinations are deployed to the best locations, which to a certain extent further reduces the encroachment of data flows on the network.
  • Because the determination of the optional deployment locations in step S105 mainly considers the server clusters toward which the data flows are biased, and the data flows of different service function bodies in the same combination may be biased toward different server clusters, a service function body combination corresponds to multiple optional server-cluster deployment locations. To minimize the pressure on the data transmission network, the combination can be deployed in the server cluster toward which the data flow with the largest traffic tends.
  • The deployment location determined in step S106 also takes into account the cost of service function body migration, that is, the cost of migrating the service function bodies in the combination from their original locations to the deployment location. Therefore, the target deployment location is determined not only by deploying the combination in the server cluster with the largest data flow tendency, but also by the total cost of migrating the service function bodies in the combination from their original locations to that location. In other words, the optimization goal is to minimize the routing cost while also minimizing the migration cost, where minimizing the migration cost can be treated as a constraint: any deployment location whose migration cost does not exceed the preset threshold can serve as the target deployment location.
  • Here, a_s = 1 indicates that the location of service function body s has changed, and t_sv represents the cost of transferring service function body s to node v. The value of t_sv is subject to several factors, including the cost of the occupied storage space, the cost of the occupied computing resources, the cost of the disassembly and installation process, and the additional cost imposed by the service function body on forwarding node v and its surrounding links. If the value tends to infinity, transferring this service function body to node v is inappropriate.
  • Note that the target deployment location of a service function body combination must also satisfy the two constraints used when merging service function body combinations, namely the physical machine resource constraint and the affinity constraint; in addition, after the combinations are deployed at their target deployment locations, the data flows in the data center must still pass through all service function bodies in order.
  • The deployment method of the service function body in the data center shown in FIG. 7 can reduce the traffic between service function bodies through service function body combination and reduce the pressure on the internal transmission network of the data center. Further, the method shown in FIG. 7 also accounts for the migration cost during deployment: it determines the optional deployment locations of the combination according to the server clusters where the data requested by the data flows is located, separately calculates the cost of migrating the service function bodies in the combination to each optional location, and determines, as the target deployment location, the server cluster where the data requested by the data flow with the largest traffic is located among the locations whose cost does not exceed the preset threshold, thereby shortening the service forwarding path through the service function bodies as much as possible and avoiding loops.
  • FIG. 9 is a schematic structural diagram of a device for deploying a service function in a data center according to an embodiment of the present invention.
  • the deployment device may include:
  • the obtaining module 210 is configured to obtain, from the data flows monitored by the network data flow monitor, the service function chains and the data flow traffic between the service function bodies in the service function chains;
  • the generating module 220 is configured to generate an undirected graph according to the service function chain and the data flow traffic between the service function bodies in the service function chain, where the nodes of the undirected graph are the service function bodies, and the weight of an edge in the undirected graph is the association degree between the service function bodies, calculated according to the ratio of the data flow traffic between them to the sum of the data flow traffic on all service function chains;
  • the merging module 230 is configured to merge the service function bodies included in the undirected graph into service function body combinations by using the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm, where the maximum association degree minimum number ring algorithm merges into one combination the service function bodies included in the target ring whose sum of edge association degrees is the largest and whose number of service function bodies is the smallest, and the maximum weight merging algorithm uses a breadth-first search algorithm to merge into one combination the service function bodies of the undirected graph whose association degree is greater than a preset threshold;
  • the deployment module 240 is configured to deploy each service function body combination to a server cluster whose physical machine resources match the resources required by the combination.
  • The weight of an edge in the undirected graph is the association degree between the service function bodies, calculated according to the ratio of the data flow traffic between them to the sum of the data flow traffic on all the service function chains; the association degree can be calculated by the following formula:
Figure PCTCN2015099075-appb-000006
  • where R(i, j): {i→j || j→i} represents the association degree between service function body i and service function body j, that is, the probability that the previous hop or the next hop of service function body i is service function body j, a metric of the association between the two; and p(i, j) is the ratio of the data flow traffic between service function body i and service function body j to the sum of the data flow traffic on all service function chains.
  • the apparatus shown in FIG. 9 may further include:
  • the adjustment module 250 is configured to adjust, by using the Fiduccia-Mattheyses (FM) algorithm, the service function bodies included in each service function body combination after the merging module 230 merges the service function bodies included in the undirected graph into the combinations by using the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm.
  • The constraints that must be satisfied when the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm merge the service function bodies included in the undirected graph into the service function body combinations are the physical machine resource constraint and the affinity constraint, where the physical machine resource constraint means that the physical machine resources required by each combination do not exceed the maximum resources a physical machine is allowed to carry, and the affinity constraint refers to the different preferences of the computation-intensive and storage-intensive service function bodies in each combination for the computing resources and storage resources included in the physical machine resources, so as to fully exploit the affinity of the physical machine resources.
  • the apparatus shown in FIG. 9 may further include:
  • a first determining module 260 configured to determine, according to a server cluster where data requested by the data flow monitored by the network data flow monitor is located, an optional deployment location of the service function combination;
  • the calculating module 270 is configured to separately calculate an overhead of migrating the service function in the service function body combination to each optional deployment location;
  • a second determining module 280 configured to determine, as a target deployment location of the service function combination, a server cluster in which data of the data flow request having the largest data flow rate in the deployment location where the overhead does not exceed the preset threshold is located, where the target deployment location is The physical machine resources owned by the server cluster meet the physical machine resources required for the service function body combination;
  • Correspondingly, the deployment module 240 deploys the service function body combinations to server clusters whose physical machine resources match the resources required by the combinations, specifically by deploying each combination to its target deployment location.
  • The obtaining module 210 may perform the operation of step S101 in the deployment method shown in FIG. 1 and the corresponding implementations; the generating module 220 may perform the operation of step S102 and the corresponding implementations; the merging module 230 may perform the operation of step S103 and the corresponding implementations; the deployment module 240 may perform the operation of step S104 and the corresponding implementations; and the first determining module 260, the calculating module 270, and the second determining module 280 may perform the operations of steps S105 through S107 in FIG. 7 and the corresponding implementations to determine the target deployment location.
  • The units in the apparatus of the embodiment of the present invention may be combined, divided, or deleted according to actual needs; this is not limited in the embodiments of the present invention.
  • FIG. 10 is a schematic structural diagram of a controller according to an embodiment of the present invention.
  • The controller may include a memory 310, a communication interface 320, and a processor 330, where the communication interface 320 may be a wired communication interface, a wireless communication interface, or a combination thereof. A wired communication interface may be, for example, an Ethernet interface, which can be an optical interface, an electrical interface, or a combination thereof; a wireless communication interface can be a WLAN interface, a cellular network communication interface, or a combination thereof.
  • The memory 310 can include volatile memory, such as random-access memory (RAM); the memory can also include non-volatile memory, such as flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 310 may also include a combination of the above types of memory.
  • the processor 330 may be a central processing unit (English: central processing unit, abbreviated: CPU), a network processor (English: network processor, abbreviation: NP) or a combination of a CPU and an NP.
  • the processor 330 may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (abbreviated as PLD), or a combination thereof.
  • the above PLD can be a complex programmable logic device (English: complex programmable logic device, abbreviation: CPLD), field-programmable gate array (English: field-programmable gate array, abbreviation: FPGA), general array logic (English: generic array Logic, abbreviation: GAL) or any combination thereof.
  • The memory 310 can be used to store the program code corresponding to the deployment of service function bodies in the data center, and the processor 330 can invoke the program instructions stored in the memory 310 and, through the communication interface 320, obtain from the data flows monitored by the network data flow monitor the service function chains and the data flow traffic between the service function bodies in the service function chains; the memory 310 is further configured to store the service function chains acquired by the processor 330 and the data flow traffic between the service function bodies in the service function chains;
  • the processor 330 is further configured to generate an undirected graph according to the service function chain and the data flow traffic between the service function bodies in the service function chain, where the nodes of the undirected graph are the service function bodies, and the weight of an edge in the undirected graph is the association degree between the service function bodies, calculated according to the ratio of the data flow traffic between them to the sum of the data flow traffic on all service function chains;
  • the processor 330 is further configured to merge the service function bodies included in the undirected graph into service function body combinations by using the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm, where the maximum association degree minimum number ring algorithm merges into one combination the service function bodies included in the target ring whose sum of edge association degrees is the largest and whose number of service function bodies is the smallest, and the maximum weight merging algorithm uses a breadth-first search algorithm to merge into one combination the service function bodies of the undirected graph whose association degree is greater than a preset threshold;
  • the processor 330 is further configured to deploy, through the communication interface 320, each service function body combination to a server cluster whose physical machine resources match the resources required by the combination.
  • The weight of an edge in the undirected graph is the association degree between the service function bodies, calculated according to the ratio of the data flow traffic between them to the sum of the data flow traffic on all the service function chains, where R(i, j): {i→j || j→i} represents the association degree between service function body i and service function body j, that is, the probability that the previous hop or the next hop of service function body i is service function body j, a metric of the association between the two; and p(i, j) is the ratio of the data flow traffic between service function body i and service function body j to the sum of the data flow traffic on all service function chains.
  • The processor 330, after merging the service function bodies included in the undirected graph into the service function body combinations by using the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm, is further configured to adjust the service function bodies included in each combination by using the Fiduccia-Mattheyses (FM) algorithm.
  • The constraints that must be satisfied when the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm merge the service function bodies included in the undirected graph into the service function body combinations are the physical machine resource constraint and the affinity constraint, where the physical machine resource constraint means that the physical machine resources required by each combination do not exceed the maximum resources a physical machine is allowed to carry, and the affinity constraint refers to the different preferences of the computation-intensive and storage-intensive service function bodies in each combination for the computing resources and storage resources included in the physical machine resources, so as to fully exploit the affinity of the physical machine resources.
  • The processor 330 is further configured to determine, according to the server clusters where the data requested by the data flows monitored by the network data flow monitor is located, the optional deployment locations of the service function body combination; separately calculate the cost of migrating the service function bodies in the combination to each optional deployment location; and determine, as the target deployment location of the combination, the server cluster where the data requested by the data flow with the largest traffic is located among the deployment locations whose cost does not exceed a preset threshold, where the physical machine resources owned by the server cluster at the target deployment location satisfy the physical machine resources required by the combination;
  • Correspondingly, the processor 330 deploys the service function body combinations to server clusters whose physical machine resources match the resources required by the combinations, specifically by deploying each combination to its target deployment location.
  • the processor 330 invokes program instructions stored in the memory 310 to perform one or more steps in the embodiment of the invention shown in FIG. 1 or FIG. 7, or an alternative embodiment thereof.
  • An embodiment of the present invention further discloses a computer storage medium storing a computer program; when the computer program in the computer storage medium is read into a computer, it can cause the computer to complete all the steps of the deployment method of the service function body in the data center disclosed in the embodiments of the present invention.
  • The program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

The embodiments of the present invention disclose a method, an apparatus, and a controller for deploying service function bodies in a data center. The method obtains, from the data flows monitored by a network data flow monitor, the service function chains and the data flow traffic between the service function bodies in the service function chains; generates an undirected graph according to the service function chains and the data flow traffic between the service function bodies in the chains; merges the service function bodies included in the undirected graph into service function body combinations by using a maximum association degree minimum number ring algorithm and a maximum weight merging algorithm; and deploys each service function body combination to a server cluster whose physical machine resources match the resources required by the combination, thereby reducing the traffic between service function bodies and avoiding redundant paths and the repeated transmission of the data flows of the service function chains in the transmission network, which would impose an excessive burden on the data center transmission network.

Description

Method, Apparatus, and Controller for Deploying Service Function Bodies in a Data Center — Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method, an apparatus, and a controller for deploying service function bodies in a data center.
Background Art
At present, the service functions (Service Function, SF) deployed in a data center can be divided into two categories according to their service objects. In the first category, the data center itself is the entity providing the service: the service function body provides services for other data flows, and the data flows enter the data center in order to enjoy the services it provides, for example, a service function body providing a data cache. In the second category, the data center is the entity enjoying the service: for example, when data flows enter the data center to enjoy the cache service provided by a service function body, the data center may place service function bodies used for security detection, such as firewall filtering, at its entrance to prevent malicious data flows from entering. The service function bodies deployed in the data center may be position-adjustable; a service function body that serves the data flows entering the data center may have multiple instances, multiple service function bodies may constitute a service function chain, and different data flows correspond to different service function chains, that is, the service function bodies that the data flows need to traverse, and the order in which they are executed, differ.
Regarding how to deploy, in a data center, the service function bodies that serve the data flows entering the data center, existing deployment methods only consider the total delay of the overall data traffic. In practice, the traffic overhead between service function bodies cannot be ignored. For example, suppose a service function chain requires the service function bodies to be executed in the order SF1, SF2, SF3, SF4, while arbitrarily deployed service function bodies are placed across racks such that SF1 connects to SF3, SF3 to SF2, and SF2 to SF4. In order to pass through all the SFs of the chain in order and reach the designated SF4, the data flow must be transmitted between racks, and repeatedly so, which places a greater burden on the transmission network of the data center.
Summary of the Invention
The present application discloses a method, an apparatus, and a controller for deploying service function bodies in a data center, which can reduce the traffic between service function bodies and avoid redundant paths and the repeated transmission of the data flows of the service function chains in the transmission network, which would impose an excessive burden on the data center transmission network.
A first aspect of the present application discloses a method for deploying service function bodies in a data center. The method may obtain, from the data flows monitored by a network data flow monitor, the service function chains and the data flow traffic between the service function bodies in the service function chains; generate an undirected graph according to the service function chains and the data flow traffic between the service function bodies in the chains, where the nodes of the undirected graph are the service function bodies, and the weight of an edge in the undirected graph is the association degree between the service function bodies, calculated according to the ratio of the data flow traffic between them to the sum of the data flow traffic on all service function chains; merge the service function bodies included in the undirected graph into service function body combinations by using a maximum association degree minimum number ring algorithm and a maximum weight merging algorithm, where the maximum association degree minimum number ring algorithm merges into one combination the service function bodies included in the target ring whose sum of edge association degrees is the largest and whose number of service function bodies is the smallest, and the maximum weight merging algorithm uses a breadth-first search algorithm to merge into one combination the service function bodies of the undirected graph whose association degree is greater than a preset threshold; and deploy each service function body combination to a server cluster whose physical machine resources match the resources required by the combination. The method can reduce the traffic between service function bodies and avoid redundant paths and the repeated transmission of the data flows of the service function chains in the transmission network, which would impose an excessive burden on the data center transmission network.
According to the first aspect, in a first implementation of the first aspect, the weight of an edge in the undirected graph is the association degree between the service function bodies, calculated according to the ratio of the data flow traffic between the service function bodies to the sum of the data flow traffic on all the service function chains, specifically:
Figure PCTCN2015099075-appb-000001
where R(i, j): {i→j || j→i} represents the association degree between service function body i and service function body j; the association degree is the probability that the previous hop or the next hop of service function body i is service function body j, and is a metric of the association between service function body i and service function body j; p(i, j) is the ratio of the data flow traffic between service function body i and service function body j to the sum of the data flow traffic on all service function chains; and k1, ..., km denote the service function bodies that exist between service function body i and service function body j.
According to the first aspect or its first implementation, in a second implementation of the first aspect, after merging the service function bodies included in the undirected graph into the service function body combinations by using the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm, the deployment method may further adjust the service function bodies included in each combination by using the Fiduccia-Mattheyses (FM) algorithm. The main idea of the FM algorithm is to move, each time, a node on one side of the partition line to the other side: if the exchange improves the load balance in the undirected graph, the partition result after the exchange is kept; if the exchange does not improve the load balance or the loops in the undirected graph, the partition result before the exchange is kept. This implementation can adjust the combinations obtained by the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm to reach an optimal combination and reduce loops and the traffic between service function bodies.
According to the first aspect, or its first or second implementation, in a third implementation of the first aspect, the constraints that must be satisfied when merging the service function bodies included in the undirected graph into the combinations by using the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm are the physical machine resource constraint and the affinity constraint, where the physical machine resource constraint means that the physical machine resources required by each combination do not exceed the maximum resources a physical machine is allowed to carry, and the affinity constraint refers to the different preferences of the computation-intensive and storage-intensive service function bodies in each combination for the computing resources and storage resources included in the physical machine resources, so as to fully exploit the affinity of the physical machine resources.
According to the first aspect or any of its first to third implementations, in a fourth implementation of the first aspect, the deployment method may further determine the optional deployment locations of a service function body combination according to the server clusters where the data requested by the data flows monitored by the network data flow monitor is located; separately calculate the cost of migrating the service function bodies in the combination to each optional deployment location; and determine, as the target deployment location of the combination, the server cluster where the data requested by the data flow with the largest traffic is located among the deployment locations whose cost does not exceed a preset threshold, where the physical machine resources owned by the server cluster at the target deployment location satisfy the physical machine resources required by the combination. Correspondingly, deploying the combinations to server clusters whose physical machine resources match the resources required by the combinations may be deploying each combination to its target deployment location. This implementation not only reduces the traffic between service function bodies through service function body combination, avoiding redundant paths and the repeated transmission of the data flows of the service function chains in the transmission network, but also, by determining the target deployment location, further avoids redundant paths and the repeated transmission of the data flows of the service function chains in the transmission network.
A second aspect further discloses an apparatus for deploying service function bodies in a data center, which may include units for performing the deployment method disclosed in the first aspect of the present application. Optionally, in an implementation of the second aspect, the apparatus may include: an obtaining module, configured to obtain, from the data flows monitored by a network data flow monitor, the service function chains and the data flow traffic between the service function bodies in the service function chains; a generating module, configured to generate an undirected graph according to the service function chains and the data flow traffic between the service function bodies in the chains, where the nodes of the undirected graph are the service function bodies, and the weight of an edge is the association degree between the service function bodies, calculated according to the ratio of the data flow traffic between them to the sum of the data flow traffic on all service function chains; a merging module, configured to merge the service function bodies included in the undirected graph into service function body combinations by using the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm, where the maximum association degree minimum number ring algorithm merges into one combination the service function bodies included in the target ring whose sum of edge association degrees is the largest and whose number of service function bodies is the smallest, and the maximum weight merging algorithm uses a breadth-first search algorithm to merge into one combination the service function bodies whose association degree is greater than a preset threshold; and a deployment module, configured to deploy each combination to a server cluster whose physical machine resources match the resources required by the combination.
A third aspect further discloses a controller, including a processor, a memory, and a communication interface. The processor is configured to obtain, from the data flows monitored by a network data flow monitor, the service function chains and the data flow traffic between the service function bodies in the service function chains; the memory is configured to store the service function chains obtained by the processor and the data flow traffic between the service function bodies in the service function chains; the processor is further configured to generate an undirected graph according to the service function chains and the data flow traffic between the service function bodies in the chains, where the nodes of the undirected graph are the service function bodies, and the weight of an edge is the association degree between the service function bodies, calculated according to the ratio of the data flow traffic between them to the sum of the data flow traffic on all service function chains; the processor is further configured to merge the service function bodies included in the undirected graph into service function body combinations by using the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm, where the maximum association degree minimum number ring algorithm merges into one combination the service function bodies included in the target ring whose sum of edge association degrees is the largest and whose number of service function bodies is the smallest, and the maximum weight merging algorithm uses a breadth-first search algorithm to merge into one combination the service function bodies whose association degree is greater than a preset threshold; and the processor is further configured to deploy, through the communication interface, each combination to a server cluster whose physical machine resources match the resources required by the combination. The processor of the controller may further perform the operations of any one or more implementations of the deployment method disclosed in the first aspect.
The deployment method of the present application can obtain, from the data flows monitored by a network data flow monitor, the service function chains and the data flow traffic between the service function bodies in the service function chains; generate an undirected graph according to the service function chains and the data flow traffic between the service function bodies in the chains; merge the service function bodies included in the undirected graph into service function body combinations by using the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm; and deploy each combination to a server cluster whose physical machine resources match the resources required by the combination, thereby reducing the traffic between service function bodies and avoiding redundant paths and the repeated transmission of the data flows of the service function chains in the transmission network, which would impose an excessive burden on the data center transmission network.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings required in the embodiments. Apparently, the accompanying drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 shows a method for deploying service function bodies in a data center disclosed in an embodiment of the present invention;
FIG. 2 to FIG. 6 are schematic diagrams, disclosed in embodiments of the present invention, of merging multiple service function bodies into service function body combinations by using an undirected graph;
FIG. 7 is a schematic flowchart of another method for deploying service function bodies in a data center disclosed in an embodiment of the present invention;
FIG. 8 is an undirected graph constructed for multi-instance service function bodies disclosed in an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an apparatus for deploying service function bodies in a data center disclosed in an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a controller disclosed in an embodiment of the present invention.
Detailed Description of the Embodiments
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The embodiments of the present invention disclose a method, an apparatus, and a controller for deploying service function bodies in a data center, which can reduce the traffic between service function bodies and avoid redundant paths and the repeated transmission of the data flows of the service function chains in the transmission network, which would impose an excessive burden on the data center transmission network. Detailed descriptions are given below.
Referring to FIG. 1, FIG. 1 shows a method for deploying service function bodies in a data center disclosed in an embodiment of the present invention. As shown in FIG. 1, the method may include the following steps:
S101. Obtain, from the data flows monitored by a network data flow monitor, the service function chains and the data flow traffic between the service function bodies in the service function chains.
In this embodiment of the present invention, the network controller may obtain, in the network initialization and preprocessing phase, the service function chains and the data flow traffic between the service function bodies in the chains from the data flows monitored by the network data flow monitor. Optionally, the network controller may also obtain topology statistics of the network from the monitored data flows; the topology information may include the physical machine resources of each server cluster and the bandwidth between server clusters, and according to this topology information the service function body combinations can be deployed to suitable server clusters.
S102. Generate an undirected graph according to the service function chains and the data flow traffic between the service function bodies in the service function chains, where the nodes of the undirected graph are the service function bodies, and the weight of an edge in the undirected graph is the association degree between the service function bodies, calculated according to the ratio of the data flow traffic between them to the sum of the data flow traffic on all the service function chains.
In this embodiment of the present invention, the undirected graph is generated according to the service function chains and the data flow traffic between the service function bodies in the chains. For example, for the correspondence between service function chains and data flow traffic shown in Table 1, the undirected graph shown in FIG. 2 can be generated, where the nodes of the graph are the service function bodies FW, IDS, Opt, Cache, and DPI, and the traffic between DPI and Opt is 20 + 15 + 5 = 40 (covering the traffic of the three service function chains IDS->DPI->Opt, DPI->Opt, and FW->Opt->DPI).
Table 1
  No.  Service function chain   Data flow traffic
  1    FW->IDS->Opt             30
  2    FW->Cache                25
  3    IDS->DPI->Opt            20
  4    DPI->Opt                 15
  5    Opt->Cache               10
  6    FW->Opt->DPI             5
  7    FW->Opt->Cache           5
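Although the application itself contains no code, the graph construction just described can be illustrated with a short sketch. The data structures and names below are ours, not the application's; the edge weight of a pair of service function bodies is the summed traffic of every chain in which the pair is adjacent:

```python
# Build the weighted undirected graph of Table 1: nodes are service function
# bodies, and the weight of edge (i, j) is the total traffic of all chains in
# which i and j are adjacent hops.
from collections import defaultdict

chains = [
    (["FW", "IDS", "Opt"], 30),
    (["FW", "Cache"], 25),
    (["IDS", "DPI", "Opt"], 20),
    (["DPI", "Opt"], 15),
    (["Opt", "Cache"], 10),
    (["FW", "Opt", "DPI"], 5),
    (["FW", "Opt", "Cache"], 5),
]

def build_graph(chains):
    """Sum, for every adjacent pair on every chain, that chain's traffic."""
    edges = defaultdict(int)
    for chain, traffic in chains:
        for a, b in zip(chain, chain[1:]):
            edges[frozenset((a, b))] += traffic  # undirected: order-free key
    return dict(edges)

graph = build_graph(chains)
# DPI-Opt carries 20 + 15 + 5 = 40 (chains 3, 4 and 6 of Table 1)
```

This reproduces the DPI-Opt weight of 40 cited in the text above.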
In this embodiment of the present invention, in order to express both the traffic magnitude between service function bodies and their execution-order relationship, the concept of association degree can be used to correct the edge weights of the undirected graph; the association degree is computed by a labeling-and-layering method, specifically by the following formula:
Figure PCTCN2015099075-appb-000002
where R(i, j): {i→j || j→i} represents the association degree between service function body i and service function body j; the association degree is the probability that the previous hop or the next hop of service function body i is service function body j, and is a metric of the association between service function body i and service function body j; p(i, j) is the ratio of the data flow traffic between service function body i and service function body j to the sum of the data flow traffic on all service function chains; and k1, ..., km denote the service function bodies that exist between service function body i and service function body j.
In this embodiment, formula (1) converts the undirected graph shown in FIG. 2 into the undirected graph shown in FIG. 3; the difference between FIG. 3 and FIG. 2 is that in FIG. 3 the edge weights are the association degrees between service function bodies. The definition of the association degree strengthens the relationship between two directly connected service function bodies with large mutual traffic. If only a direct connection exists between service function body i and service function body j, the association degree is determined solely by their mutual traffic; if other service function bodies also exist between i and j in the undirected graph, the intermediate service function bodies weaken the relationship between i and j, and this weakening degree can be expressed by
Figure PCTCN2015099075-appb-000003
If the association degree between two service function bodies is large, they should be placed in one server cluster, thereby saving traffic between service function bodies in the network; if the association degree between two service function bodies is 0, placing them in one server cluster or in connected server clusters easily leads to loops or network congestion.
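For the direct-connection case just described, where the association degree is determined solely by the mutual traffic, a minimal sketch follows. The names are ours, and the full multi-hop formula with the weakening term is given only by the application's formula image, so this sketch deliberately covers the directly connected case alone:

```python
def direct_association(edge_traffic, i, j, total_traffic):
    """p(i, j): share of the total chain traffic carried between i and j.
    For directly connected service function bodies this equals the
    association degree; the multi-hop weakening term is omitted here."""
    return edge_traffic.get(frozenset((i, j)), 0) / total_traffic

edge_traffic = {frozenset(("DPI", "Opt")): 40, frozenset(("FW", "IDS")): 30}
total = 30 + 25 + 20 + 15 + 10 + 5 + 5  # sum of Table 1 traffic = 110
p_dpi_opt = direct_association(edge_traffic, "DPI", "Opt", total)
```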
In this embodiment of the present invention, the labeling method computes the association degree between two service function bodies layer by layer. The method has two parts: in the first stage, according to the requirements of the service function chains, starting from an arbitrary service function body, the positions (i.e., labels) of the other service function bodies relative to it in the chain are computed; in the second stage, according to the labeling result, the association degree between two service function bodies is computed layer by layer, where the layer at which the association degree is computed is related to the number of service function bodies between them.
S103. Merge the service function bodies included in the undirected graph into service function body combinations by using the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm.
In this embodiment, the maximum association degree minimum number ring algorithm merges into one service function body combination the service function bodies included in the target ring whose sum of edge association degrees is the largest and whose number of service function bodies is the smallest. This implementation can merge three or more service function bodies with large mutual traffic into one combination and place that combination in one server cluster; compared with scattering these service function bodies across different server clusters, this reduces the traffic between server clusters in the data center.
In this embodiment, to find the minimum ring with the largest sum of association degrees in the undirected graph, the edge weights can first be modified to the reciprocals of the association degrees, converting the graph of FIG. 3 into the graph of FIG. 4; the problem of finding the ring with the largest sum of edge weights and the smallest number of nodes is thus transformed into finding the minimum ring of the undirected graph, i.e., the minimum ring of the graph shown in FIG. 4, which can be found based on the Floyd algorithm. The specific process is as follows. Assume a distance matrix D = [dis(i, j)], where dis(i, j) > 0 represents the distance from node i to node j; if there is no path between i and j, dis(i, j) is infinite, and dis(i, i) = 0. To record the nodes along the shortest path, another matrix P is used, defined as follows: p(i, j) stores the previous-hop node of node j on the path from node i to node j; if the value of p(i, j) is h, the shortest route from i to j is i->...->h->j, that is, h is the last node before j on the shortest route from i to j. The initial value of the P matrix is p(i, j) = i. When dis(i, j) > dis(i, k) + dis(k, j), the shortest path from i to j is changed to the route i->...->k->...->j; the value of dis(k, j) is known, i.e., the route k->...->j is known, so the previous-hop node of j on that route (i.e., p(k, j)) is also known, and since the route i->...->k->...->j is now taken, the previous-hop node of j is exactly p(k, j). Therefore, once dis(i, j) > dis(i, k) + dis(k, j) is found, p(k, j) is stored into p(i, j). Finding a ring in the undirected graph means finding two paths between a pair of nodes such that their weighted sum is minimal. Suppose there is a shortest path from i to j; if another short path from i to j is then found, a ring from i back to i is necessarily formed. For example, if 1->2, 2->3, 3->4, and 4->1 are all passable, the shortest path from 1 to 3 exists first, and when the paths from 1 to 4 and from 4 to 3 are later found, a ring is formed; if the sum of the edge weights of the two paths is the smallest among all rings, the minimum ring is formed, corresponding to the maximum association degree minimum number ring of the undirected graph.
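The Floyd-based minimum-ring search above can be sketched as a generic minimum-weight-cycle routine on an adjacency matrix. This is our own simplification: it omits the P path matrix and only returns the cycle weight, assuming edge weights are the reciprocals of the association degrees as described:

```python
import math

def min_cycle(w):
    """Minimum-weight cycle in an undirected graph given as an adjacency
    matrix w (math.inf where no edge, 0 on the diagonal). Before node k is
    allowed as an intermediate, dis[i][j] only uses nodes < k, so
    dis[i][j] + w[j][k] + w[k][i] closes a simple cycle through k."""
    n = len(w)
    dis = [row[:] for row in w]
    best = math.inf
    for k in range(n):
        # try closing a cycle i - ... - j - k - i with i, j < k
        for i in range(k):
            for j in range(i + 1, k):
                best = min(best, dis[i][j] + w[j][k] + w[k][i])
        # standard Floyd relaxation with k as intermediate
        for i in range(n):
            for j in range(n):
                if dis[i][k] + dis[k][j] < dis[i][j]:
                    dis[i][j] = dis[i][k] + dis[k][j]
    return best
```

On a triangle with edge weights 1, 2, 3 the routine returns the only cycle's weight, 6.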
Note that, for the service function bodies included in the maximum association degree minimum number ring determined from the undirected graph to be merged into a service function body combination, the physical machine resource constraint and the affinity constraint must also be satisfied, where the physical machine resource constraint means that the physical machine resources required by each combination do not exceed the maximum resources a physical machine is allowed to carry, and the affinity constraint refers to the different preferences of the computation-intensive and storage-intensive service function bodies in each combination for the computing and storage resources included in the physical machine resources, so as to fully exploit the affinity of the physical machine resources. If the service function bodies of the ring determined from the undirected graph do not satisfy these two constraints, the ring is marked, and the above method continues to search the graph for other minimum rings; when no minimum ring in the graph satisfies the constraints, the merging by the maximum association degree minimum number ring stops.
The maximum weight merging algorithm uses a breadth-first search algorithm to merge into one combination the service function bodies whose association degree in the undirected graph is greater than a preset threshold. It can be summarized as follows: randomly select a vertex v0 (generally an edge node of the undirected graph) and add it to a merging region; starting from v0, perform a breadth-first search on the graph and add the traversed vertices to the current merging region, maximizing the sum of edge weights of the region while the two constraints above are satisfied; the merging ends when either constraint is reached or no unmerged service function body node remains. The preset threshold can be set according to the maximum bandwidth in the network topology information. The maximum weight merging algorithm places service function bodies with large mutual traffic (strong association, e.g., greater than the preset threshold) into one combination without exceeding the physical machine resource limit and while satisfying the affinity constraint.
其中,最大权归并算法中随机选择的一顶点作为起始点对算法的复杂度影响较大,因此一般选择无向图的边缘节点作为起始点,为了寻找到靠近边缘的节点,通常可以在随机选择一顶点后,按照广度优先搜索算法对无向图中的节点进行遍历并标号,将标号最大的节点作为比较靠近边缘的节点。
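The breadth-first merging step can be sketched as a greedy routine. The names are hypothetical, and the affinity check and the edge-node labeling heuristic are omitted for brevity:

```python
from collections import deque

def bfs_merge(adj, start, resource, capacity, threshold):
    """Greedy sketch of maximum weight merging: BFS from `start`, absorbing
    neighbours whose association degree exceeds `threshold` while the
    combination stays within the physical-machine resource capacity."""
    group, used = {start}, resource[start]
    queue = deque([start])
    while queue:
        v = queue.popleft()
        # visit the strongest-association neighbours first
        for u, w in sorted(adj[v].items(), key=lambda kv: -kv[1]):
            if u not in group and w > threshold and used + resource[u] <= capacity:
                group.add(u)
                used += resource[u]
                queue.append(u)
    return group

adj = {"IDS": {"Opt": 5}, "Opt": {"IDS": 5, "DPI": 4}, "DPI": {"Opt": 4}}
merged = bfs_merge(adj, "IDS", {"IDS": 1, "Opt": 1, "DPI": 1},
                   capacity=2, threshold=0)
```

With a capacity of two units, the merge stops after absorbing Opt, leaving DPI outside the combination.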
Applying the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm to the undirected graph shown in FIG. 4 yields the undirected graph shown in FIG. 5, in which the service function bodies FW and Cache can be merged into one combination, and the service function bodies IDS, Opt, and DPI can be merged into another.
Optionally, after step S103, the FM (Fiduccia-Mattheyses) algorithm may further be used to adjust the service function bodies included in each combination. As shown in FIG. 6, the service function bodies at the two ends of the edges crossed by the partition line are adjusted: a service function body connected by an edge crossed by the partition line is moved into the neighbouring combination, and it is checked whether the adjusted service function chains have fewer loops or whether the traffic between combinations decreases. If so, the service function body is moved to the neighbouring combination; otherwise, the combinations obtained in step S103 are the final partition result. The main idea of the FM algorithm is to move, each time, a node on one side of the partition line to the other side: if the exchange improves the load balance in the undirected graph, the partition result after the exchange is kept; if the exchange does not improve the load balance or the loops in the undirected graph, the partition result before the exchange is kept.
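The move-and-check step of this FM-style adjustment can be sketched as follows. This is our own illustration: the partition and edge weights use the raw Table 1 traffic rather than association degrees, and the acceptance test checks only the cross-combination traffic, not loops:

```python
def cut_cost(groups, edges):
    """Total weight of edges whose endpoints lie in different combinations."""
    side = {n: g for g, nodes in groups.items() for n in nodes}
    return sum(w for e, w in edges.items()
               if len({side[n] for n in e}) > 1)

def fm_move(groups, edges, node, src, dst):
    """One FM-style step: tentatively move `node` across the partition line
    and keep the move only if the cross-combination traffic drops."""
    before = cut_cost(groups, edges)
    groups[src].remove(node)
    groups[dst].add(node)
    if cut_cost(groups, edges) < before:
        return True
    groups[dst].remove(node)  # no improvement: roll back
    groups[src].add(node)
    return False

# FIG. 5 partition of the Table 1 graph (weights are raw traffic here)
edges = {frozenset(p): t for p, t in [
    (("FW", "IDS"), 30), (("IDS", "Opt"), 30), (("FW", "Cache"), 25),
    (("IDS", "DPI"), 20), (("DPI", "Opt"), 40), (("Opt", "Cache"), 15),
    (("FW", "Opt"), 10)]}
groups = {"A": {"FW", "Cache"}, "B": {"IDS", "Opt", "DPI"}}
improved = fm_move(groups, edges, "Cache", "A", "B")  # worsens the cut
```

Moving Cache out of the FW-Cache combination would raise the cut from 55 to 65, so the move is rolled back.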
S104. Deploy each service function body combination to a server cluster whose physical machine resources match the resources required by the combination.
The physical machine resources may specifically be the computing resources and storage resources in the server cluster; the combinations that satisfy the physical machine resource and affinity constraints can be deployed to the corresponding server clusters, thereby reducing the traffic between service function bodies and avoiding an excessive burden on the data center transmission network.
In this embodiment, the deployment method performs the initial deployment with only one instance per service function body. During network operation, in order to balance the traffic load on a service function body, the number of service function bodies needs to be increased or decreased, i.e., multiple instances of a service function body. To deploy multiple instances, first, the data flows are distributed to the instances of the service function body according to the destination addresses of the flows and the load of the service function body. For example, for the service function chains of Table 1, the chains containing the service function body IDS are FW->IDS->Opt, DPI->IDS->Cache, and FW->Cache->IDS; assuming IDS has the instances IDS1 and IDS2, the data flows of IDS are distributed to IDS1 and IDS2, e.g., as the traffic on FW->IDS1->Opt, DPI->IDS2->Cache, and FW->Cache->IDS2 shown in Table 2. Then, an undirected graph is generated with each instance treated as a single service function body, as shown in FIG. 8, which is an undirected graph constructed for multi-instance service function bodies. Finally, for this multi-instance graph, the deployment method described in this embodiment of the present invention is applied: the service function body instances included in the graph are merged into combinations by the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm, and the combinations are deployed to server clusters. Since service function bodies with large mutual traffic are merged into one combination and placed on one server, the traffic between service function bodies is reduced, lowering the burden on the data center transmission network.
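The flow-to-instance distribution just described can be illustrated by a hypothetical least-loaded split. The text assigns flows by destination address and instance load; this sketch (our own naming and data) keeps only the load part:

```python
def assign_to_instances(flows, instances, load):
    """Route each chain's traffic through the instance of the shared
    service function body that currently carries the least load."""
    plan = {}
    for chain_id, traffic in flows:
        inst = min(instances, key=lambda i: load[i])
        load[inst] += traffic
        plan[chain_id] = inst
    return plan

# three chains sharing the IDS function, split across instances IDS1 / IDS2
plan = assign_to_instances([(1, 30), (2, 20), (3, 25)],
                           ["IDS1", "IDS2"], {"IDS1": 0, "IDS2": 0})
```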
Table 2
Figure PCTCN2015099075-appb-000004
It can be seen that the deployment method shown in FIG. 1 can obtain, from the data flows monitored by the network data flow monitor, the service function chains and the data flow traffic between the service function bodies in the chains; generate an undirected graph according to them; merge the service function bodies included in the graph into service function body combinations by the maximum association degree minimum number ring algorithm and the maximum weight merging algorithm; and deploy each combination to a server cluster whose physical machine resources match the resources required by the combination, thereby reducing the traffic between service function bodies and avoiding redundant paths and the repeated transmission of the data flows of the service function chains in the transmission network, which would impose an excessive burden on the data center transmission network.
Referring to FIG. 7, FIG. 7 is a schematic flowchart of another method for deploying service function bodies in a data center disclosed in an embodiment of the present invention. Compared with the deployment method shown in FIG. 1, the method shown in FIG. 7 may further perform the following steps after step S103 and before step S104:
S105. Determine, according to the server cluster where the data requested by the data flow is located, the optional deployment locations of the service function body combination.
S106. Separately calculate the cost of migrating the service function bodies in the service function body combination to each optional deployment location.
S107. Determine, as the target deployment location of the service function body combination, the server cluster where the data requested by the data flow with the largest traffic is located, among the deployment locations whose cost does not exceed a preset threshold, where the physical machine resources owned by the server cluster at the target deployment location satisfy the physical machine resources required by the combination.
Correspondingly, deploying the combinations in step S104 to server clusters whose physical machine resources match the resources required by the combinations may include: deploying each combination to its target deployment location.
In this embodiment of the present invention, the deployment location of each combination can be obtained from two considerations. First, the combination is deployed, as far as possible, to the server cluster where the data requested by the data flows is located, which reduces the traffic between server clusters, reduces the encroachment of data flows on the network, shortens the service function path, and avoids loops. Second, the deployment locations of combinations with large data volumes are determined preferentially, so that such combinations are deployed to the best locations, which to a certain extent further reduces the encroachment of data flows on the network. Because the determination of the optional deployment locations in step S105 mainly considers the server clusters toward which the data flows are biased, and the data flows of different service function bodies in the same combination are biased toward data in different server clusters, a combination corresponds to multiple optional server-cluster deployment locations; to minimize the pressure on the data transmission network, the combination can be deployed in the server cluster toward which the data flow with the largest traffic tends.
The deployment location in step S106 must also take into account the cost of service function body migration, that is, the cost of migrating the service function bodies in the combination from their original locations to the deployment location. Therefore, the target deployment location is determined not only by deploying the combination in the server cluster with the largest data flow tendency, but also by the total cost of migrating the service function bodies in the combination from their original locations to that location. The optimization goal is thus to minimize the routing cost while also minimizing the migration cost, where minimizing the migration cost can serve as a constraint: any deployment location whose migration cost does not exceed a preset threshold can be the target deployment location.
For the overhead that software-implemented service functions incur in the process of unloading and loading a service function body, a convex function
Figure PCTCN2015099075-appb-000005
is considered to describe this process, where a_s = 1 indicates that the location of service function body s has changed, and t_sv represents the transfer cost of moving service function body s to node v. The value of t_sv is subject to several factors, including the cost of the occupied storage space, the cost of the occupied computing resources, the cost of the disassembly and installation process, and the additional cost imposed by the service function body on forwarding node v and its surrounding links. If the value tends to infinity, transferring this service function body to node v is inappropriate.
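The selection rule of steps S105 to S107 can be sketched as a short routine. All names are ours; the per-cluster numbers are illustrative, and an infinite migration cost marks a transfer that is inappropriate, as described above:

```python
import math

def choose_target(flow_by_cluster, migration_cost, threshold, required, capacity):
    """Among candidate clusters whose migration cost does not exceed the
    threshold and whose resources cover the combination, pick the cluster
    attracting the most data-flow traffic."""
    feasible = [c for c in flow_by_cluster
                if migration_cost[c] <= threshold and capacity[c] >= required]
    if not feasible:
        return None  # no location satisfies the cost constraint
    return max(feasible, key=lambda c: flow_by_cluster[c])

target = choose_target(
    {"r1": 100, "r2": 80, "r3": 120},
    {"r1": 5, "r2": 2, "r3": math.inf},  # inf: transfer to r3 is inappropriate
    threshold=6, required=8,
    capacity={"r1": 10, "r2": 10, "r3": 10})
```

Here r3 attracts the most traffic but is excluded by its infinite migration cost, so r1 becomes the target deployment location.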
Note that the target deployment location of a combination must also satisfy the two constraints used when merging combinations, namely the physical machine resource constraint and the affinity constraint; in addition, after the combinations are deployed at their target deployment locations, the data flows in the data center must still pass through all service function bodies in order.
The deployment method shown in FIG. 7 can reduce the traffic between service function bodies through service function body combination and reduce the pressure on the internal transmission network of the data center. Further, it accounts for the migration cost during deployment: it determines the optional deployment locations of the combination according to the server clusters where the data requested by the data flows is located, separately calculates the cost of migrating the service function bodies in the combination to each optional location, and determines, as the target deployment location, the server cluster where the data requested by the data flow with the largest traffic is located among the locations whose cost does not exceed the preset threshold, thereby shortening the service forwarding path through the service function bodies as much as possible and avoiding loops.
请参阅图9,图9是本发明实施例公开的一种数据中心内服务功能体的部署装置的结构示意图,该部署装置可以包括:
获取模块210,用于从网络数据流监测器监测的数据流中获取服务功能链以及服务功能链中服务功能体之间的数据流流量;
生成模块220,用于根据服务功能链以及服务功能链中服务功能体之间的数据流流量生成无向图,无向图的节点为服务功能体,无向图中边的权重为根 据服务功能体之间的数据流流量占所有服务功能链上数据流流量之和的比值计算的服务功能体之间的关联度;
归并模块230,用于利用最大关联度最小数量环算法以及最大权归并算法将无向图包括的服务功能体归并为各服务功能体组合;最大关联度最小数量环算法是指将无向图中各边的关联度之和最大且包括的服务功能体数量最小的目标环所包括的服务功能体归并为一个服务功能体组合,最大权归并算法是指利用广度优先搜索算法将无向图中关联度大于预设阈值的服务功能体归并为一个服务功能体组合;
部署模块240,用于将各服务功能体组合分别部署到拥有与服务功能体组合所需的物理机资源相匹配的服务器集群中。
本发明实施例中,无向图中边的权重为根据服务功能体之间的数据流流量占所有服务功能链上数据流流量之和的比值计算的服务功能体之间的关联度,该关联度可以通过如下公式计算:
Figure PCTCN2015099075-appb-000006
其中,R(i,j):{i→j||j→i}表示服务功能体i与服务功能体j之间的关联度,关联度是指服务功能体i的前一跳或者后一跳是服务功能体j的概率,是对服务功能体i与服务功能体j之间关联的度量;p(i,j)为服务功能体i与服务功能体j之间的数据流流量占所有服务功能链上数据流流量之和的比值,其中,k1,...,km表示服务功能体i与服务功能体j之间存在的服务功能体。
本发明实施例中,图9所示的装置还可以包括:
调整模块250,用于在归并模块230利用最大关联度最小数量环算法以及最大权归并算法将无向图包括的服务功能体归并为各服务功能体组合之后,利用Fiduccia-Mattheyses(FM)算法调整各服务功能体组合包括的服务功能体。
本发明实施例中,利用最大关联度最小数量环算法以及最大权归并算法将无向图包括的服务功能体归并为各服务功能体组合所需满足的约束条件为:物理机资源约束和亲和性约束;其中,物理机资源约束是指各服务功能体组合所需的物理机资源不超过物理机允许承载的最大资源,亲和性约束是指各服务功能体组合中计算密集型的服务功能体与存储密集型的服务功能体对物理机资源包括的计算资源和存储资源的不同偏好,以充分利用物理机资源所具有的亲和性。
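上述两个约束条件可以用如下Python草图直观表达(资源项、类型标签等均为说明用的假设;亲和性此处简化为"组合内同时含计算密集型与存储密集型服务功能体",并非本发明限定的判定方式):

```python
def satisfies_constraints(combo, demand, machine_cap):
    """检查服务功能体组合combo是否满足物理机资源约束与亲和性约束(示意)。
    demand: {服务功能体: {"cpu": 计算资源需求, "mem": 存储资源需求,
                          "type": "compute" 或 "storage"}}"""
    cpu = sum(demand[s]["cpu"] for s in combo)
    mem = sum(demand[s]["mem"] for s in combo)
    # 物理机资源约束:组合所需资源不超过物理机允许承载的最大资源
    resource_ok = cpu <= machine_cap["cpu"] and mem <= machine_cap["mem"]
    types = {demand[s]["type"] for s in combo}
    # 亲和性约束:多于一个成员时,计算密集型与存储密集型应互补出现
    affinity_ok = len(combo) < 2 or types == {"compute", "storage"}
    return resource_ok and affinity_ok

demand = {"fw": {"cpu": 4, "mem": 2, "type": "compute"},
          "fw2": {"cpu": 3, "mem": 2, "type": "compute"},
          "cache": {"cpu": 1, "mem": 8, "type": "storage"}}
cap = {"cpu": 8, "mem": 16}
```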
本发明实施例中,图9所示的装置还可以包括:
第一确定模块260,用于根据网络数据流监测器监测的数据流所请求的数据所在的服务器集群确定服务功能体组合可选的部署位置;
计算模块270,用于分别计算将服务功能体组合中的服务功能体迁移到各可选的部署位置的开销;
第二确定模块280,用于将开销不超过预设阈值的部署位置中数据流流量最大的数据流请求的数据所在的服务器集群确定为服务功能体组合的目标部署位置,其中,目标部署位置的服务器集群拥有的物理机资源满足服务功能体组合所需的物理机资源;
相应地,部署模块240将服务功能体组合分别部署到拥有与服务功能体组合所需的物理机资源相匹配的服务器集群中,具体为将服务功能体组合分别部署到服务功能体组合的目标部署位置。
其中,本发明实施例中,获取模块210可以执行图1所示的数据中心内服务功能体的部署方法中的步骤S101的操作以及相应的实施方式;生成模块220可以执行图1所示的数据中心内服务功能体的部署方法中的步骤S102的操作以及相应的实施方式;归并模块230可以执行图1所示的数据中心内服务功能体的部署方法中的步骤S103的操作以及相应的实施方式;部署模块240可以执行图1所示的数据中心内服务功能体的部署方法中的步骤S104的操作以及相应的实施方式;第一确定模块260、计算模块270以及第二确定模块280可以执行图7中步骤S105至S107的操作以及相应的实施方式以确定目标部署位置。另外,本发明实施例装置中的模块可以根据实际需要进行合并、划分和删减,本发明实施例不做限制。
请参阅图10,图10是本发明实施例公开的一种控制器的结构示意图,其中,该控制器可以包括存储器310,通信接口320以及处理器330,其中,通信接口320可以为有线通信接口,无线通信接口或其组合,其中,有线通信接口例如可以为以太网接口。以太网接口可以是光接口,电接口或其组合。无线通信接口可以为WLAN接口,蜂窝网络通信接口或其组合等。存储器310可以包括易失性存储器(英文:volatile memory),例如随机存取存储器(英文:random-access memory,缩写:RAM);存储器也可以包括非易失性存储器(英文:non-volatile memory),例如快闪存储器(英文:flash memory),硬盘(英文:hard disk drive,缩写:HDD)或固态硬盘(英文:solid-state drive,缩写:SSD);存储器310还可以包括上述种类的存储器的组合。处理器330可以是中央处理器(英文:central processing unit,缩写:CPU),网络处理器(英文:network processor,缩写:NP)或者CPU和NP的组合。处理器330还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(英文:application-specific integrated circuit,缩写:ASIC),可编程逻辑器件(英文:programmable logic device,缩写:PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(英文:complex programmable logic device,缩写:CPLD),现场可编程逻辑门阵列(英文:field-programmable gate array,缩写:FPGA),通用阵列逻辑(英文:generic array logic,缩写:GAL)或其任意组合。存储器310可以用于存储数据中心内服务功能体的部署对应的程序代码,处理器330可以调用存储器310中存储的程序指令,通过通信接口320从网络数据流监测器监测的数据流中获取服务功能链以及服务功能链中服务功能体之间的数据流流量;存储器310,还可用于存储处理器330获取的服务功能链以及服务功能链中服务功能体之间的数据流流量;
处理器330,还用于根据服务功能链以及服务功能链中服务功能体之间的数据流流量生成无向图,无向图的节点为服务功能体,无向图中边的权重为根据服务功能体之间的数据流流量占所有服务功能链上数据流流量之和的比值计算的服务功能体之间的关联度;
处理器330,还用于利用最大关联度最小数量环算法以及最大权归并算法将无向图包括的服务功能体归并为各服务功能体组合;最大关联度最小数量环算法是指将无向图中各边的关联度之和最大且包括的服务功能体数量最小的目标环所包括的服务功能体归并为一个服务功能体组合,最大权归并算法是指利用广度优先搜索算法将无向图中关联度大于预设阈值的服务功能体归并为一个服务功能体组合;
处理器330,还用于通过通信接口320将各服务功能体组合分别部署到拥有与服务功能体组合所需的物理机资源相匹配的服务器集群中。
本发明实施例中,无向图中边的权重为根据服务功能体之间的数据流流量占所有服务功能链上数据流流量之和的比值计算的服务功能体之间的关联度具体为:
Figure PCTCN2015099075-appb-000007
其中,R(i,j):{i→j||j→i}表示服务功能体i与服务功能体j之间的关联度,关联度是指服务功能体i的前一跳或者后一跳是服务功能体j的概率,是对服务功能体i与服务功能体j之间关联的度量;p(i,j)为服务功能体i与服务功能体j之间的数据流流量占所有服务功能链上数据流流量之和的比值,其中,k1,...,km表示服务功能体i与服务功能体j之间存在的服务功能体。
本发明实施例中,处理器330利用最大关联度最小数量环算法以及最大权归并算法将无向图包括的服务功能体归并为各服务功能体组合之后,还用于利用Fiduccia-Mattheyses(FM)算法调整各服务功能体组合包括的服务功能体。
本发明实施例中,利用最大关联度最小数量环算法以及最大权归并算法将无向图包括的服务功能体归并为各服务功能体组合所需满足的约束条件为:物理机资源约束和亲和性约束;其中,物理机资源约束是指各服务功能体组合所需的物理机资源不超过物理机允许承载的最大资源,亲和性约束是指各服务功能体组合中计算密集型的服务功能体与存储密集型的服务功能体对物理机资源包括的计算资源和存储资源的不同偏好以充分利用物理机资源所具有的亲和性。
本发明实施例中,处理器330还用于根据网络数据流监测器监测的数据流所请求的数据所在的服务器集群确定服务功能体组合可选的部署位置;分别计算将服务功能体组合中的服务功能体迁移到各可选的部署位置的开销;以及将开销不超过预设阈值的部署位置中数据流流量最大的数据流请求的数据所在的服务器集群确定为服务功能体组合的目标部署位置,其中,目标部署位置的服务器集群拥有的物理机资源满足服务功能体组合所需的物理机资源;
相应地,处理器330将服务功能体组合分别部署到拥有与服务功能体组合所需的物理机资源相匹配的服务器集群中,具体为将服务功能体组合分别部署到服务功能体组合的目标部署位置。
其中,处理器330调用存储器310中存储的程序指令,可以执行图1或图7所示的发明实施例中的一个或多个步骤,或其中可选的实施方式。
本发明实施例进一步公开一种计算机存储介质,该计算机存储介质存储有计算机程序,当计算机存储介质中的计算机程序被计算机读取并执行时,能够使得计算机完成本发明实施例公开的数据中心内服务功能体的部署方法的全部步骤。
需要说明的是,对于前述的各个方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制,因为依据本发明,某一些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本发明所必须的。
本领域普通技术人员可以理解,上述实施例的各种方法中的全部或部分步骤可以通过程序来指令相关的硬件完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、随机存取存储器(Random Access Memory,RAM)、磁盘或光盘等。
以上对本发明实施例所提供的数据中心内服务功能体的部署方法、装置及控制器进行了详细介绍,本文中应用了具体个例对本发明的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本发明的方法及其核心思想;同时,对于本领域的一般技术人员,依据本发明的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本发明的限制。

Claims (15)

  1. 一种数据中心内服务功能体的部署方法,其特征在于,包括:
    从网络数据流监测器监测的数据流中获取服务功能链以及所述服务功能链中服务功能体之间的数据流流量;
    根据所述服务功能链以及所述服务功能链中所述服务功能体之间的数据流流量生成无向图,所述无向图的节点为所述服务功能体,所述无向图中边的权重为根据所述服务功能体之间的数据流流量占所有所述服务功能链上数据流流量之和的比值计算的所述服务功能体之间的关联度;
    利用最大关联度最小数量环算法以及最大权归并算法将所述无向图包括的服务功能体归并为各服务功能体组合;所述最大关联度最小数量环算法是指将所述无向图中各边的关联度之和最大且包括的服务功能体数量最小的目标环所包括的服务功能体归并为一个服务功能体组合,所述最大权归并算法是指利用广度优先搜索算法将所述无向图中关联度大于预设阈值的服务功能体归并为一个服务功能体组合;
    将所述各服务功能体组合分别部署到拥有与所述各服务功能体组合所需的物理机资源相匹配的服务器集群中。
  2. 根据权利要求1所述的方法,其特征在于,所述无向图中边的权重为根据所述服务功能体之间的数据流流量占所有所述服务功能链上数据流流量之和的比值计算的所述服务功能体之间的关联度具体为:
    Figure PCTCN2015099075-appb-100001
    其中,R(i,j):{i→j||j→i}表示服务功能体i与服务功能体j之间的关联度,所述关联度是指所述服务功能体i的前一跳或者后一跳是所述服务功能体j的概率,是对所述服务功能体i与所述服务功能体j之间关联的度量;p(i,j)为所述服务功能体i与所述服务功能体j之间的数据流流量占所有服务功能链上数据流流量之和的比值,其中,k1,...,km表示服务功能体i与服务功能体j之间存在的服务功能体。
  3. 根据权利要求1或2所述的方法,其特征在于,所述利用最大关联度最小数量环算法以及最大权归并算法将所述无向图包括的服务功能体归并为各服务功能体组合之后,所述方法还包括:
    利用Fiduccia-Mattheyses(FM)算法调整所述各服务功能体组合包括的服务功能体。
  4. 根据权利要求1至3任一项所述的方法,其特征在于,利用最大关联度最小数量环算法以及最大权归并算法将所述无向图包括的服务功能体归并为各服务功能体组合所需满足的约束条件为:物理机资源约束和亲和性约束;其中,所述物理机资源约束是指所述各服务功能体组合所需的物理机资源不超过物理机允许承载的最大资源,所述亲和性约束是指所述各服务功能体组合中计算密集型的服务功能体与存储密集型的服务功能体对所述物理机资源包括的计算资源和存储资源的不同偏好以充分利用所述物理机资源所具有的亲和性。
  5. 根据权利要求1至4任一项所述的方法,其特征在于,所述方法还包括:
    根据网络数据流监测器监测的数据流所请求的数据所在的服务器集群确定所述服务功能体组合可选的部署位置;
    分别计算将所述服务功能体组合中的服务功能体迁移到所述各可选的部署位置的开销;
    将所述开销不超过预设阈值的部署位置中数据流流量最大的数据流请求的数据所在的服务器集群确定为所述服务功能体组合的目标部署位置,其中,所述目标部署位置的服务器集群拥有的物理机资源满足所述服务功能体组合所需的物理机资源;
    将所述各服务功能体组合分别部署到拥有与所述各服务功能体组合所需的物理机资源相匹配的服务器集群中,包括:
    将所述各服务功能体组合分别部署到所述各服务功能体组合的目标部署位置。
  6. 一种数据中心内服务功能体的部署装置,其特征在于,包括:
    获取模块,用于从网络数据流监测器监测的数据流中获取服务功能链以及所述服务功能链中服务功能体之间的数据流流量;
    生成模块,用于根据所述服务功能链以及所述服务功能链中所述服务功能体之间的数据流流量生成无向图,所述无向图的节点为所述服务功能体,所述无向图中边的权重为根据所述服务功能体之间的数据流流量占所有所述服务功能链上数据流流量之和的比值计算的所述服务功能体之间的关联度;
    归并模块,用于利用最大关联度最小数量环算法以及最大权归并算法将所述无向图包括的服务功能体归并为各服务功能体组合;所述最大关联度最小数量环算法是指将所述无向图中各边的关联度之和最大且包括的服务功能体数量最小的目标环所包括的服务功能体归并为一个服务功能体组合,所述最大权归并算法是指利用广度优先搜索算法将所述无向图中关联度大于预设阈值的服务功能体归并为一个服务功能体组合;
    部署模块,用于将所述各服务功能体组合分别部署到拥有与所述各服务功能体组合所需的物理机资源相匹配的服务器集群中。
  7. 根据权利要求6所述的装置,其特征在于,所述无向图中边的权重为根据所述服务功能体之间的数据流流量占所有所述服务功能链上数据流流量之和的比值计算的所述服务功能体之间的关联度具体为:
    Figure PCTCN2015099075-appb-100002
    其中,R(i,j):{i→j||j→i}表示服务功能体i与服务功能体j之间的关联度,所述关联度是指所述服务功能体i的前一跳或者后一跳是所述服务功能体j的概率,是对所述服务功能体i与所述服务功能体j之间关联的度量;p(i,j)为所述服务功能体i与所述服务功能体j之间的数据流流量占所有服务功能链上数据流流量之和的比值,其中,k1,...,km表示服务功能体i与服务功能体j之间存在的服务功能体。
  8. 根据权利要求6或7所述的装置,其特征在于,所述装置还包括:
    调整模块,用于在所述归并模块利用最大关联度最小数量环算法以及最大权归并算法将所述无向图包括的服务功能体归并为各服务功能体组合之后,利用Fiduccia-Mattheyses(FM)算法调整所述各服务功能体组合包括的服务功能体。
  9. 根据权利要求6至8任一项所述的装置,其特征在于,利用最大关联度最小数量环算法以及最大权归并算法将所述无向图包括的服务功能体归并为各服务功能体组合所需满足的约束条件为:物理机资源约束和亲和性约束;其中,所述物理机资源约束是指所述各服务功能体组合所需的物理机资源不超过物理机允许承载的最大资源,所述亲和性约束是指所述各服务功能体组合中计算密集型的服务功能体与存储密集型的服务功能体对所述物理机资源包括的计算资源和存储资源的不同偏好以充分利用所述物理机资源所具有的亲和性。
  10. 根据权利要求6至9任一项所述的装置,其特征在于,所述装置还包括:
    第一确定模块,用于根据网络数据流监测器监测的数据流所请求的数据所在的服务器集群确定所述服务功能体组合可选的部署位置;
    计算模块,用于分别计算将所述服务功能体组合中的服务功能体迁移到所述各可选的部署位置的开销;
    第二确定模块,用于将所述开销不超过预设阈值的部署位置中数据流流量最大的数据流请求的数据所在的服务器集群确定为所述服务功能体组合的目标部署位置,其中,所述目标部署位置的服务器集群拥有的物理机资源满足所述服务功能体组合所需的物理机资源;
    所述部署模块将所述各服务功能体组合分别部署到拥有与所述各服务功能体组合所需的物理机资源相匹配的服务器集群中,具体为将所述各服务功能体组合分别部署到所述各服务功能体组合的目标部署位置。
  11. 一种控制器,其特征在于,包括处理器、存储器和通信接口;
    所述处理器,用于从网络数据流监测器监测的数据流中获取服务功能链以及所述服务功能链中服务功能体之间的数据流流量;
    所述存储器,用于存储所述处理器获取的服务功能链以及所述服务功能链中服务功能体之间的数据流流量;
    所述处理器,还用于根据所述服务功能链以及所述服务功能链中所述服务功能体之间的数据流流量生成无向图,所述无向图的节点为所述服务功能体,所述无向图中边的权重为根据所述服务功能体之间的数据流流量占所有所述服务功能链上数据流流量之和的比值计算的所述服务功能体之间的关联度;
    所述处理器,还用于利用最大关联度最小数量环算法以及最大权归并算法将所述无向图包括的服务功能体归并为各服务功能体组合;所述最大关联度最小数量环算法是指将所述无向图中各边的关联度之和最大且包括的服务功能体数量最小的目标环所包括的服务功能体归并为一个服务功能体组合,所述最大权归并算法是指利用广度优先搜索算法将所述无向图中关联度大于预设阈值的服务功能体归并为一个服务功能体组合;
    所述处理器,还用于通过所述通信接口将所述各服务功能体组合分别部署到拥有与所述各服务功能体组合所需的物理机资源相匹配的服务器集群中。
  12. 根据权利要求11所述的控制器,其特征在于,所述无向图中边的权重为根据所述服务功能体之间的数据流流量占所有所述服务功能链上数据流流量之和的比值计算的所述服务功能体之间的关联度具体为:
    Figure PCTCN2015099075-appb-100003
    其中,R(i,j):{i→j||j→i}表示服务功能体i与服务功能体j之间的关联度,所述关联度是指所述服务功能体i的前一跳或者后一跳是所述服务功能体j的概率,是对所述服务功能体i与所述服务功能体j之间关联的度量;p(i,j)为所述服务功能体i与所述服务功能体j之间的数据流流量占所有服务功能链上数据流流量之和的比值,其中,k1,...,km表示服务功能体i与服务功能体j之间存在的服务功能体。
  13. 根据权利要求11或12所述的控制器,其特征在于,所述处理器利用最大关联度最小数量环算法以及最大权归并算法将所述无向图包括的服务功能体归并为各服务功能体组合之后,还用于利用Fiduccia-Mattheyses(FM)算法调整所述各服务功能体组合包括的服务功能体。
  14. 根据权利要求11至13任一项所述的控制器,其特征在于,利用最大关联度最小数量环算法以及最大权归并算法将所述无向图包括的服务功能体归并为各服务功能体组合所需满足的约束条件为:物理机资源约束和亲和性约束;其中,所述物理机资源约束是指所述各服务功能体组合所需的物理机资源不超过物理机允许承载的最大资源,所述亲和性约束是指所述各服务功能体组合中计算密集型的服务功能体与存储密集型的服务功能体对所述物理机资源包括的计算资源和存储资源的不同偏好以充分利用所述物理机资源所具有的亲和性。
  15. 根据权利要求11至14任一项所述的控制器,其特征在于,所述处理器还用于根据网络数据流监测器监测的数据流所请求的数据所在的服务器集群确定所述服务功能体组合可选的部署位置;分别计算将所述服务功能体组合中的服务功能体迁移到所述各可选的部署位置的开销;以及将所述开销不超过预设阈值的部署位置中数据流流量最大的数据流请求的数据所在的服务器集群确定为所述服务功能体组合的目标部署位置,其中,所述目标部署位置的服务器集群拥有的物理机资源满足所述服务功能体组合所需的物理机资源;
    所述处理器将所述服务功能体组合分别部署到拥有与所述服务功能体组合所需的物理机资源相匹配的服务器集群中,具体为将所述服务功能体组合分别部署到所述服务功能体组合的目标部署位置。
PCT/CN2015/099075 2015-12-25 2015-12-25 一种数据中心内服务功能体部署方法、装置及控制器 WO2017107215A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/099075 WO2017107215A1 (zh) 2015-12-25 2015-12-25 一种数据中心内服务功能体部署方法、装置及控制器

Publications (1)

Publication Number Publication Date
WO2017107215A1 (zh)

Family

ID=59088771

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109379230A (zh) * 2018-11-08 2019-02-22 电子科技大学 一种基于广度优先搜索的服务功能链部署方法
EP3687111A4 (en) * 2017-09-18 2021-06-02 Institute of Acoustics, Chinese Academy of Sciences METHOD OF MANUFACTURING A NESTED CONTAINER WITHOUT LAPPING AND FULLY COVERED IN THE SAME LAYER AND READABLE STORAGE MEDIUM

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102664787A (zh) * 2012-04-01 2012-09-12 华为技术有限公司 决策树的生成方法和装置
CN103051564A (zh) * 2013-01-07 2013-04-17 杭州华三通信技术有限公司 资源动态调配的方法和装置
CN105141617A (zh) * 2015-09-14 2015-12-09 上海华为技术有限公司 一种数据中心间服务功能体的部署调整方法及装置


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15911212; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15911212; Country of ref document: EP; Kind code of ref document: A1)