CN105553846B - A method of distributing resource in software defined network - Google Patents

A method of distributing resource in software defined network

Info

Publication number
CN105553846B
CN105553846B
Authority
CN
China
Prior art keywords
network
switching equipment
stream
flow
equipment
Prior art date
Legal status
Active
Application number
CN201610097262.3A
Other languages
Chinese (zh)
Other versions
CN105553846A (en)
Inventor
王炜
丁丹
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University
Priority to CN201610097262.3A
Publication of CN105553846A
Application granted
Publication of CN105553846B


Classifications

    All classifications fall under section H (ELECTRICITY), class H04 (ELECTRIC COMMUNICATION TECHNIQUE), subclass H04L (TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION):
    • H04L45/38 Flow based routing (H04L45/00 Routing or path finding of packets in data switching networks)
    • H04L43/0876 Network utilisation, e.g. volume of load or congestion level (H04L43/00 Arrangements for monitoring or testing data switching networks; H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters)
    • H04L43/0882 Utilisation of link capacity
    • H04L45/12 Shortest path evaluation (H04L45/00 Routing or path finding of packets in data switching networks)
    • H04L45/125 Shortest path evaluation based on throughput or bandwidth
    • H04L47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering (H04L47/00 Traffic control in data switching networks; H04L47/10 Flow control; Congestion control; H04L47/12 Avoiding congestion; Recovering from congestion)
    • H04L47/78 Architectures of resource allocation (H04L47/70 Admission control; Resource allocation)
    • H04L47/80 Actions related to the user profile or the type of traffic (H04L47/70 Admission control; Resource allocation)

Abstract

The invention discloses a method of allocating resources in a software defined network, involving network devices, a controller and switching devices and comprising the following steps: step 1, collect network-wide resource information; step 2, collect the flow information of the upcoming period; step 3, input the flow space information into a linear programming model and solve it; step 4, check whether the solution obtained in step 3 splits any flow; step 5, invoke an approximate algorithm to aggregate each split flow onto a single path; step 6, generate the flow-processing rules of each switching device; step 7, install the flow-processing rules on the switching devices. The resource allocation method of the invention is very convenient to deploy and to update; the algorithm solves quickly; in most cases the solving result of the linear programming can be used directly, without invoking the approximate algorithm for further optimization, which speeds up execution at the architectural level; and a balanced allocation of network resources is genuinely achieved.

Description

A method of distributing resource in software defined network
Technical field
The present invention relates to the fields of software defined networks and data center networks, and more particularly to a method of solving the network flow problem based on a linear programming algorithm.
Background art
The rational allocation of network resources (switch central processing units, memory, links, etc.) has always been an important topic in network research. With the continuous development of cloud computing technology, data center networks have become increasingly important, and the traffic inside them keeps rising; some bursts of heavy traffic have even exceeded the processing capacity of a single network link. It is therefore especially important to allocate network resources rationally, so as to handle the overall flow space of the network effectively without causing unbalanced utilization of link and switching resources.
In a traditional network a switching device is a closed box: each switching device makes its packet-processing decisions independently, and the devices essentially act on their own without any global view of the network. Under these conditions it is essentially impossible to achieve a network-wide optimal allocation of network resources.
The emergence of software defined networking fundamentally changes this situation. In the architecture of a software defined network the control plane is completely separated from the forwarding plane (data plane); a controller located at the center of the network can translate user-defined high-level processing policies (Policy) into concrete processing rules and install them on the switching devices. The whole network thus gains a global view, and the goal of optimized resource allocation can be reached.
Academia has carried out relevant research on this topic: "One Big Switch" [CoNEXT '13], vCRIB [NSDI '13] and OFFICER [Infocom '15] are research results in this field. A common feature of these works is that they consider only the resource constraint of switch memory and largely ignore other resource constraints such as CPU, even though the latter often cannot be neglected, especially on virtual network devices. For example, vCRIB [NSDI '13] points out that when 2K rules (600 of them wildcard rules) are installed on Open vSwitch, a network flow arrival rate of only 1K/s is enough to drive CPU utilization to 100%, so handling the CPU resource constraint is even more important than handling the memory constraint.
Although OFFICER [Infocom '15] proposes a general optimization framework for solving such problems, it uses integer linear programming, whose limited solution space makes it unsuitable for large data center network topologies; the approximate algorithm it proposes has excessive complexity when the flow space is large; and it, too, mainly considers the memory constraint. The present invention improves on this basis and proposes an allocation model that balances network resources across the whole network while processing the network flows of a period of time, and solves it with the method of linear programming; such a model is simple to use and suited to real network topology environments.
Patent US20040073673 is the work most similar to the present invention, but its target is the reasonable placement of network applications (Web, MapReduce, Database) on servers, with the main purpose of using server resources rationally, whereas the target of the present invention is the cooperative processing of network flows by the switching devices, and the present invention can be applied to any software defined network.
Summary of the invention
Aiming at the problems existing in the prior art, the invention proposes a method that allocates network resources in a software defined network in a reasonable and balanced way, so that the network flows arriving within a period of time are processed and the utilization of network resources becomes rational and optimal. It solves the problem of how to make full use of the global view of a software defined network to allocate resources reasonably across the whole network.
In order to solve the above problem, the invention adopts the following technical scheme:
The problem is modeled as a multi-commodity flow problem that maximizes the remaining processing capacity of all switching devices in the network and is solved with the method of linear programming; the linear programming (Linear Programming) solver used to solve the problem integrates the most advanced LP algorithm techniques and solves very quickly. Most of the network flows obtained from the model solution can be handled along shortest paths without being split; for the scenarios in which a flow does have to be split, the invention proposes an approximate algorithm based on the result of the linear programming.
The approximate algorithm determines one shortest path for every <source station, target station> pair of network devices in the network and keeps the paths as disjoint as possible, i.e. the paths corresponding to different <source station, target station> pairs share as few switching devices and links as possible. The above linear programming model is then applied again on the obtained shortest paths to determine the flow allocated to each switching device on them.
A method of allocating resources in a software defined network, characterized in that it involves network devices, a controller and switching devices and comprises the following steps:
Step 1, collect network-wide resource information: the controller first communicates with the switching devices to obtain the overall resource situation of the network, including the processing capacity of the switching devices and link bandwidth information;
Step 2, collect the flow information of the upcoming period: collect the flow space information requested by users;
Step 3, input the flow space information into the linear programming model and solve it;
Step 4, analyze the result: check whether the solution of step 3 splits any flow; if a flow is split, execute step 5; if no flow is split, go directly to step 6;
Step 5, invoke the approximate algorithm to aggregate each split flow onto a single path;
Step 6, invoke the rule-generation module of the controller to generate the flow-processing rules of each switching device;
Step 7, install the flow-processing rules on the switching devices.
The switching devices are switching devices supporting OpenFlow, in particular the virtual switching device Open vSwitch; they are divided into ingress switching devices, intermediate switching devices and egress switching devices. The network devices communicate with one another, and with the switching devices, through the OpenFlow protocol.
The linear programming model is installed on the controller and is mainly used to solve for the allocation of central processing unit resources in the software defined network.
The approximate algorithm determines one shortest path for each pair of network devices in the network topology and keeps the paths as disjoint as possible; the linear programming model is then applied again on the obtained shortest paths to determine the flow allocated to each switching device on them.
The linear programming model includes:
Objective function and constraint conditions (1)-(3):
min Σ objVarA + objVarB    (3)
Constraint (2) ensures that network flows run along shortest paths in the network as far as possible;
Bandwidth constraint:
Σ_{f∈F} (P_{f,l} + U_{f,l}) ≤ B_l  for every link l ∈ L+    (4)
Constraint (4) ensures that all processed and unprocessed flow passing over a link does not exceed the capacity of that link;
Flow balance constraints:
Σ_{u∈N-(s)} (P_{f,(u,s)} + U_{f,(u,s)}) = Σ_{v∈N+(s)} (P_{f,(s,v)} + U_{f,(s,v)})  for every f ∈ F, s ∈ S    (5)
P_{f,(u,s)} = 0  if u ∈ Se    (6)
U_{f,(s,v)} = 0  if v ∈ Se    (7)
Constraint (5) specifies that the flow entering a switching device s must be quantitatively equal to the flow leaving s; constraint (6) specifies that when s is an ingress switching device the incoming flow must be entirely unprocessed; constraint (7) specifies that when s is an egress switching device no unprocessed flow may leave it, i.e. all flow leaving the network must have been processed;
Σ_{v∈N+(s)} P_{f,(s,v)} ≥ Σ_{u∈N-(s)} P_{f,(u,s)}  for every f ∈ F, s ∈ S    (8)
Σ_{v∈N+(s)} U_{f,(s,v)} ≤ Σ_{u∈N-(s)} U_{f,(u,s)}  for every f ∈ F, s ∈ S    (9)
Constraints (8) and (9) specify that, for the same flow, between entering and leaving a switching device s the processed part may only increase or remain unchanged and the unprocessed part may only decrease or remain unchanged;
Switch processing constraint:
Σ_{f∈F} Σ_{v∈N+(s)} P_{f,(s,v)} - Σ_{f∈F} Σ_{u∈N-(s)} P_{f,(u,s)} ≤ C_s - R_s  for every s ∈ S    (10)
Constraint (10) specifies that, for any switching device s, the sum of all flow processed by s must not exceed its total processing capacity minus a reserved value of that switching device. The symbols P, U, C, R, B, N+(s) and N-(s) are defined in the detailed description below.
The main technical features of the invention are: first, the controller of the SDN network computes the optimal result in real time and installs the processing rules on the switching devices; second, an optimal allocation model for the processor resources of the switching devices; third, cooperative processing of network flows by all switching devices; fourth, fast solving with a linear programming algorithm; fifth, the approximate algorithm. The basic steps are: collect the network flow space, solve with the linear programming model, analyze the solution, redistribute flows with the approximate algorithm when necessary, and install the resulting rules on the switching devices.
Further, the LP solver used to solve the linear programming model integrates primal and dual simplex algorithms, a parallel barrier algorithm with crossover, and concurrent optimization and sifting algorithms.
Further, the network topology includes data center topologies such as FatTree and VL2 as well as the topologies of various local and wide area networks.
Compared with the prior art, the invention has the following advantages: the invention uses an optimization model to reasonably allocate the resources of a software defined network, in particular processor resources, so that the load of the whole network is balanced when handling various flow spaces. The invention can be applied to mainstream software defined network controllers such as POX, NOX and OpenDaylight; the technical solution of the invention solves quickly and can be applied to large real network topologies; and in most cases the solving result can be used directly.
The invention has the following beneficial effects: first, all elements of an existing software defined network can be fully reused and no modification of the hardware is needed; even the controller that runs the algorithm only needs to be able to call the linear programming solver, which requires only small changes to the controller code, so deployment and updates are very convenient. Second, the algorithm solves quickly: in actual tests a problem with about three million decision variables needed only about twenty seconds, and the supported network scale is large, up to hundreds of nodes and tens of thousands of flows. Third, in most cases the solving result of the linear programming can be used directly, without invoking the approximate algorithm for further optimization, which speeds up execution at the architectural level. Fourth, a balanced allocation of network resources is genuinely achieved, avoiding the situation where one region of the network is extremely busy, with node and link utilization close to 100%, while other regions are idle, with node and link utilization close to 0.
Detailed description of the invention
Fig. 1 shows an application scenario of an embodiment of the present invention.
Fig. 2 shows an implementation flow of the invention.
Fig. 3 shows flow-splitting example scenario 1 of the invention (load balancing).
Fig. 4 shows flow-splitting example scenario 2 of the invention (resource constraint).
Fig. 5 shows the network flow graph after the approximate algorithm of the invention solves the problem of example 1.
Fig. 6 shows the network flow graph after the approximate algorithm of the invention solves the problem of example 2.
Fig. 7 shows a path allocation example of the approximate algorithm of the invention.
Specific embodiment
The invention will be further described with reference to the accompanying drawings and embodiments.
The present invention models the problem of processing the network flows of a period of time in a resource-constrained network as a multi-commodity flow problem, formulates the objective function and the constraint conditions for solving this problem, and after solving installs the result in the network for execution.
Fig. 1(a) and Fig. 1(b) show an implementation scenario of an embodiment of the present invention. The scenario contains one controller and four switching devices; the switching devices are switches supporting OpenFlow, and the controller can communicate normally with all switching devices through the OpenFlow protocol. Among the four switching devices, Switch1 is the ingress switching device, through which all network flows enter; Switch4 is the egress switching device, through which all network flows leave; Switch2 and Switch3 are intermediate switching devices. The controller is equipped with an efficient linear programming solver (Gurobi, CPLEX, etc.); it translates the solved result into concrete switch flow-table rules and installs them on each switching device. Fig. 1(a) shows the installation of the flow-processing rules from the controller to the switching devices. Suppose there is a flow from Switch1 to Switch4: after the controller has computed the result, the corresponding switch rules are loaded onto Switch1, Switch3 and Switch4. Fig. 1(b) shows the processing path of the network flow (dashed line) after the rules have been installed successfully.
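Purely as an illustration of how a computed path such as Switch1, Switch3, Switch4 might be turned into per-switch entries, the sketch below builds OpenFlow-style match/action rules as plain dictionaries; the rule layout, the match fields and the install() stub are assumptions made for the example and do not correspond to any particular controller API.

```python
# Illustrative sketch only: turn a computed path into per-switch match/action entries.
# The rule layout and the install() stub are assumptions, not a specific controller's API.

def path_to_rules(path, match):
    """path: ordered list of switch names, e.g. ["Switch1", "Switch3", "Switch4"]."""
    rules = {}
    for i, switch in enumerate(path):
        next_hop = path[i + 1] if i + 1 < len(path) else "egress port"
        rules[switch] = {"match": match, "action": {"forward_to": next_hop}}
    return rules

def install(switch, rule):
    # Placeholder for the controller's southbound (OpenFlow) rule installation.
    print(f"install on {switch}: {rule}")

flow_match = {"src": "10.0.0.1", "dst": "10.0.0.4"}   # assumed addresses for the example flow
for switch, rule in path_to_rules(["Switch1", "Switch3", "Switch4"], flow_match).items():
    install(switch, rule)
```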
Fig. 2 shows an implementation flow of this embodiment. In the implementation the controller first communicates with the switching devices to obtain the overall resource situation of the network, including the processing capacity of each switching device and all link bandwidth information (201). It then collects the user traffic requests, i.e. the flow space information of the traffic that will traverse the network in the following period of time (1 s) (202). These data are fed into the linear programming solver and solved (203), and the result is analyzed, mainly to check whether any flow is split (204). If no flow is split, i.e. every flow travels from its source to its sink along a single path, the rule-generation module of the controller is called to generate the concrete processing rules of each switch (205); if some flow is split, the approximate algorithm is called to aggregate the split flow onto a single path (206). Finally the flow-processing rules are installed on the switching devices (207).
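To make the flow of Fig. 2 concrete, the following minimal sketch outlines the periodic control loop; every helper called on the controller object (collect_resources, collect_flow_space, solve_lp, has_split_flow, approximate_algorithm, generate_rules, install_rules) is a hypothetical placeholder for the corresponding step, not code taken from the patent.

```python
# Hypothetical outline of the control loop of Fig. 2 (steps 201-207); all helpers are placeholders.
import time

def control_loop(controller, period_s=1.0):
    while True:
        resources = controller.collect_resources()                  # 201: switch capacities, link bandwidths
        flow_space = controller.collect_flow_space()                # 202: flows expected in the next period
        solution = controller.solve_lp(resources, flow_space)       # 203: solve the linear programming model
        if controller.has_split_flow(solution):                     # 204: is any flow split over several paths?
            solution = controller.approximate_algorithm(solution)   # 206: merge each split flow onto one path
        rules = controller.generate_rules(solution)                 # 205: per-switch flow-processing rules
        controller.install_rules(rules)                             # 207: push the rules via OpenFlow
        time.sleep(period_s)                                        # wait for the next period (1 s here)
```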
An object of the present invention is to optimize the allocation of switching device processing resources (mainly CPU) so that the remaining processing capacity of the whole network is maximized. The larger the remaining processing capacity of the network, the better it can respond to subsequently arriving traffic, in particular bursts, so that the use of the whole network is optimal. To describe the linear programming model of this embodiment in detail, the notation of the model is first introduced:
F: the flow space
S: the set of all switching devices that constitute the network being optimized
Se: the set of external nodes directly connected to the optimized network; they do not belong to the network space to be optimized and mainly consist of hosts, controllers, etc.
S+: the set of all nodes in the network, S+ = S ∪ Se
L: the set of links inside the network, defined as (s, d) ∈ S × S, where s is the source node and d the destination node
I: the set of ingress links, connecting external nodes to switching devices, defined as (s, d) ∈ Se × S
E: the set of egress links, connecting switching devices to external nodes, defined as (s, d) ∈ S × Se
L+: the set of all links, L+ = L ∪ I ∪ E
N-(s): the set of neighbors of node s from which flow enters s (s acting as sink), s ∈ S
N+(s): the set of neighbors of node s to which flow leaves s (s acting as source), s ∈ S
P_{f,l}: the amount of processed flow of f carried on link l, l ∈ L+, f ∈ F
U_{f,l}: the amount of unprocessed flow of f carried on link l, l ∈ L+, f ∈ F
C_s: the processing capacity of switching device s, s ∈ S
R_s: the reserved (remaining) processing capacity of switching device s, s ∈ S; the optimization goal is to make this value as large as possible
B_l: the capacity of link l, l ∈ L+
The linear programming model of the present embodiment is described in detail below.
Objective function:
min Σ objVarA + objVarB    (3)
Constraint (1) is the optimization goal of the linear programming model, namely maximizing the remaining processing capacity of the network, and the meaning of constraint (2) is to ensure that network flows run along shortest paths in the network as far as possible, because if a flow path contains detours it not only consumes network resources that are already scarce but may also lead to packet loss and similar phenomena.
Bandwidth constraint:
Σ_{f∈F} (P_{f,l} + U_{f,l}) ≤ B_l  for every link l ∈ L+    (4)
Constraint (4) ensures that all processed and unprocessed flow passing over a link l does not exceed the capacity of link l.
Flow balance constraints:
Σ_{u∈N-(s)} (P_{f,(u,s)} + U_{f,(u,s)}) = Σ_{v∈N+(s)} (P_{f,(s,v)} + U_{f,(s,v)})  for every f ∈ F, s ∈ S    (5)
P_{f,(u,s)} = 0  if u ∈ Se    (6)
U_{f,(s,v)} = 0  if v ∈ Se    (7)
Constraint (5) means that, for any flow, the amount entering a switching device s (processed plus unprocessed) must be quantitatively equal to the amount leaving s (processed plus unprocessed); this is the most basic form of flow conservation. When s is an ingress switching device, i.e. a flow generated by a host is injected into the network through s, the flow at that point must be entirely unprocessed, which constraint (6) enforces. When s is an egress switching device, i.e. a completed flow leaves the network through s, there must be no unprocessed flow, i.e. all flow leaving the network must have been processed, which constraint (7) enforces.
For the same flow, between entering some switching device s and flowing out of s, the processed part may only increase or remain unchanged (in case s does not process it) and the unprocessed part may only decrease or remain unchanged; constraints (8) and (9) express these two conservation forms:
Σ_{v∈N+(s)} P_{f,(s,v)} ≥ Σ_{u∈N-(s)} P_{f,(u,s)}  for every f ∈ F, s ∈ S    (8)
Σ_{v∈N+(s)} U_{f,(s,v)} ≤ Σ_{u∈N-(s)} U_{f,(u,s)}  for every f ∈ F, s ∈ S    (9)
By observation it can be seen that, for any flow f, the quantity Σ_{v∈N+(s)} P_{f,(s,v)} - Σ_{u∈N-(s)} P_{f,(u,s)}, equivalently Σ_{u∈N-(s)} U_{f,(u,s)} - Σ_{v∈N+(s)} U_{f,(s,v)}, is exactly the amount of flow f actually processed by switching device s, from which the following switch processing constraint is obtained.
Switch processing constraint:
Σ_{f∈F} Σ_{v∈N+(s)} P_{f,(s,v)} - Σ_{f∈F} Σ_{u∈N-(s)} P_{f,(u,s)} ≤ C_s - R_s  for every s ∈ S    (10)
Constraint (10) specifies that, for any switching device s, the sum of all flow processed by s must not exceed its total processing capacity minus a reserved value of that switching device. This reserved value is closely related to the objective function of the linear optimization model; when traffic is very heavy the reserved value R_s may even be zero, indicating that all of the processing capacity of switching device s has been consumed, which is a very extreme situation. The above problem model abstracts the network topology as a graph: the nodes of the graph are the switching devices and the edges are the network links between them. The model fits different network topology structures, including data center topologies such as FatTree and VL2 as well as various local and wide area topologies.
The experimental results show that flows need not reach a very large scale before this behavior appears. Taking the k = 4 FatTree as an example, once about 10 flows traverse the network, which is equivalent to one flow between each pair of hosts, then according to the linear programming model essentially all flows leave the network along the shortest path from source to sink. This is the effect of objective term (2), and this property is extremely important in practical applications.
Analyzing the remaining processing capacity of the switches under different conditions shows that the model makes the proportion of remaining processing capacity of each switch as equal as possible. Take the k = 4 FatTree as an example, which has 20 switches in total, and suppose every switch has the same processing capacity of 100 units of flow at a time, so the total processing capacity of the network is 2000 units. If 80 units of flow traverse the network at a time and the traffic is roughly uniform, each switch is assigned 4 units of flow, and the remaining processing capacity of each switch is (100 - 4)/100 = 96%. If the processing capacities are not uniform, for example core switches have 60 units, aggregation switches 80 units and edge switches 100 units, the 80 units of flow are still distributed evenly across the switches: each core switch is assigned about 3.5 units, each aggregation switch about 4.8 units and each edge switch about 6 units. The flow assigned to each switch is proportional to its processing capacity: edge switches have the strongest processing capacity and are assigned the most flow, and vice versa, but the final remaining processing capacity of all switches is around 94%, i.e. a balanced state across the whole network is reached. It can be seen that in most cases the linear programming model avoids splitting individual network flows while achieving a balanced allocation of network resources.
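As a quick numeric check of the uniform case described above, the short sketch below computes the per-switch remaining-capacity ratio for a given allocation; it is only an illustration, not part of the claimed method, and reproduces the 96% figure for 20 identical switches sharing 80 units of traffic.

```python
# Illustration only: remaining-capacity ratios for the uniform k = 4 FatTree example.

def remaining_ratios(capacities, assigned):
    """Remaining processing capacity of each switch as a fraction of its total capacity."""
    return [(c - a) / c for c, a in zip(capacities, assigned)]

capacities = [100.0] * 20          # 20 switches with 100 units of capacity each (2000 in total)
assigned = [80.0 / 20] * 20        # 80 units of traffic spread evenly -> 4 units per switch

ratios = remaining_ratios(capacities, assigned)
print(min(ratios), max(ratios))    # 0.96 0.96: every switch keeps 96% of its capacity
```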
The above describes the case of balanced flow distribution, in which the result of the linear programming model can be used directly and the approximate algorithm is no longer needed.
There are two situations in which a flow is split. In the first situation only one flow, between H1 and H5, is processed in the k = 4 FatTree; moreover the two pods involved are otherwise idle and no other flow passes through during this period. In this case the flow is unavoidably split. Consider the example of Fig. 3: H1 and H5 denote network devices (hosts), E1 and E3 are edge switches, A1, A2, A3 and A4 are aggregation switches, and C1, C2, C3 and C4 are core switches. The arrows give the direction of the network flow, and in the label on each edge the number enclosed in "| |" denotes processed flow. For example, the label "|2|+3" on the edge from A1 to C2 means that on link A1-C2 two units of flow have already been processed and three units have not. Observing the links A1-C2-A3, one more processed unit flows out of C2 than flows into it; that one unit was processed by C2.
As shown in Fig. 3, the 10-unit flow is split into four parts that travel on different paths, whose directions are marked with arrows in the figure; they are four shortest paths from H1 to H5. The whole flow is distributed evenly over all switching devices on these four paths, each switching device being assigned one unit. According to the model this is an idealized solution, but in practical applications it is unsatisfactory, because a split flow has to be reassembled, which may cause delay or even packet loss and in turn retransmission, congestion and other problems, and in severe cases can strongly degrade network performance.
In the second situation, shown in Fig. 4 and still assuming that each switch has a processing capacity of 100 units, a large flow of 1000 units travels between H1 and H5. The combined capacity of the switches on any single shortest path cannot handle this large flow: a shortest path contains only five switches, so the sum of their processing capacities is 500 units, far less than what the flow requires. This large flow therefore has to be split. It is again carried on four shortest paths, which together happen to contain 10 switching devices, so each switch is assigned 100 units of flow to process and the processing capacity of every switch is used up.
Thus a flow is split for one of two reasons: one is the need for load balancing, as shown in Fig. 3, which can also be regarded as a side effect of the linear programming model; the other is a resource constraint on a single path, as shown in Fig. 4.
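Before looking at how each situation is handled, note that detecting a split flow in the linear programming solution (the analysis of step 4) is straightforward; the sketch below is a minimal illustration under the assumption that the solution is available as per-link amounts for each flow, and it is not code from the patent.

```python
# Minimal sketch (an assumption, not code from the patent) of the "check for split flows" step:
# given the per-link amounts the LP assigned to one flow, the flow is split whenever some node
# forwards that flow over more than one outgoing link.

from collections import defaultdict

def is_split(link_amounts, eps=1e-9):
    """link_amounts: {(u, v): amount of this flow carried on link u -> v}."""
    outgoing = defaultdict(int)
    for (u, v), amount in link_amounts.items():
        if amount > eps:
            outgoing[u] += 1
    return any(count > 1 for count in outgoing.values())

# A flow kept on the single path H1-E1-A1-C1-A3-E3-H5 is not split:
single = {("H1", "E1"): 10, ("E1", "A1"): 10, ("A1", "C1"): 10,
          ("C1", "A3"): 10, ("A3", "E3"): 10, ("E3", "H5"): 10}
# If A1 sends half of the same flow towards C2 instead, the flow is split:
split = dict(single)
split[("A1", "C1")] = 5
split[("A1", "C2")] = 5
split[("C1", "A3")] = 5
split[("C2", "A3")] = 5
print(is_split(single), is_split(split))   # False True
```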
The first situation is easy to resolve: provide a single path for such a flow and then apply the linear programming model on that path, which yields the optimal solution. Since the processing capacity of the network is in surplus in this case, sacrificing some balance does not affect the overall performance of the network. The result of optimizing the flow of Fig. 3 with the approximate algorithm is shown in Fig. 5.
For the second situation, first remove from the flow space, according to the linear programming result, all flows of the Fig. 4 type and re-optimize the remaining flows, which in general yields an optimal solution; the flows that obtained an optimal solution traverse the network first, and because their scale is generally not very large, processing them does not take much time. Then find for each of the large flows a path in the network such that the sum of the processing capacities of all switches on the path is greater than or equal to the target flow, and apply the linear programming model on that path. The result of optimizing the flow of Fig. 4 with the approximate algorithm is shown in Fig. 6.
The essence of the approximate algorithm is to project the flow set of the whole network onto single paths and to apply the linear programming model on each single path. After the approximate algorithm it can no longer be guaranteed that the target network flow travels along a shortest path, as shown in Fig. 6, but compared with the adverse effects caused by splitting a flow such a loss is still tolerable.
In the approximate algorithm the chosen paths are kept as disjoint as possible. Taking Fig. 7 as an example, suppose there is traffic both from h1 to h3 and from h1 to h5. If the network flow from h1 to h3 selects the path h1-e1-a2-e2-h3, then the flow from h1 to h5 should select h1-e1-a1-c2-a3-e3-h5, avoiding the link e1-a2 as far as possible; this makes the distribution of traffic more even and the remaining processing capacity larger.
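One possible way to realize this near-disjoint path selection, shown below with the networkx library, is to penalize links already used by earlier <source, destination> pairs so that later pairs avoid them whenever an equally short alternative exists; the toy topology and the penalty heuristic are assumptions made for illustration and not necessarily the exact procedure of the patent.

```python
# Sketch (an assumption, not the patented procedure) of choosing near-disjoint shortest paths:
# links already used by earlier pairs get a weight penalty, so later pairs route around them.

import networkx as nx

def near_disjoint_paths(graph, pairs, penalty=10.0):
    used = set()
    paths = {}
    for src, dst in pairs:
        weighted = graph.copy()
        for u, v in weighted.edges():
            weighted[u][v]["weight"] = 1.0 + (penalty if frozenset((u, v)) in used else 0.0)
        path = nx.shortest_path(weighted, src, dst, weight="weight")
        paths[(src, dst)] = path
        used.update(frozenset((u, v)) for u, v in zip(path, path[1:]))
    return paths

# Toy topology loosely following Fig. 7 (hosts h1/h3/h5, edge, aggregation and core switches).
g = nx.Graph()
g.add_edges_from([("h1", "e1"), ("e1", "a1"), ("e1", "a2"), ("a2", "e2"), ("e2", "h3"),
                  ("a1", "c2"), ("c2", "a3"), ("a2", "c1"), ("c1", "a3"),
                  ("a3", "e3"), ("e3", "h5")])
print(near_disjoint_paths(g, [("h1", "h3"), ("h1", "h5")]))
```

On this toy topology the pair (h1, h3) takes h1-e1-a2-e2-h3, and the pair (h1, h5) then avoids the e1-a2 link and takes h1-e1-a1-c2-a3-e3-h5, matching the behavior described for Fig. 7.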
Of course, the invention may also have other embodiments. Without departing from the spirit and essence of the invention, those skilled in the art can make various corresponding changes and modifications according to the invention, but these corresponding changes and modifications shall all fall within the protection scope of the appended claims of the invention. The protection scope of the invention shall therefore be defined by the claims of this application.

Claims (2)

1. A method for allocating resources in a software defined network, characterized in that it involves network devices, a controller and switching devices and comprises the following steps:
Step 1, collect network-wide resource information: the controller first communicates with the switching devices to obtain the overall resource situation of the network, including the processing capacity of the switching devices and link bandwidth information;
Step 2, collect the flow information of the upcoming period: collect the network flow space information requested by users;
Step 3, input the flow space information into a linear programming model and solve it;
Step 31, analyze the result: check whether the solution of step 3 splits any flow; if a flow is split, execute step 32; if no flow is split, go directly to step 4;
Step 32, invoke an approximate algorithm to aggregate each split flow onto a single path;
wherein the linear programming model is installed on the controller and is mainly used to solve for the allocation of central processing unit resources in the software defined network;
and the approximate algorithm determines one shortest path for each pair of network devices in the network, keeping the paths as disjoint as possible, and applies the linear programming model again on the obtained shortest paths to determine the flow allocated to each of the switching devices on them;
Step 4, the controller generates the flow-processing rules of each switching device;
Step 5, install the flow-processing rules on the switching devices.
2. The method for allocating resources according to claim 1, characterized in that: the switching devices are Open vSwitch virtual switching devices supporting OpenFlow; they are divided into ingress switching devices, intermediate switching devices and egress switching devices; and the network devices communicate with one another, and with the switching devices, through the OpenFlow protocol.
CN201610097262.3A 2016-02-22 2016-02-22 A method of distributing resource in software defined network Active CN105553846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610097262.3A CN105553846B (en) 2016-02-22 2016-02-22 A method of distributing resource in software defined network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610097262.3A CN105553846B (en) 2016-02-22 2016-02-22 A method of distributing resource in software defined network

Publications (2)

Publication Number Publication Date
CN105553846A CN105553846A (en) 2016-05-04
CN105553846B true CN105553846B (en) 2019-03-19

Family

ID=55832785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610097262.3A Active CN105553846B (en) 2016-02-22 2016-02-22 A method of distributing resource in software defined network

Country Status (1)

Country Link
CN (1) CN105553846B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107800641A (en) * 2017-12-14 2018-03-13 中国科学技术大学苏州研究院 Prevent from controlling the control method of link congestion in software defined network
CN108809826A (en) * 2018-04-27 2018-11-13 广州西麦科技股份有限公司 A kind of elephant data flow processing method, device, P4 interchangers and medium
CN108809707A (en) * 2018-05-30 2018-11-13 浙江理工大学 A kind of TSN dispatching methods towards real-time application demand
CN111131064B (en) * 2019-12-31 2021-12-28 电子科技大学 Multicast stream scheduling method in data center network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103179046A (en) * 2013-04-15 2013-06-26 昆山天元昌电子有限公司 Data center flow control method and data center flow control system based on openflow
CN104065509A (en) * 2014-07-24 2014-09-24 大连理工大学 SDN multi-controller deployment method for reducing management load overhead
CN104333514A (en) * 2014-11-28 2015-02-04 北京交通大学 Network flow control method, device and system
CN104734954A (en) * 2015-03-27 2015-06-24 华为技术有限公司 Routing determination method and device used for software defined network (SDN)
CN105049279A (en) * 2015-06-19 2015-11-11 国家电网公司 Communication trend flexibility configuration method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7743127B2 (en) * 2002-10-10 2010-06-22 Hewlett-Packard Development Company, L.P. Resource allocation in data centers using models
US9124506B2 (en) * 2013-06-07 2015-09-01 Brocade Communications Systems, Inc. Techniques for end-to-end network bandwidth optimization using software defined networking
US9503374B2 (en) * 2014-01-22 2016-11-22 Futurewei Technologies, Inc. Apparatus for hybrid routing in SDN networks to avoid congestion and achieve good load balancing under fluctuating traffic load

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103179046A (en) * 2013-04-15 2013-06-26 昆山天元昌电子有限公司 Data center flow control method and data center flow control system based on openflow
CN104065509A (en) * 2014-07-24 2014-09-24 大连理工大学 SDN multi-controller deployment method for reducing management load overhead
CN104333514A (en) * 2014-11-28 2015-02-04 北京交通大学 Network flow control method, device and system
CN104734954A (en) * 2015-03-27 2015-06-24 华为技术有限公司 Routing determination method and device used for software defined network (SDN)
CN105049279A (en) * 2015-06-19 2015-11-11 国家电网公司 Communication trend flexibility configuration method and system

Also Published As

Publication number Publication date
CN105553846A (en) 2016-05-04


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant