CN113098789A - SDN-based data center network multipath dynamic load balancing method - Google Patents


Info

Publication number: CN113098789A
Application number: CN202110324811.7A
Authority: CN (China)
Prior art keywords: path, link, flow, bandwidth, rerouting
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN113098789B (granted publication)
Inventors: 朱金鑫, 王珺
Applicant and assignee: Nanjing University of Posts and Telecommunications
Events: application filed by Nanjing University of Posts and Telecommunications; publication of CN113098789A; application granted; publication of CN113098789B

Classifications

    • H04L 47/125: Avoiding congestion; recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 45/302: Route determination based on requested QoS
    • H04L 45/48: Routing tree calculation
    • H04L 45/70: Routing based on monitoring results
    • Y02D 30/50: Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate


Abstract

An SDN-based data center network multi-path dynamic load balancing method comprises the following steps: monitor the network state and acquire real-time device state information; classify flows as small or large; route small flows via ECMP, hashing the flow's 5-tuple to select a forwarding path; route large flows with a dedicated large-flow routing algorithm; compute the average link utilization and link-load variance, and reroute the flow occupying the most bandwidth on the most utilized link onto the path with the largest bottleneck bandwidth. The method combines an initial routing algorithm with a rerouting algorithm to strengthen the load balancing effect; it satisfies the low-latency requirement of small flows and the high-throughput requirement of large flows; a probability-selection-based dynamic routing algorithm for large flows avoids several large flows being assigned to the same path when path information is not updated in time; and an upper limit on reroutes within a single period avoids unnecessary rerouting caused by network fluctuation and reduces controller overhead.

Description

SDN-based data center network multipath dynamic load balancing method
Technical Field
The invention belongs to the field of data center networks, and particularly relates to a data center network multi-path dynamic load balancing method based on an SDN.
Background
With the rapid development of Internet technology, network traffic has grown rapidly: network operators keep increasing the deployment density of servers and storage devices, the nodes and links of data centers grow exponentially, and data centers have gradually become convergence points of network traffic. The continuous growth of data center traffic, together with the different link and quality-of-service requirements of different traffic types, places higher demands on the data center network. Most existing routing algorithms do not comprehensively consider the real-time link state and the characteristics of each flow, so some links in the network become overloaded while others remain idle, causing network load imbalance.
Among current data center load balancing algorithms, ECMP is the most widely used: multiple equal-cost paths are numbered, a hash-modulo operation over the number of equal-cost paths is performed on packet header fields, and each data flow is mapped to the corresponding path. ECMP balances the number of flows, but because it ignores the dynamic network environment, several large flows may be mapped onto the same link; such large-flow collisions aggravate link load and can even cause link congestion and packet loss. ECMP performs well under light network load, but the network becomes unbalanced as the load grows heavy. The LABERIO algorithm takes the variance of the instantaneous bandwidth used across the whole network as the load balancing parameter: once the variance exceeds a set threshold, the SDN controller schedules the flow on the most congested link onto the least loaded path that satisfies the flow's transmission conditions. Although LABERIO addresses the scheduling urgency of some flows, the instantaneous used-bandwidth variance is easily disturbed by bursty traffic, so rerouting is triggered continuously and causes unnecessary overhead. The DLB algorithm is a dynamic load balancing algorithm for fat-tree topology networks: in a fat-tree network, once a flow's path up to the highest-level node is determined, the downlink path is determined accordingly. DLB adopts a greedy strategy, repeatedly selecting as the next hop the far end of the link with the largest residual bandwidth, starting from the source node, until a highest-layer switch is reached. However, the path obtained by DLB is only locally optimal and may cause congestion on parts of such locally optimal paths.
Patent CN106533960A, "A data center network routing method based on the Fat-Tree structure", proposes a congestion control algorithm based on scheduling priority: new flows are routed with a DLB algorithm based on link bandwidth and the number of large flows; the network is then monitored, large flows found on congested links are rerouted, and they are rescheduled in descending order by a DR algorithm based on deadlines and switch queue lengths until the link is no longer congested. The method reduces the load on congested links to a certain extent and improves link utilization. However, the DLB algorithm it adopts for new flows selects the next hop greedily, so most selected paths are only locally optimal, which easily causes local link congestion; the DR algorithm then applied does consider the urgency of scheduling large flows on local links and can relieve congestion to some extent, but an unrestricted DR algorithm may cause a large amount of extra overhead, and the low-latency need of small flows is not considered. Therefore, optimizing the routing of new flows and combining it with a rerouting mechanism can be expected to achieve a better load balancing effect.
Disclosure of Invention
To solve the above technical problems, the invention combines an initial routing algorithm with a rerouting algorithm to provide an SDN (Software-Defined Network)-based data center network multi-path dynamic load balancing method (MTDLB), improving on traditional data center load balancing methods.
A data center network multi-path dynamic load balancing method based on an SDN comprises the following steps:
Step 1: monitor the network state and acquire real-time state information of each device in the network;
Step 2: classify flows by size; the edge switch judges flow size: when the edge switch receives a flow from a host, a flow below 10% of the link capacity is defined as a small flow, and a flow above 10% of the link capacity as a large flow;
Step 3: execute an initial routing algorithm; after the flows are classified by size in step 2, route small flows via ECMP, hashing the flow's 5-tuple to select a forwarding path; for large flows, adopt the large-flow routing algorithm;
Step 4: execute a rerouting algorithm; compute the average link utilization and the link-load variance; when the average link utilization exceeds 30% and the variance of the links' used bandwidth exceeds the threshold o_th, reroute the flow occupying the most bandwidth on the link with the highest utilization onto the path with the largest bottleneck bandwidth, with the number of reroutes capped at 2;
Step 5: repeat steps 1 to 4 in each monitoring period until the monitoring period ends.
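Steps 1 to 5 above can be sketched as a per-period control loop. In the sketch below, only the 10% size threshold and the small-flow/large-flow split come from the method itself; the helper names and routing labels are illustrative:

```python
# Minimal sketch of one monitoring period of the MTDLB method.
# Helper names are illustrative, not from the patent text.

ELEPHANT_THRESHOLD = 0.10  # flows above 10% of link capacity are "large"

def classify_flow(flow_rate, link_capacity):
    """Step 2: split flows into small and large by the 10% threshold."""
    return "large" if flow_rate > ELEPHANT_THRESHOLD * link_capacity else "small"

def run_period(flows, link_capacity):
    """Steps 2-3: choose a routing scheme for each new flow.
    Step 4 (rerouting) runs separately on the monitored link state."""
    decisions = {}
    for name, rate in flows.items():
        kind = classify_flow(rate, link_capacity)
        # small flows -> ECMP hash; large flows -> probability-based routing
        decisions[name] = "ecmp" if kind == "small" else "probabilistic"
    return decisions
```
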
Furthermore, the data center network adopts a fat tree topology as the network model; the fat tree topology comprises a core layer, an aggregation layer, and an edge layer, each consisting of switches; aggregation and edge switches form a number of Pods, and each aggregation switch in a Pod is connected to the different edge switches of the same Pod; k denotes the number of Pods, and each Pod has k/2 aggregation switches and k/2 edge switches; the core layer has k²/4 core switches, the aggregation layer has k²/2 aggregation switches, and the edge layer has k²/2 edge switches and k³/4 hosts; in addition, each edge switch is connected to k/2 hosts.
Further, in step 3, the large-flow routing algorithm adopted for large flows in the initial routing stage computes the weight w of each of the multiple available paths from the link's remaining available bandwidth, the link capacity, the bandwidth required by the large flow, and the number of links on the path, and then selects the forwarding path by weight-based probability; the specific steps are:
Step 3-1: find all available paths from the large flow's source to its destination;
Step 3-2: compute the link bandwidth utilization and the bottleneck bandwidth of the k-th path, derive from them the residual link utilization of the k-th path, combine these into the path satisfaction, and finally obtain the path's weight from the path satisfaction;
Step 3-3: obtain the probability of selecting a path as the ratio of the k-th path's weight to the sum of all available paths' weights, normalize the probabilities so that each path corresponds to a sub-interval of [0, 1), generate a random number on [0, 1), and select as the flow's transmission path the path whose sub-interval contains the random number.
Further, in step 3-1, for the fat tree topology, the set of all paths from source to destination is denoted J, J = {j_1, j_2, j_3, ..., j_k, ...}, where j_k denotes the k-th path.
Further, in step 3-2, the link bandwidth utilization U_ki is the ratio of the used link bandwidth to the link capacity:

U_ki = S_ki / C    (1)

where S_ki denotes the used bandwidth of the i-th link in the k-th path, C denotes the link capacity, and i indexes the links;

the bottleneck bandwidth of the k-th path is determined by the maximum used bandwidth among its links, denoted B_k:

B_k = max{S_k1, S_k2, ..., S_kn}    (2)

where n is the number of links in the path;

the residual link utilization of the k-th path, U_s, is the ratio of the minimum residual bandwidth after the flow is placed on the path to the link capacity:

U_s = ((C - B_k) - f) / C    (3)

where f denotes the flow bandwidth;
then the path satisfaction S is obtained:

S = 1, if U_s ≤ 10%;  S = 3, if 10% < U_s ≤ 50%;  S = 9, if U_s > 50%    (4)
finally, the weight of the path is defined:

W_k = S_k / n_k    (5)

where n_k is the number of links of the k-th path.
further, in step 3-3, n available paths are set between the source node and the destination node, and the weight value of the kth path is WkThe probability of selecting the k-th path is PkThe value is the ratio of the weighted value of the kth path to the sum of all available path weighted values, and the calculation formula is as follows:
Figure BDA0002994155620000053
For each flow, a path is randomly selected with probability P_k; since the probabilities sum to 1, k sub-intervals are designed for the k paths, the k-th path corresponding to the interval shown in formula (7):

[ Σ_{i=1}^{k-1} P_i , Σ_{i=1}^{k} P_i )    (7)
These sub-intervals exactly cover [0, 1); a random number on [0, 1) is generated, and the path whose sub-interval contains it is selected to forward the flow, completing the selection of the flow's transmission path.
Further, in step 4, the average link utilization and the variance of the used bandwidth serve as the trigger conditions of the rerouting mechanism, where:

Load(t) = (1/M) Σ_{i=1}^{M} load_i(t)    (8)

U(t) = Load(t) / C    (9)

o(t) = (1/M) Σ_{i=1}^{M} (load_i(t) - Load(t))²    (10)

where load_i(t) denotes the used bandwidth of link i at time t, Load(t) the average used bandwidth over all links, U(t) the average link utilization, and M the total number of links in the network. The larger o(t) is, the more dispersed the links' used bandwidths, the more uneven the link utilizations, and the more unbalanced the network load. In addition, a counter CW with limit 2 is maintained: rerouting terminates once the number of consecutive reroutes within one monitoring period exceeds 2.
Further, the specific process of step 4 is as follows:
Step 4-1: initialize the counter CW = 0;
Step 4-2: compute the links' average used bandwidth Load(t) and used-bandwidth variance o(t) by formulas (8) and (10), and the average link utilization U(t) by formula (9); if U(t) ≥ 30% and o(t) exceeds the threshold o_th, go to step 4-3, otherwise end rerouting;
Step 4-3: compare the loads load_l(t) of all links, and find the most heavily loaded flow F_max on the link with the largest load_l(t);
Step 4-4: select the path with the largest bottleneck bandwidth as the rerouting path for this flow, reroute F_max onto it, and increment CW by 1;
Step 4-5: if CW > 2, end rerouting; otherwise go to step 4-2.
Compared with the prior art, the invention has the following beneficial effects:
(1) Compared with traditional load balancing algorithms, the invention combines an initial routing algorithm with a rerouting algorithm to strengthen the load balancing effect.
(2) The invention adopts different algorithms for large and small flows, satisfying the low-latency requirement of small flows and the high-throughput requirement of large flows. A dynamic routing algorithm based on probability selection is adopted for large flows, avoiding several large flows being assigned to the same path when path information is not updated in time.
(3) An upper limit on reroutes within a single period is set, avoiding unnecessary rerouting caused by network fluctuation and reducing controller overhead.
Drawings
Fig. 1 shows a fat tree topology network (k = 4) according to an embodiment of the present invention.
Fig. 2 is a flow chart of an initial routing algorithm in an embodiment of the present invention.
Fig. 3 is a flowchart of a rerouting algorithm in an embodiment of the present invention.
Detailed Description
The technical method of the present invention is further described in detail with reference to the accompanying drawings.
The invention adopts a Fat-Tree topology as the network model, as shown in Fig. 1. The fat tree topology contains 3 layers: a core layer, an aggregation layer, and an edge layer. Aggregation and edge switches form a number of Pods, and each aggregation switch within a Pod is connected to the different edge switches of the same Pod. k denotes the number of Pods; each Pod has k/2 aggregation switches and k/2 edge switches, and each edge switch is connected to k/2 hosts, for a total of k²/4 core switches, k²/2 aggregation switches, k²/2 edge switches, and k³/4 hosts. In the present invention k = 4, i.e., there are 4 core switches, 8 aggregation switches, 8 edge switches, and 16 hosts.
In the embodiment of the invention, J denotes the set of all paths from source to destination, J = {j_1, j_2, j_3, ..., j_k, ...}, with j_k the k-th path; i indexes links, f denotes the flow bandwidth, C the link capacity, S_ki the used bandwidth of the i-th link in the k-th path, and n the number of links in a path (which may differ between paths). The link bandwidth utilization U_ki is the ratio of the used link bandwidth to the link capacity:

U_ki = S_ki / C    (1)

The bottleneck of the k-th path is given by the maximum used bandwidth among its links, denoted B_k:

B_k = max{S_k1, S_k2, ..., S_kn}    (2)
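Formulas (1) and (2) translate directly into code; a minimal sketch with illustrative function names:

```python
def link_utilization(used, capacity):
    """Formula (1): U_ki = S_ki / C, used bandwidth over link capacity."""
    return used / capacity

def max_used_bandwidth(used_per_link):
    """Formula (2): B_k = max used bandwidth among the path's links.
    The residual bottleneck bandwidth of the path is then C - B_k."""
    return max(used_per_link)
```
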
The embodiment of the invention executes in 2 stages:
the first stage executes an initial routing algorithm, and the specific steps are shown in fig. 2. First, network state information, such as link bandwidth used, switch forwarding rate, etc., is obtained. Secondly, when the edge switch detects a new flow, detecting whether the size of the flow exceeds 10% of the link capacity, if not, judging the flow as a small flow, and routing the small flow by adopting an ECMP (equal cost performance) mode; and if the size of the flow exceeds 10% of the link capacity, judging the flow as a large flow, and adopting a large flow routing algorithm based on probability selection.
Introduction to the ECMP algorithm:
in a fat-tree topology network with multipath transmission characteristics, each flow has multiple available transmission paths, and the ECMP algorithm opens up a buffer space for all flows to store the multiple available paths. When the small flow reaches the edge switch, the ECMP algorithm takes the quintuple of the data flow as the input value of the hash function to perform modular calculation on the number of available paths to obtain an output value, and then forwarding is completed according to the preset mapping relation between the output value of the hash function and the equivalent path. As shown in fig. 1, taking the data flow from host H1 to host H5 as an example, the flow has 4 optimal equivalent paths according to the ECMP algorithm, which are (S1, S9, S17, S11, S3), (S1, S9, S18, S11, S3), (S1, S10, S19, S12, S3) and (S1, S10, S20, S12, S3), respectively. And then, for each stream, calculating to obtain a modulus value, wherein 0,1, 2 and 3 respectively correspond to 4 equivalent paths, and the corresponding paths are selected for transmission.
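The hash-modulo mapping for small flows can be sketched as follows. The four paths reproduce the H1-to-H5 example above; the use of SHA-256 is an illustrative stand-in, since the text does not fix a particular hash function:

```python
import hashlib

# The four equal-cost paths of the H1 -> H5 example (switch names from Fig. 1).
PATHS = [
    ("S1", "S9",  "S17", "S11", "S3"),
    ("S1", "S9",  "S18", "S11", "S3"),
    ("S1", "S10", "S19", "S12", "S3"),
    ("S1", "S10", "S20", "S12", "S3"),
]

def ecmp_path(five_tuple, paths=PATHS):
    """Hash the flow's 5-tuple and take it modulo the number of
    equal-cost paths, giving a deterministic per-flow mapping."""
    digest = hashlib.sha256(repr(five_tuple).encode()).hexdigest()
    return paths[int(digest, 16) % len(paths)]
```

Because the hash is deterministic, all packets of one flow follow the same path, while different flows spread across the four paths.
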
The large-flow routing algorithm finds the multiple reachable paths in the fat tree topology, with its multipath property, from the flow's source and destination addresses, calculates each path's bottleneck according to formula (2), and selects as available paths those whose residual bottleneck bandwidth exceeds the flow's transmission bandwidth. The next step is to calculate the path weights w.
First, the residual link utilization U_s of the k-th path is defined as the ratio of the minimum residual bandwidth after the flow is placed on the path to the link capacity:

U_s = ((C - B_k) - f) / C    (3)
Then, the path satisfaction degree S is obtained:
Figure BDA0002994155620000091
the path satisfaction degree S represents the satisfaction degree of the flow to the available path, the larger the residual bottleneck bandwidth of the path is, the more the residual link utilization rate U issThe greater the path satisfaction S, the greater the flow satisfaction for the path, indicating that the greater the flow satisfaction for the path, the more likely it is to select the path transport flow. In the method, a path is divided into 3 states according to the utilization rate of a residual link after the flow is transmitted, when Us is less than or equal to 10%, the path is judged to be in a heavy load state, the large flow is not suitable to be transmitted, and the weight is 1; when the Us is more than 10% and less than or equal to 50%, the path is judged to be in a medium load state, a large flow can be transmitted, and the weight is 3 at the moment; when Us is more than 50%, the path is judged to be in a light load state, and the path is more suitable for transmitting a large flow, and the weight is 9.
Finally, the weight of the path is defined:
Figure BDA0002994155620000101
the path weight comprehensively considers the number of links, the bottleneck bandwidth and the stream bandwidth, and the purpose of load balancing is achieved as far as possible. The bottleneck link bandwidth is considered because the bottleneck link is more faithful to the path available bandwidth. In the flow transmission process, the larger the utilization rate of the residual link of the path is, the larger the bottleneck bandwidth is, and the flow transmitted by using the path can effectively reduce the load balancing variance, so that the network load is more balanced, and the small flow is not easy to block, and the time delay of the small flow is influenced. Therefore, in the invention, the path weight is increased along with the increase of the utilization rate of the residual links, and in addition, the addition of the number of the links considers the transmission delay of the flow and improves the transmission efficiency.
After the weights of the available paths are computed, the probability of each available path being selected as the flow's transmission path is calculated using the weights as the reference factors for probability selection, and the flow is finally transmitted over the path chosen by probability.
The probability-selection algorithm is a common flow-forwarding method; it largely reduces the influence of link-state information that is not updated in time, and thus achieves better load balancing.
Suppose there are n available paths between the source and destination nodes, let W_k be the weight of the k-th path, and let P_k be the probability of selecting the k-th path, namely the ratio of the k-th path's weight to the sum of all available paths' weights, calculated as:

P_k = W_k / Σ_{i=1}^{n} W_i    (6)
For each flow, a path is randomly selected with probability P_k, as follows: since the probabilities sum to 1, k sub-intervals are designed for the k paths, the k-th path corresponding to the interval shown in formula (7):

[ Σ_{i=1}^{k-1} P_i , Σ_{i=1}^{k} P_i )    (7)

These sub-intervals exactly cover [0, 1). A random number on [0, 1) is generated, and the path whose sub-interval contains it is selected to forward the flow.
A high-probability path occupies a larger sub-interval of the probability space, so the generated random number is more likely to fall on it.
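The probability selection described above can be sketched as follows (a minimal illustration; the `rng` parameter is only there to make the sampling testable):

```python
import random

def path_probabilities(weights):
    """Formula (6): P_k = W_k / sum of all weights."""
    total = sum(weights)
    return [w / total for w in weights]

def pick_path(weights, rng=random.random):
    """Formula (7): map [0, 1) onto cumulative sub-intervals and
    return the index of the path whose sub-interval contains r."""
    probs = path_probabilities(weights)
    r = rng()
    cumulative = 0.0
    for k, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return k
    return len(probs) - 1  # guard against floating-point round-off
```
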
And when the first stage is finished, entering a second stage: a rerouting phase.
The detailed rerouting steps are shown in Fig. 3. First, network state information is obtained, such as used link bandwidth and switch forwarding rate. From the used-bandwidth information of all links in the network, the average used bandwidth is computed by formula (8), the average link utilization by formula (9), and the variance of the links' used bandwidth by formula (10). Once these three quantities are obtained, the rerouting decision is triggered when the average link utilization reaches 30% and the used-bandwidth variance exceeds the threshold o_th: the rerouting mechanism finds the most congested link, i.e., the link with the highest utilization, and reschedules the flow occupying the most bandwidth on it; all available paths of that flow are computed as in the first stage, their bottleneck bandwidths are calculated, and the path with the largest bottleneck bandwidth is selected as the flow's new path. Meanwhile, the upper limit CW on the number of reroutes is set to 2: within one period, large-flow rescheduling executes at most 2 times, avoiding the waste of network resources from continuous rerouting.
Load(t) = (1/M) Σ_{l=1}^{M} load_l(t)    (8)

U(t) = Load(t) / C    (9)

o(t) = (1/M) Σ_{l=1}^{M} (load_l(t) - Load(t))²    (10)

where load_l(t) denotes the used bandwidth of link l at time t, Load(t) the average used bandwidth over all links, U(t) the average link utilization, and M the total number of links in the network. The larger o(t) is, the more dispersed the links' used bandwidths, the more uneven the link utilizations, and the more unbalanced the network load. When the network is lightly loaded, this imbalance has little effect on performance, but as the network load grows, load imbalance directly affects network throughput and traffic transmission delay. The process is as follows:
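Formulas (8) to (10) compute directly from the per-link load samples; a minimal sketch with illustrative function names:

```python
def average_load(loads):
    """Formula (8): Load(t) = (1/M) * sum of load_l(t) over all M links."""
    return sum(loads) / len(loads)

def average_utilization(loads, capacity):
    """Formula (9): U(t) = Load(t) / C."""
    return average_load(loads) / capacity

def load_variance(loads):
    """Formula (10): o(t) = (1/M) * sum of (load_l(t) - Load(t))^2."""
    mean = average_load(loads)
    return sum((l - mean) ** 2 for l in loads) / len(loads)
```
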
Step 1: initialize CW = 0;
Step 2: compute the links' average used bandwidth Load(t) and used-bandwidth variance o(t) by formulas (8) and (10), and the average link utilization U(t) by formula (9). If U(t) ≥ 30% and o(t) > o_th (the value of o_th is obtained empirically), go to step 3; otherwise end rerouting;
Step 3: compare the loads load_l(t) of all links, and find the most heavily loaded flow F_max on the link with the largest load_l(t);
Step 4: select the path with the largest bottleneck bandwidth as the rerouting path for this flow, reroute F_max onto it, and increment CW by 1;
Step 5: if CW > 2, end rerouting; otherwise go to step 2.
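The rerouting decision can be sketched as follows. The data structures are hypothetical, and the CW check is simplified so that at most 2 reroutes occur per period, matching the stated upper limit:

```python
def should_reroute(avg_util, variance, variance_threshold, cw, cw_limit=2):
    """Trigger check: both thresholds exceeded and fewer than
    cw_limit reroutes already done in this monitoring period."""
    return avg_util >= 0.30 and variance > variance_threshold and cw < cw_limit

def reroute_once(link_loads, flows_on_link, candidate_paths):
    """One reroute step: pick the busiest link, the biggest flow on it,
    and the candidate path with the largest residual bottleneck bandwidth."""
    busiest = max(link_loads, key=link_loads.get)
    flow = max(flows_on_link[busiest], key=flows_on_link[busiest].get)
    best_path = max(candidate_paths, key=candidate_paths.get)
    return flow, best_path
```
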
The above description is only a preferred embodiment of the present invention, and the scope of the present invention is not limited to the above embodiment, but equivalent modifications or changes made by those skilled in the art according to the present disclosure should be included in the scope of the present invention as set forth in the appended claims.

Claims (8)

1. An SDN-based data center network multi-path dynamic load balancing method, characterized by comprising the following steps:
Step 1: monitor the network state and acquire real-time state information of each device in the network;
Step 2: classify flows by size; the edge switch judges flow size: when the edge switch receives a flow from a host, a flow below 10% of the link capacity is defined as a small flow, and a flow above 10% of the link capacity as a large flow;
Step 3: execute an initial routing algorithm; after the flows are classified by size in step 2, route small flows via ECMP, hashing the flow's 5-tuple to select a forwarding path; for large flows, adopt the large-flow routing algorithm;
Step 4: execute a rerouting algorithm; compute the average link utilization and the link-load variance; when the average link utilization exceeds 30% and the variance of the links' used bandwidth exceeds the threshold o_th, reroute the flow occupying the most bandwidth on the link with the highest utilization onto the path with the largest bottleneck bandwidth, with the number of reroutes capped at 2;
Step 5: repeat steps 1 to 4 in each monitoring period until the monitoring period ends.
2. The SDN-based data center network multi-path dynamic load balancing method of claim 1, wherein: the data center network adopts a fat tree topology as the network model; the fat tree topology comprises a core layer, an aggregation layer, and an edge layer, each consisting of switches; aggregation and edge switches form a number of Pods, and each aggregation switch in a Pod is connected to the different edge switches of the same Pod; k denotes the number of Pods, and each Pod has k/2 aggregation switches and k/2 edge switches; the core layer has k²/4 core switches, the aggregation layer has k²/2 aggregation switches, and the edge layer has k²/2 edge switches and k³/4 hosts; in addition, each edge switch is connected to k/2 hosts.
3. The SDN-based data center network multi-path dynamic load balancing method of claim 1, wherein: in step 3, the large-flow routing algorithm adopted in the initial routing stage computes the weight w of each of the multiple available paths from the link's remaining available bandwidth, the link capacity, the bandwidth required by the large flow, and the number of links on the path, and then selects the forwarding path by weight-based probability; the specific steps are:
Step 3-1: find all available paths from the large flow's source to its destination;
Step 3-2: compute the link bandwidth utilization and the bottleneck bandwidth of the k-th path, derive from them the residual link utilization of the k-th path, combine these into the path satisfaction, and finally obtain the path's weight from the path satisfaction;
Step 3-3: obtain the probability of selecting a path as the ratio of the k-th path's weight to the sum of all available paths' weights, normalize the probabilities so that each path corresponds to a sub-interval of [0, 1), generate a random number on [0, 1), and select as the flow's transmission path the path whose sub-interval contains the random number.
4. The SDN-based data center network multi-path dynamic load balancing method of claim 3, wherein: in step 3-1, for the fat-tree topology, the set of all paths from source to destination is denoted J = {j_1, j_2, j_3, ..., j_k, ...}, where j_k denotes the k-th path.
5. The SDN-based data center network multi-path dynamic load balancing method of claim 3, wherein: in step 3-2, the link bandwidth utilization U_ki is the ratio of the used link bandwidth to the link capacity:

U_ki = S_ki / C (1)

where S_ki is the used bandwidth of the i-th link on the k-th path, C is the link capacity, and i indexes the links;

the bottleneck bandwidth B_k of the k-th path is the maximum used bandwidth over its links:

B_k = max{S_k1, S_k2, ..., S_kn} (2)

where n is the number of links on the path;

the remaining-link utilization U_s of the k-th path is the ratio of the minimum remaining bandwidth after the flow is placed to the link capacity:

U_s = ((C - B_k) - f) / C (3)

where f is the bandwidth of the flow;

the path satisfaction S is then obtained (equation (4) is rendered only as an image in the original publication);

finally, the path weight is defined (equation (5) is rendered only as an image in the original publication).
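Equations (1)-(3) can be checked with a short sketch (illustrative only, not part of the claims; `used` holds the used bandwidths S_ki of the links on the k-th path):

```python
def path_metrics(used, C, f):
    """Per-link utilization (Eq. 1), path bottleneck bandwidth (Eq. 2),
    and remaining-link utilization after placing a flow of bandwidth f
    (Eq. 3)."""
    U = [s / C for s in used]        # Eq. (1): U_ki = S_ki / C
    B = max(used)                    # Eq. (2): B_k = max over the path's links
    Us = ((C - B) - f) / C           # Eq. (3)
    return U, B, Us
```

For used bandwidths of [20, 50, 30] on links of capacity 100 and a flow of bandwidth 10, the bottleneck B_k is 50 and U_s = 0.4.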
6. The SDN-based data center network multi-path dynamic load balancing method of claim 3, wherein: in step 3-3, with n available paths between the source and destination nodes and the weight of the k-th path being W_k, the probability P_k of selecting the k-th path is the ratio of its weight to the sum of all available path weights:

P_k = W_k / (W_1 + W_2 + ... + W_n) (6)

for each flow, a path is selected at random according to P_k; since the probabilities sum to 1, the n paths are mapped to n consecutive subintervals of [0, 1), the k-th path corresponding to:

[ P_1 + ... + P_{k-1} , P_1 + ... + P_k ) (7)

which together exactly cover [0, 1); a random number in [0, 1) is then generated, and the path whose subinterval contains it is chosen for forwarding the flow, completing the selection of the flow's transmission path.
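The probabilistic selection of claim 6 amounts to weighted random choice via cumulative subintervals of [0, 1). A sketch (illustrative, not the claimed implementation):

```python
import random

def pick_path(weights, rng=random):
    """Normalize weights into probabilities (Eq. 6), lay them out as
    consecutive subintervals of [0, 1) (Eq. 7), and return the index of
    the subinterval a uniform random number falls into."""
    total = sum(weights)
    r = rng.random()
    cumulative = 0.0
    for k, w in enumerate(weights):
        cumulative += w / total      # right endpoint of the k-th subinterval
        if r < cumulative:
            return k
    return len(weights) - 1          # guard against floating-point rounding
```

Paths with larger weights occupy wider subintervals and are therefore selected more often, which is exactly the biasing the claim describes.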
7. The SDN-based data center network multi-path dynamic load balancing method of claim 1, wherein: in step 4, the average link utilization and the used-bandwidth variance serve as the trigger conditions of the rerouting mechanism, where:

Load(t) = (1/M) · Σ_i load_i(t) (8)

U(t) = Load(t) / C (9)

o(t) = (1/M) · Σ_i (load_i(t) − Load(t))² (10)

where load_i(t) is the used bandwidth of link i at time t, Load(t) is the average used bandwidth of the links, U(t) is the average link utilization, and M is the total number of links in the network; the larger o(t) is, the more dispersed the links' used bandwidths, the more uneven the link utilizations, and the more unbalanced the network load; in addition, an upper limit of 2 is set on the reroute counter CW, and rerouting terminates once the number of consecutive reroutes within one monitoring period exceeds 2.
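Equations (8)-(10) and the trigger test can be sketched as follows (illustrative only; the variance threshold appears only as an image in the original publication, so it is left as a parameter here):

```python
def reroute_trigger(loads, C, util_threshold=0.30, var_threshold=float("inf")):
    """Average used bandwidth (Eq. 8), average link utilization (Eq. 9),
    and used-bandwidth variance (Eq. 10); rerouting triggers when the
    average utilization reaches 30% and the variance exceeds its threshold."""
    M = len(loads)
    avg = sum(loads) / M                               # Eq. (8)
    U = avg / C                                        # Eq. (9)
    var = sum((l - avg) ** 2 for l in loads) / M       # Eq. (10)
    triggered = U >= util_threshold and var > var_threshold
    return U, var, triggered
```

With links loaded [80, 20] on capacity 100, U(t) = 0.5 and o(t) = 900, so a variance threshold of 100 would fire the trigger; a perfectly even [40, 40] load has zero variance and never fires.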
8. The SDN-based data center network multi-path dynamic load balancing method of claim 7, wherein: the specific process of step 4 is as follows:
Step 4-1: initialize the counter CW to 0;
Step 4-2: compute the network links' average used bandwidth Load(t) and used-bandwidth variance o(t) via formulas (8) and (10), and the average link utilization U(t) via formula (9); when U(t) ≥ 30% and o(t) exceeds its threshold (the condition is rendered only as an image in the original publication), go to step 4-3; otherwise end the rerouting;
Step 4-3: compare the utilizations load_l(t) of all links and, on the link with the largest load_l(t), find the flow F_max occupying the most bandwidth;
Step 4-4: select the path with the largest bottleneck bandwidth as the flow's rerouting path, reroute F_max onto it, and increment CW by 1;
Step 4-5: if CW is greater than 2, end the rerouting; otherwise go to step 4-2.
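Steps 4-1 through 4-5 form a bounded loop. The sketch below uses hypothetical callbacks (`check_trigger`, `pick_flow`, `reroute`) standing in for the controller's monitoring and flow-table operations; it is not the claimed implementation:

```python
def reroute_loop(check_trigger, pick_flow, reroute, max_rounds=2):
    """Step 4-1: CW = 0; Step 4-2: evaluate the trigger condition;
    Steps 4-3/4-4: pick the heaviest flow on the busiest link and move it
    to the max-bottleneck path; Step 4-5: cap consecutive reroutes."""
    cw = 0
    while check_trigger():
        flow = pick_flow()
        reroute(flow)
        cw += 1
        if cw >= max_rounds:      # claim 1 caps reroutes at 2 per period
            break
    return cw
```

The cap keeps a persistently imbalanced network from thrashing: even if the trigger keeps firing, at most two flows are moved before the next monitoring period re-measures the links.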
CN202110324811.7A 2021-03-26 2021-03-26 SDN-based data center network multipath dynamic load balancing method Active CN113098789B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110324811.7A CN113098789B (en) 2021-03-26 2021-03-26 SDN-based data center network multipath dynamic load balancing method


Publications (2)

Publication Number Publication Date
CN113098789A true CN113098789A (en) 2021-07-09
CN113098789B CN113098789B (en) 2023-05-02

Family

ID=76669785



Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107809393A (en) * 2017-12-14 2018-03-16 郑州云海信息技术有限公司 A kind of link load balancing algorithm and device based on SDN
CN109547340A (en) * 2018-12-28 2019-03-29 西安电子科技大学 SDN data center network jamming control method based on heavy-route


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113746734A (en) * 2021-07-30 2021-12-03 苏州浪潮智能科技有限公司 Flow forwarding method, device, equipment and medium
CN113938434A (en) * 2021-10-12 2022-01-14 上海交通大学 Large-scale high-performance RoCEv2 network construction method and system
CN114448899A (en) * 2022-01-20 2022-05-06 天津大学 Method for balancing network load of data center
CN115134304A (en) * 2022-06-27 2022-09-30 长沙理工大学 Self-adaptive load balancing method for avoiding data packet disorder in cloud computing data center
CN115134304B (en) * 2022-06-27 2023-10-03 长沙理工大学 Self-adaptive load balancing method for avoiding data packet disorder of cloud computing data center
CN115396357A (en) * 2022-07-07 2022-11-25 长沙理工大学 Traffic load balancing method and system in data center network
CN115396357B (en) * 2022-07-07 2023-10-20 长沙理工大学 Traffic load balancing method and system in data center network
CN116032829A (en) * 2023-03-24 2023-04-28 广东省电信规划设计院有限公司 SDN network data stream transmission control method and device
CN116032829B (en) * 2023-03-24 2023-07-14 广东省电信规划设计院有限公司 SDN network data stream transmission control method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant