CN113098789B - SDN-based data center network multipath dynamic load balancing method - Google Patents

SDN-based data center network multipath dynamic load balancing method

Info

Publication number
CN113098789B
CN113098789B
Authority
CN
China
Prior art keywords
path
link
bandwidth
rerouting
flow
Prior art date
Legal status
Active
Application number
CN202110324811.7A
Other languages
Chinese (zh)
Other versions
CN113098789A (en)
Inventor
朱金鑫
王珺
Current Assignee
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN202110324811.7A
Publication of CN113098789A
Application granted
Publication of CN113098789B


Classifications

    • H04L 47/125 — Traffic control in data switching networks; flow control and congestion control; avoiding or recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 45/302 — Routing or path finding of packets; route determination based on requested QoS
    • H04L 45/48 — Routing or path finding of packets; routing tree calculation
    • H04L 45/70 — Routing or path finding of packets; routing based on monitoring results
    • Y02D 30/50 — Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate


Abstract

An SDN-based data center network multipath dynamic load balancing method comprises the following steps: monitoring the network state and acquiring real-time state information of the devices; classifying flows by size; for small flows, routing by ECMP, performing a hash operation on the 10-tuple of each flow to select a forwarding path; for large flows, adopting a dedicated large-flow routing algorithm; and calculating the average link utilization and the link load variance, then rerouting the flow occupying the most bandwidth on the most utilized link to the path with the largest bottleneck bandwidth. The method combines initial routing with a rerouting algorithm to strengthen the load balancing effect; it meets the low-delay requirement of small flows and the high-throughput requirement of large flows; a dynamic routing algorithm based on probabilistic selection is adopted for large flows, avoiding large flows being placed on the same path because path information is not updated in time; and an upper limit on the number of reroutes per period is set, avoiding unnecessary rerouting caused by network fluctuation and reducing controller overhead.

Description

SDN-based data center network multipath dynamic load balancing method
Technical Field
The invention belongs to the field of data center networks, and particularly relates to an SDN-based data center network multipath dynamic load balancing method.
Background
With the rapid development of internet technology, network traffic is growing quickly: network operators keep increasing the deployment density of servers and storage devices, data center nodes and links grow exponentially, and data centers have gradually become the convergence points of network traffic. The growing volume and diversity of data center traffic place higher demands on the links and on the quality of service of the data center network, yet most existing routing algorithms do not comprehensively consider the real-time state of the links and the characteristics of each flow, so that some links in the network are overloaded while others remain idle, leaving the network load unbalanced.
Among current data center load balancing algorithms, ECMP is the most widely used: the equal-cost paths are first numbered, a hash-modulo operation is performed on the packet header fields over the number of equal-cost paths, and each data flow is finally mapped to the corresponding path. ECMP balances the number of flows, but because it ignores the dynamic network environment, large flows may be mapped onto the same link, causing large-flow collisions, aggravating the link load, and even causing link congestion and packet loss. ECMP performs well when the network load is light, but as the load grows the network becomes unbalanced. The LABERIO algorithm takes the instantaneous used-bandwidth variance of the whole network as its load balancing parameter: once the variance exceeds a set threshold, the SDN controller schedules the flow on the most congested link in the network onto the least-loaded path that satisfies the flow's transmission conditions. LABERIO addresses the urgency of scheduling certain flows, but the instantaneous variance is easily disturbed by bursty traffic, so rerouting is triggered continually, causing unnecessary overhead. The DLB algorithm is a dynamic load balancing algorithm for fat tree topology networks: in a fat tree, once the upward path of a flow to a top-layer node is determined, the downward path is determined accordingly. DLB follows a greedy strategy, starting from the source node and repeatedly choosing as the next hop the far end of the link with the largest remaining bandwidth, until a top-layer switch is reached. However, the path obtained by DLB is only locally optimal, which may congest some of these locally optimal paths.
Patent CN106533960A proposes a congestion control algorithm based on scheduling priority: new flows are routed with a DLB algorithm based on link bandwidth and flow count, the network is monitored, large flows found on congested links are rerouted, and the large flows are rescheduled from largest to smallest by a DR algorithm based on deadlines and switch queue lengths until the link is no longer congested. The method reduces the load on congested links to some extent and improves link utilization. However, because of the greedy next-hop choice of the DLB algorithm used for new flows, the selected paths are mostly locally optimal, easily causing local link congestion; the DR algorithm then applied does consider the urgency of scheduling large flows on the congested link and can relieve congestion to some extent, but an unrestricted DR algorithm may cause a large amount of extra overhead, and the low-latency requirement of small flows is not considered. Optimizing the routing of new flows while combining it with a rerouting mechanism can therefore achieve a better load balancing effect.
Disclosure of Invention
Aiming at the above technical problems, the invention combines an initial routing algorithm with a rerouting algorithm and provides an SDN (Software Defined Network) based data center network multipath dynamic load balancing method (Multipath Transmission Dynamic Load Balance, MTDLB), improving on traditional data center load planning methods.
An SDN-based data center network multipath dynamic load balancing method comprises the following steps:
step 1: monitoring the network state and acquiring real-time state information of each device in the network;
step 2: classifying flows by size; the edge switch is responsible for judging flow size: when it receives a flow from a host, a flow smaller than 10% of the link capacity is defined as a small flow, and a flow larger than 10% of the link capacity as a large flow;
step 3: executing the initial routing algorithm; after step 2 has separated large and small flows, small flows are routed by ECMP, a hash operation over the flow's 10-tuple selecting the forwarding path; for large flows, the large-flow routing algorithm is adopted;
step 4: executing the rerouting algorithm; calculating the average link utilization and the link load variance; when the average link utilization is greater than 30% and the variance of the links' used bandwidth exceeds a set threshold, carrying out rerouting: the flow occupying the most bandwidth on the link with the highest utilization is rerouted to the path with the largest bottleneck bandwidth, and the upper limit on the number of reroutes is set to 2;
step 5: for each monitoring period, steps 1 to 4 are repeatedly executed until the monitoring period ends.
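As an illustrative sketch of steps 2 and 3 (the 10% threshold comes from the method itself; the function names and callback structure are hypothetical, not from the patent), the size classification and routing dispatch could look like:

```python
ELEPHANT_THRESHOLD = 0.10  # step 2: flows above 10% of link capacity are "large"

def classify_flow(flow_bw, link_capacity):
    """Return 'large' if the flow exceeds 10% of link capacity, else 'small'."""
    return "large" if flow_bw > ELEPHANT_THRESHOLD * link_capacity else "small"

def route_flow(flow_bw, link_capacity, ecmp_route, large_flow_route):
    """Step 3 dispatch: small flows go to ECMP hashing, large flows to the
    probability-weighted large-flow routing algorithm."""
    if classify_flow(flow_bw, link_capacity) == "small":
        return ecmp_route()
    return large_flow_route()
```

The split keeps latency-sensitive mice on cheap stateless hashing while only elephants pay for the weighted path computation.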
Further, the data center network adopts a fat tree topology as the network model, comprising a core layer, an aggregation layer and an edge layer, each composed of switches; the aggregation switches and edge switches form a number of Pods, and each aggregation switch in a Pod is connected to the different edge switches in the same Pod; let k denote the number of Pods, each Pod having k/2 aggregation switches and k/2 edge switches; the core layer has k²/4 core switches, the aggregation layer has k²/2 aggregation switches, and the edge layer has k²/2 edge switches and k³/4 hosts; in addition, each edge switch is connected to k/2 hosts.
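A quick numeric check of these fat-tree counts (a minimal sketch; the formulas k²/4, k²/2 and k³/4 are as stated above, the function name is illustrative):

```python
def fat_tree_counts(k):
    """Device counts for a fat tree with k Pods (k must be even)."""
    assert k % 2 == 0, "fat tree requires an even k"
    return {
        "core": k * k // 4,          # k^2/4 core switches
        "aggregation": k * k // 2,   # k^2/2 aggregation switches
        "edge": k * k // 2,          # k^2/2 edge switches
        "hosts": k ** 3 // 4,        # k^3/4 hosts (k/2 per edge switch)
    }
```

For k = 4 this yields the 4 core switches, 8 aggregation switches, 8 edge switches and 16 hosts used in the embodiment.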
Further, in step 3, the large-flow routing algorithm adopted for large flows in the initial routing stage calculates the weights w of the available paths based on the remaining available link bandwidth, the link capacity, the bandwidth required by the large flow, and the number of links on each path, and then selects a forwarding path probabilistically according to the path weights, with the following specific steps:
step 3-1, finding out all available paths from a large stream source to a destination;
step 3-2, calculating to obtain the utilization rate of the link bandwidth and the bottleneck bandwidth of the kth path, obtaining the utilization rate of the residual link of the kth path on the basis, summarizing to obtain the path satisfaction, and finally obtaining the weight of the path according to the path satisfaction;
step 3-3, obtaining the probability of selecting each path for transmission as the ratio of the kth path's weight to the sum of all available path weights, normalizing the probabilities to obtain the sub-interval of [0,1) corresponding to each path, finally generating a random number on [0,1), and selecting as the transmission path the path whose sub-interval the random number falls into.
Further, in step 3-1, for the fat tree topology, J represents the set of all paths from source to destination, J = {j_1, j_2, j_3, ..., j_k, ...}, where j_k denotes the kth path.
Further, in step 3-2, the link bandwidth utilization U_ki is the ratio of the link's used bandwidth to the link capacity:

U_ki = S_ki / C (1)

where S_ki represents the used bandwidth of the ith link in the kth path, C represents the link capacity, and i indexes the links;
the k-th path bottleneck bandwidth is the maximum value of the used bandwidth in each link is B k
B k =max{S k1 ,S k2 ,...,S kn } (2)
Wherein n is the number of links in the path;
the k-th path residual link utilization is the ratio of the minimum residual bandwidth to the link capacity after the transmission of the stream is U s
U s =((C-B k )-f)/C (3)
Where f represents the stream bandwidth;
and then the path satisfaction S is obtained:

S = 1 if U_s ≤ 10%; S = 3 if 10% < U_s ≤ 50%; S = 9 if U_s > 50% (4)
finally, the weight of the path is defined:

W_k = S / n (5)
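As a sketch of the per-path weight computation of formulas (2)-(5) (all names are illustrative; note that the exact combination of satisfaction and link count in formula (5) is only available as an image in the original, so W = S/n is an assumed reading consistent with the surrounding text):

```python
def path_weight(used_bw, capacity, flow_bw):
    """Compute U_s, S and W for one candidate path.

    used_bw: used bandwidths S_ki of the n links on path k.
    The combination W = S / n is an ASSUMED reading of formula (5).
    """
    n = len(used_bw)
    bottleneck = max(used_bw)                             # (2): B_k, most-used link
    u_s = ((capacity - bottleneck) - flow_bw) / capacity  # (3): residual utilization
    if u_s <= 0.10:        # heavy load: unsuited to large flows
        s = 1
    elif u_s <= 0.50:      # medium load: can carry large flows
        s = 3
    else:                  # light load: well suited to large flows
        s = 9
    return u_s, s, s / n   # (5), assumed: satisfaction scaled down by hop count
```

Dividing by the hop count n penalizes longer paths, matching the text's remark that the number of links accounts for transmission delay.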
Further, in step 3-3, suppose there are n available paths between the source node and the destination node; the weight of the kth path is W_k, and the probability of selecting the kth path is P_k, the ratio of the kth path's weight to the sum of all available path weights, calculated as:

P_k = W_k / Σ_{j=1}^{n} W_j (6)
For each flow a path is selected at random according to the probabilities P_k; because the probabilities sum to 1, k intervals are designed for the k paths, the interval corresponding to the kth path being given by formula (7):

[ Σ_{j=1}^{k-1} P_j , Σ_{j=1}^{k} P_j ) (7)

ensuring the interval [0,1) is fully covered; a random number in [0,1) is generated, and the path whose interval it falls into is selected to forward the flow, completing the selection of the transmission path.
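Formulas (6)-(7) amount to weighted random selection over cumulative sub-intervals of [0,1); a minimal sketch (names illustrative, `rnd` exposed only to make the choice deterministic for testing):

```python
import random

def choose_path(weights, rnd=None):
    """Pick index k with probability W_k / sum(W) by mapping a random
    number in [0, 1) onto cumulative sub-intervals (formulas (6)-(7))."""
    total = sum(weights)
    probs = [w / total for w in weights]          # (6): P_k
    r = random.random() if rnd is None else rnd   # random number in [0, 1)
    cumulative = 0.0
    for k, p in enumerate(probs):
        cumulative += p
        if r < cumulative:       # r falls in the kth sub-interval of (7)
            return k
    return len(weights) - 1      # guard against float rounding near 1.0
```

Paths with larger weights own wider sub-intervals, so the random number lands on them more often, which is exactly the "more satisfied, more likely selected" behavior described above.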
Further, in step 4, the average link utilization and the variance of the used bandwidth serve as the trigger conditions of the rerouting mechanism, where:

Load(t) = (1/M) Σ_{i=1}^{M} load_i(t) (8)

U(t) = Load(t) / C (9)

σ(t) = (1/M) Σ_{i=1}^{M} (load_i(t) − Load(t))² (10)

in which load_i(t) represents the bandwidth used by link i at time t, Load(t) the average used bandwidth of the links, U(t) the average link utilization, and M the total number of links in the network; the larger σ(t) is, the more dispersed the links' used bandwidths, the more inconsistent the link utilizations, and the more unbalanced the network load; further, a limit value CW = 2 is set, and once the number of consecutive reroutes in one monitoring period exceeds 2, rerouting is terminated.
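Formulas (8)-(10) and the trigger condition can be sketched as follows (names are illustrative; the variance threshold is the patent's empirically derived parameter and is left as an argument rather than given a value):

```python
def link_metrics(loads, capacity):
    """Average used bandwidth Load(t), average utilization U(t),
    and used-bandwidth variance sigma(t) -- formulas (8)-(10)."""
    m = len(loads)
    avg_load = sum(loads) / m                          # (8)
    utilization = avg_load / capacity                  # (9)
    variance = sum((l - avg_load) ** 2 for l in loads) / m  # (10)
    return avg_load, utilization, variance

def should_reroute(loads, capacity, var_threshold):
    """Trigger: U(t) >= 30% AND sigma(t) at or above the (empirical) threshold."""
    _, utilization, variance = link_metrics(loads, capacity)
    return utilization >= 0.30 and variance >= var_threshold
```

Requiring both conditions is what filters out transient bursts: a lightly loaded network never triggers regardless of variance, and a loaded but evenly balanced network never triggers either.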
Further, the specific flow of step 4 is as follows:
step 4-1: initializing the rerouting counter CW = 0;
step 4-2: calculating the average used bandwidth Load(t) and the used-bandwidth variance σ(t) of the network links by formulas (8) and (10), then the average link utilization U(t) by formula (9); when U(t) ≥ 30% and σ(t) exceeds its threshold, going to step 4-3, otherwise ending this rerouting;
step 4-3: comparing the utilizations load_i(t) of all links, finding the link with the largest load_i(t) and, on it, the flow F_max occupying the most bandwidth;
step 4-4: selecting the path with the largest bottleneck bandwidth as the flow's rerouting path, rerouting F_max onto this path, and incrementing CW by 1;
step 4-5: if CW > 2, ending this rerouting, otherwise going to step 4-2.
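The rerouting loop of steps 4-1 to 4-5 might be sketched as below. The controller hooks (`flows_on_link`, `path_bottlenecks`, `move_flow`) are hypothetical, and the "bottleneck bandwidth" of a candidate path is read here as its residual bottleneck bandwidth, i.e. the capacity still available on its most loaded link:

```python
def rerouting_round(link_loads, flows_on_link, path_bottlenecks, capacity,
                    var_threshold, move_flow, max_cw=2):
    """One monitoring period of steps 4-1..4-5 (hypothetical controller hooks).

    link_loads: {link: used_bw}; flows_on_link(link) -> {flow: bw};
    path_bottlenecks(flow) -> {path: residual_bottleneck_bw};
    move_flow(flow, path) installs the new route. Returns reroute count."""
    cw = 0                                                  # step 4-1
    while True:
        loads = list(link_loads.values())
        avg = sum(loads) / len(loads)                       # formula (8)
        var = sum((l - avg) ** 2 for l in loads) / len(loads)  # formula (10)
        if avg / capacity < 0.30 or var < var_threshold:    # step 4-2: no trigger
            return cw
        busiest = max(link_loads, key=link_loads.get)       # step 4-3: hottest link
        flows = flows_on_link(busiest)
        f_max = max(flows, key=flows.get)                   # its heaviest flow
        paths = path_bottlenecks(f_max)                     # step 4-4
        target = max(paths, key=paths.get)                  # widest residual bottleneck
        move_flow(f_max, target)
        cw += 1
        if cw > max_cw:                                     # step 4-5: cap reached
            return cw
```

Capping the loop at `max_cw` moves per period is what keeps a fluctuating network from thrashing the controller with back-to-back reroutes.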
Compared with the prior art, the invention has the following beneficial effects:
(1) Compared with the traditional load balancing algorithm, the method combines the initial routing algorithm and the rerouting algorithm, and strengthens the load balancing effect.
(2) The invention adopts different algorithms aiming at the large and small flows, thereby meeting the low delay requirement of the small flows and the high throughput requirement of the large flows. And a dynamic routing algorithm based on a probability selection algorithm is adopted for the large flow, so that the problem that the large flow is distributed on the same path due to untimely updating of path information is avoided.
(3) The upper limit of the rerouting times in a single period is set, unnecessary rerouting caused by network fluctuation is avoided, and the overhead of a controller is reduced.
Drawings
Fig. 1 shows a fat tree topology network diagram (k=4) in an embodiment of the invention.
Fig. 2 is a flow chart of an initial routing algorithm in an embodiment of the present invention.
Figure 3 is a flow chart of a rerouting algorithm in an embodiment of the present invention.
Detailed Description
The technical method of the present invention will be described in further detail with reference to the accompanying drawings.
The present invention adopts the Fat-Tree topology as the network model, as shown in fig. 1. The fat tree topology contains 3 layers: the core layer, the aggregation layer, and the edge layer. The aggregation switches and edge switches form a number of Pods, and each aggregation switch in a Pod is connected to the different edge switches in the same Pod. Let k denote the number of Pods; each Pod has k/2 aggregation switches and k/2 edge switches, and each edge switch is connected to k/2 hosts, for a total of k²/4 core switches, k²/2 aggregation switches, k²/2 edge switches and k³/4 hosts. In the present invention the value of k is 4, that is, there are 4 core switches, 8 aggregation switches, 8 edge switches, and 16 hosts in total.
In the embodiment of the invention, J denotes the set of all paths from source to destination, J = {j_1, j_2, j_3, ..., j_k, ...}, where j_k represents the kth path; i indexes the links, f represents the flow bandwidth, C the link capacity, S_ki the used bandwidth of the ith link in the kth path, and n the number of links in the path (which may differ between paths). The link bandwidth utilization U_ki is the ratio of the bandwidth used by the link to the link capacity:

U_ki = S_ki / C (1)

The bottleneck bandwidth B_k of the kth path is the maximum used bandwidth over its links:

B_k = max{S_k1, S_k2, ..., S_kn} (2)
The embodiment of the invention is divided into 2 parts to be executed:
the first stage performs an initial routing algorithm, the specific steps of which are shown in fig. 2. Network state information such as link used bandwidth, switch forwarding rate, etc. is first obtained. Secondly, when the edge switch detects a new flow, detecting whether the size of the flow exceeds 10% of the link capacity, if not, judging the flow as a small flow, and routing the flow in an ECMP mode; if the size of the flow exceeds 10% of the link capacity, it is determined as a large flow, and a large flow routing algorithm based on probability selection is adopted.
ECMP algorithm introduction:
in fat tree topology networks with multipath transmission characteristics, each stream has multiple available transmission paths, and the ECMP algorithm opens up a buffer space for all streams for storing the multiple available paths. When the small flow reaches the edge switch, the ECMP algorithm takes the five-tuple of the data flow as the input value of the hash function to carry out modular computation on the number of available paths to obtain an output value, and then the forwarding is completed according to the preset mapping relation between the output value of the hash function and the equivalent path. Taking the data stream from host H1 to host H5 as an example, according to the ECMP algorithm, there are 4 best equivalent paths (S1, S9, S17, S11, S3), (S1, S9, S18, S11, S3), (S1, S10, S19, S12, S3) and (S1, S10, S20, S12, S3), respectively, as shown in fig. 1. And then, for each stream, according to the calculated modulo value, 0,1, 2 and 3 respectively correspond to 4 equivalent paths, and the corresponding paths are selected for transmission.
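The hash-modulo mapping just described can be illustrated as follows (the hash function is a stand-in chosen for reproducibility; real switches use their own hardware hash over the five-tuple):

```python
import hashlib

def ecmp_path_index(five_tuple, num_paths=4):
    """Map a flow's five-tuple (src, dst, sport, dport, proto) to one of
    the equal-cost path indices by hash-modulo, as in the ECMP step."""
    key = "|".join(str(field) for field in five_tuple).encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return digest % num_paths  # same flow -> same path index, every time
```

Because the mapping is deterministic per flow, all packets of one small flow stay on one path (no reordering), while distinct flows spread across the 4 equivalent paths of fig. 1.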
The large flow routing algorithm firstly finds out a plurality of reachable paths in a fat tree topology with multipath characteristics according to source and destination addresses of the flow, calculates bottleneck bandwidths of the paths according to a formula (2), and selects paths with path bottleneck bandwidths larger than flow transmission bandwidths as available paths. The next step is to calculate the path weights w.
Firstly, the ratio of the kth path's minimum remaining bandwidth after the flow is transmitted to the link capacity is set as U_s:

U_s = ((C - B_k) - f) / C (3)
And then the path satisfaction S is obtained:

S = 1 if U_s ≤ 10%; S = 3 if 10% < U_s ≤ 50%; S = 9 if U_s > 50% (4)

The path satisfaction S expresses how satisfied a flow is with an available path: the larger the path's remaining bottleneck bandwidth, the larger the residual link utilization U_s and the larger the satisfaction S, meaning the flow is more satisfied with the path and the path is more likely to be selected for transmission. In the method, a path is placed in one of 3 states according to the residual link utilization after the flow is transmitted: when U_s ≤ 10%, the path is judged to be in a heavy-load state, unsuited to carrying large flows, with weight 1; when 10% < U_s ≤ 50%, the path is judged to be in a medium-load state, able to carry large flows, with weight 3; when U_s > 50%, the path is judged to be in a light-load state, well suited to carrying large flows, with weight 9.
Finally, the weight of the path is defined:

W_k = S / n (5)
the path weight comprehensively considers the number of links, the bottleneck bandwidth and the stream bandwidth, and achieves the aim of load balancing as much as possible. The bottleneck link bandwidth is considered because the bottleneck link more truly reflects the path available bandwidth situation. In the flow transmission process, the larger the utilization rate of the rest links of the paths means the larger the bottleneck bandwidth, and the use of the paths to transmit the flow can effectively reduce the load balancing variance, so that the network load is more balanced, small flows are not easy to block, and the time delay of the small flows is not easy to influence. Therefore, in the invention, the path weight increases along with the increase of the utilization rate of the residual links, and in addition, the addition of the number of the links considers the transmission delay of the stream, thereby improving the transmission efficiency.
After the weights of the available paths are calculated, the probability that all the available paths are selected as streaming paths is calculated by taking the probability selection method weights as reference factors, and finally the streaming is transmitted by selecting the paths according to the probability.
The probability selection algorithm is a common flow forwarding method, can greatly reduce the influence of untimely updating of the link state, and can better realize load balancing.
Assuming that n available paths exist between the source node and the destination node, the weight of the kth path is W_k, and the probability of selecting the kth path is P_k, the ratio of the kth path's weight to the sum of all available path weights, calculated as:

P_k = W_k / Σ_{j=1}^{n} W_j (6)
for each stream, according to probability P k The method for randomly selecting the paths comprises the following steps: because the sum of the probabilities is 1, fork paths are designed, k sections are designed, and the section corresponding to the kth path is shown in formula (7):
Figure BDA0002994155620000112
ensure that the interval of [0,1 ] is completed. Generating random numbers between 0 and 1), and selecting corresponding paths to forward the flow in the interval.
For paths with high probability, the probability space is relatively large, so that the generated random numbers are more likely to fall on the paths with high probability.
So far, after the first stage is executed, entering the second stage: and a rerouting stage.
The specific steps of rerouting are shown in fig. 3. Network state information, such as the links' used bandwidth and the switches' forwarding rates, is first obtained. From the used-bandwidth information of all links, the network's average used bandwidth is computed by formula (8), the average link utilization by formula (9), and the variance of the links' used bandwidth by formula (10). Once these three values are obtained, the rerouting trigger is evaluated: when the average link utilization reaches 30% and the variance of the links' used bandwidth exceeds its threshold, the rerouting mechanism is triggered; the most congested link, i.e. the link with the highest utilization, is found, the flow occupying the most bandwidth on that link is rescheduled, all the flow's available paths and their bottleneck bandwidths are computed as in the first stage, and the path with the largest bottleneck bandwidth is selected as the flow's scheduling path. Meanwhile an upper limit CW = 2 on the number of reroutes is set, so that at most 2 large-flow reschedulings are executed per period, avoiding the waste of network resources caused by continuous rerouting.
Load(t) = (1/M) Σ_{i=1}^{M} load_i(t) (8)

U(t) = Load(t) / C (9)

σ(t) = (1/M) Σ_{i=1}^{M} (load_i(t) − Load(t))² (10)
load_i(t) represents the bandwidth used by link i at time t, Load(t) the average used bandwidth of the links, U(t) the average link utilization, and M the total number of links in the network. The larger σ(t) is, the more dispersed the links' used bandwidths, the more inconsistent the link utilizations, and the more unbalanced the network load. When the network is in a low-load state, load imbalance does not greatly affect network performance, but as the network load grows, load imbalance directly affects network throughput and traffic transmission delay. The flow is as follows:
step 1: initializing CW = 0;
step 2: calculating the average used bandwidth Load(t) and the used-bandwidth variance σ(t) of the network links by formulas (8) and (10), then the average link utilization U(t) by formula (9). When U(t) ≥ 30% and σ(t) exceeds its threshold (the threshold value is derived empirically), going to step 3; otherwise ending this rerouting;
step 3: comparing the utilizations load_i(t) of all links, finding the link with the largest load_i(t) and, on it, the flow F_max occupying the most bandwidth;
step 4: selecting the path with the largest bottleneck bandwidth as the flow's rerouting path, rerouting F_max onto this path, and incrementing CW by 1;
step 5: if CW > 2, ending this rerouting; otherwise going to step 2.
The above description covers merely preferred embodiments of the present invention; the scope of the invention is not limited to these embodiments, and all equivalent modifications or variations made according to the present disclosure fall within the scope of the claims.

Claims (6)

1. An SDN-based data center network multipath dynamic load balancing method, characterized in that the method comprises the following steps:
step 1: monitoring the network state and acquiring real-time state information of each device in the network;
step 2: classifying flows by size; the edge switch is responsible for judging flow size: when it receives a flow from a host, a flow smaller than 10% of the link capacity is defined as a small flow, and a flow larger than 10% of the link capacity as a large flow;
step 3: executing the initial routing algorithm; after step 2 has separated large and small flows, small flows are routed by ECMP, a hash operation over the flow's 10-tuple selecting the forwarding path; for large flows, the large-flow routing algorithm is adopted;
in step 3, the large-flow routing algorithm adopted for large flows in the initial routing stage calculates the weights w of the available paths based on the remaining available link bandwidth, the link capacity, the bandwidth required by the large flow, and the number of links on each path, and then selects a forwarding path probabilistically according to the path weights, with the following specific steps:
step 3-1, finding out all available paths from a large stream source to a destination;
step 3-2, calculating to obtain the utilization rate of the link bandwidth and the bottleneck bandwidth of the kth path, obtaining the utilization rate of the residual link of the kth path on the basis, summarizing to obtain the path satisfaction, and finally obtaining the weight of the path according to the path satisfaction;
step 3-3, obtaining the probability of selecting each path for transmission as the ratio of the kth path's weight to the sum of all available path weights, normalizing the probabilities to obtain the sub-interval of [0,1) corresponding to each path, finally generating a random number on [0,1), and selecting as the transmission path the path whose sub-interval the random number falls into;
step 4: executing the rerouting algorithm; calculating the average link utilization and the link load variance; when the average link utilization is greater than 30% and the variance of the links' used bandwidth exceeds a set threshold, carrying out rerouting: the flow occupying the most bandwidth on the link with the highest utilization is rerouted to the path with the largest bottleneck bandwidth, and the upper limit on the number of reroutes is set to 2;
the specific flow of the step 4 is as follows:
step 4-1: initializing the rerouting counter CW = 0;
step 4-2: calculating the average used bandwidth Load(t) and the used-bandwidth variance σ(t) of the network links, then the average link utilization U(t); when U(t) ≥ 30% and σ(t) exceeds its threshold, going to step 4-3, otherwise ending this rerouting;
step 4-3: comparing the used bandwidth load_i(t) of all links, finding the link with the largest load_i(t) and, on that link, the flow F_max occupying the most bandwidth;
step 4-4: selecting the path with the largest bottleneck bandwidth as the rerouting path for the flow, rerouting F_max onto this path, and adding 1 to CW;
step 4-5, if CW >2, ending the rerouting, otherwise turning to step 4-2;
step 5: for each monitoring period, steps 1 to 4 are repeatedly executed until the monitoring period ends.
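Steps 4-3 and 4-4 above amount to two greedy selections: pick the hottest link, pick its biggest flow, and pick the target path. A minimal Python sketch (the data-structure and function names are illustrative assumptions, not part of the patent):

```python
def pick_reroute(link_flows, path_bottlenecks):
    """Greedy selections of steps 4-3 and 4-4.

    link_flows: {link: {flow: used bandwidth}} (illustrative structure)
    path_bottlenecks: {candidate path: bottleneck bandwidth}
    Returns (flow to move, target path)."""
    # Step 4-3: the most loaded link, then its most bandwidth-hungry flow.
    hottest = max(link_flows, key=lambda l: sum(link_flows[l].values()))
    f_max = max(link_flows[hottest], key=link_flows[hottest].get)
    # Step 4-4: reroute onto the path with the largest bottleneck bandwidth,
    # as the claim states.
    target = max(path_bottlenecks, key=path_bottlenecks.get)
    return f_max, target
```

In a controller this would run inside the CW-bounded loop of steps 4-2 to 4-5, recomputing link statistics after each move.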
2. The SDN-based data center network multipath dynamic load balancing method of claim 1, wherein: the data center network adopts a fat-tree topology as the network model, the fat-tree topology comprising a core layer, an aggregation layer and an edge layer, each layer consisting of switches; the aggregation switches and edge switches form a plurality of Pods, and the aggregation switches in each Pod are each connected to the different edge switches of the same Pod; let k denote the number of Pods, each Pod having k/2 aggregation switches and k/2 edge switches; the core layer has k^2/4 core switches, the aggregation layer has k^2/2 aggregation switches, the edge layer has k^2/2 edge switches, and there are k^3/4 hosts; in addition, each edge switch is connected to k/2 hosts.
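The element counts in claim 2 follow directly from the parameter k. A small helper, offered only as a hedged illustration of the claimed topology sizes:

```python
def fat_tree_counts(k):
    """Switch and host counts of the k-pod fat tree described in claim 2."""
    assert k % 2 == 0, "the fat-tree parameter k must be even"
    core = k ** 2 // 4          # k^2/4 core switches
    aggregation = k ** 2 // 2   # k pods x k/2 aggregation switches each
    edge = k ** 2 // 2          # k pods x k/2 edge switches each
    hosts = k ** 3 // 4         # each of the k^2/2 edge switches serves k/2 hosts
    return core, aggregation, edge, hosts
```

For example, k = 4 gives 4 core switches, 8 aggregation switches, 8 edge switches and 16 hosts.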
3. The SDN-based data center network multipath dynamic load balancing method of claim 1, wherein: in step 3-1, for the fat-tree topology, J denotes the set of all paths from source to destination, with J = {j_1, j_2, j_3, ..., j_k, ...}, where j_k represents the kth path.
4. The SDN-based data center network multipath dynamic load balancing method of claim 1, wherein: in step 3-2, the link bandwidth utilization U_ki is the ratio of the bandwidth used by the link to the link capacity:
U_ki = S_ki / C    (1)
wherein S_ki represents the used bandwidth of the ith link in the kth path, C represents the link capacity, and i indexes the links;
the bottleneck bandwidth B_k of the kth path is the maximum of the used bandwidths of its links:
B_k = max{S_k1, S_k2, ..., S_kn}    (2)
wherein n is the number of links in the path;
the residual link utilization U_s of the kth path is the ratio of the minimum remaining bandwidth after transmitting the flow to the link capacity:
U_s = ((C - B_k) - f) / C    (3)
where f represents the flow's bandwidth;
and then obtaining the path satisfaction S:
[equation (4) appears only as an image in the source and is not reproduced]
finally, defining the weight of the path from the path satisfaction:
[equation (5) appears only as an image in the source and is not reproduced]
5. The SDN-based data center network multipath dynamic load balancing method of claim 1, wherein: in step 3-3, the weight of the kth path between the source and destination nodes is denoted W_k, and the probability of selecting the kth path is P_k; its value is the ratio of the kth path's weight to the sum of all available path weights, calculated as:
P_k = W_k / Σ_{j∈J} W_j    (6)
for each flow, a path is selected randomly according to the probability P_k; since the probabilities sum to 1, k intervals are designed for the k paths, the interval corresponding to the kth path being as in formula (7):
[ Σ_{j=1}^{k-1} P_j , Σ_{j=1}^{k} P_j )    (7)
so that the intervals together exactly cover [0, 1); a random number in [0, 1) is then generated, and the flow is forwarded on the path whose interval the number falls into, completing the selection of the flow's transmission path.
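The normalization of equation (6) and the interval lookup of equation (7) can be sketched as follows (the function name and the `rng` hook are illustrative assumptions):

```python
import random

def choose_path(weights, rng=random.random):
    """Turn path weights W_k into probabilities P_k (eq. (6)), lay them out
    as consecutive sub-intervals of [0, 1) (eq. (7)), and return the index
    of the interval that a random number falls into."""
    total = sum(weights)
    probs = [w / total for w in weights]    # P_k = W_k / sum of all weights
    r = rng()                               # random number in [0, 1)
    upper = 0.0
    for k, p in enumerate(probs):
        upper += p                          # right end of the kth sub-interval
        if r < upper:
            return k
    return len(weights) - 1                 # guard against floating-point round-off
```

With weights [1, 2, 7], path 2 is chosen with probability 0.7, so heavier-weighted paths absorb proportionally more flows.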
6. The SDN-based data center network multipath dynamic load balancing method of claim 4, wherein: in step 4, the average link utilization and the used-bandwidth variance are taken as the trigger conditions of the rerouting mechanism, wherein:
Load(t) = (1/M) Σ_{i=1}^{M} load_i(t)    (8)
U(t) = Load(t) / C    (9)
o(t) = (1/M) Σ_{i=1}^{M} (load_i(t) - Load(t))^2    (10)
wherein load_i(t) represents the bandwidth used by link i at time t, Load(t) represents the average used bandwidth of the links, U(t) represents the average link utilization, and M represents the total number of links in the network; the larger o(t) is, the more dispersed the links' used bandwidths are, the less uniform the link utilization, and the more unbalanced the network load; further, a limit value CW = 2 is set, and once the number of consecutive reroutes exceeds 2 within one monitoring period, rerouting is terminated.
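Equations (8)-(10) and the step-4 trigger test can be sketched as follows (names are illustrative assumptions; o(t) is the population variance of the per-link used bandwidths):

```python
def reroute_trigger(link_loads, capacity, var_threshold):
    """Mean used bandwidth Load(t) (eq. (8)), mean utilization U(t)
    (eq. (9)) and used-bandwidth variance o(t) (eq. (10)) over the M
    links, plus the step-4 trigger test (U(t) >= 30% and o(t) over
    the preset threshold)."""
    M = len(link_loads)
    load_t = sum(link_loads) / M                          # eq. (8)
    u_t = load_t / capacity                               # eq. (9)
    o_t = sum((l - load_t) ** 2 for l in link_loads) / M  # eq. (10)
    return load_t, u_t, o_t, (u_t >= 0.30 and o_t >= var_threshold)
```

For two links carrying 80 and 20 units on 100-unit links, U(t) is 0.5 and o(t) is 900, so rerouting fires for any variance threshold at or below 900.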
CN202110324811.7A 2021-03-26 2021-03-26 SDN-based data center network multipath dynamic load balancing method Active CN113098789B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110324811.7A CN113098789B (en) 2021-03-26 2021-03-26 SDN-based data center network multipath dynamic load balancing method


Publications (2)

Publication Number Publication Date
CN113098789A CN113098789A (en) 2021-07-09
CN113098789B true CN113098789B (en) 2023-05-02

Family

ID=76669785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110324811.7A Active CN113098789B (en) 2021-03-26 2021-03-26 SDN-based data center network multipath dynamic load balancing method

Country Status (1)

Country Link
CN (1) CN113098789B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113746734B (en) * 2021-07-30 2023-04-28 苏州浪潮智能科技有限公司 Traffic forwarding method, device, equipment and medium
CN113938434A (en) * 2021-10-12 2022-01-14 上海交通大学 Large-scale high-performance RoCEv2 network construction method and system
CN114448899A (en) * 2022-01-20 2022-05-06 天津大学 Method for balancing network load of data center
CN115134304B (en) * 2022-06-27 2023-10-03 长沙理工大学 Self-adaptive load balancing method for avoiding data packet disorder of cloud computing data center
CN115396357B (en) * 2022-07-07 2023-10-20 长沙理工大学 Traffic load balancing method and system in data center network
CN116032829B (en) * 2023-03-24 2023-07-14 广东省电信规划设计院有限公司 SDN network data stream transmission control method and device

Citations (1)

Publication number Priority date Publication date Assignee Title
CN107809393A (en) * 2017-12-14 2018-03-16 Zhengzhou Yunhai Information Technology Co., Ltd. SDN-based link load balancing algorithm and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN109547340B (en) * 2018-12-28 2020-05-19 西安电子科技大学 SDN data center network congestion control method based on rerouting


Also Published As

Publication number Publication date
CN113098789A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN113098789B (en) SDN-based data center network multipath dynamic load balancing method
CN111277502B (en) Method for transmitting data by multi-link aggregation and transmitting equipment
CN107959633B (en) Multi-path load balancing method based on price mechanism in industrial real-time network
US6084858A (en) Distribution of communication load over multiple paths based upon link utilization
Long et al. LABERIO: Dynamic load-balanced routing in OpenFlow-enabled networks
JP4768192B2 (en) Method and system for controlling data traffic in a network
US8000239B2 (en) Method and system for bandwidth allocation using router feedback
US7765321B2 (en) Link state routing techniques
CN113785541A (en) System and method for immediate routing in the presence of errors
US7107344B2 (en) Connection allocation technology
WO2010127543A1 (en) Method and equipment for selecting terminal during congestion process
CN110351187A (en) Adaptive load-balancing method for path-switching granularity in data center networks
CN112350949A (en) Rerouting congestion control method and system based on flow scheduling in software defined network
Patil Load balancing approach for finding best path in SDN
CN109039941B (en) Adaptive packet scattering method based on path classification in data center network
WO2015168888A1 (en) Network congestion control method and controller
US7508766B2 (en) Packet routing
Hertiana et al. A joint approach to multipath routing and rate adaptation for congestion control in openflow software defined network
Nithin et al. Efficient load balancing for multicast traffic in data center networks using SDN
Chooprateep et al. Video path selection for traffic engineering in SDN
Devapriya et al. Enhanced load balancing and QoS provisioning algorithm for a software defined network
Nishimuta et al. Adaptive server and path switching scheme for content delivery network
CN109861923B (en) Data scheduling method and TOR switch
JP2003242065A (en) Contents selection, contents request acceptance control, congestion control method, contents control device, network resource control server device, portal server device and edge device
US11502941B2 (en) Techniques for routing data in a network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant