CN114448899A - Method for balancing network load of data center - Google Patents

Method for balancing network load of data center

Info

Publication number: CN114448899A
Application number: CN202210083868.7A
Authority: CN (China)
Prior art keywords: switch, data, network load, flow, queue
Legal status: Withdrawn (the status listed is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 郭得科 (Guo Deke), 陈雅文 (Chen Yawen), 李克秋 (Li Keqiu)
Current assignee: Tianjin University
Original assignee: Tianjin University
Application filed by Tianjin University
Priority to CN202210083868.7A; publication of CN114448899A

Classifications

    • H04L 47/125 — Avoiding congestion; recovering from congestion by balancing the load, e.g. traffic engineering (under H04L 47/00 Traffic control in data switching networks; H04L 47/10 Flow control, congestion control; H04L 47/12 Avoiding congestion, recovering from congestion)
    • H04L 41/145 — Network analysis or design involving simulating, designing, planning or modelling of a network (under H04L 41/00 Arrangements for maintenance, administration or management of data switching networks; H04L 41/14 Network analysis or design)
    • H04L 49/30 — Packet switching elements; peripheral units, e.g. input or output ports
    • H04L 49/50 — Packet switching elements; overload detection or protection within a single switching element
    • H04L 49/602 — Software-defined switches; multilayer or multiprotocol switching, e.g. IP switching

All entries fall under section H (Electricity), class H04 (Electric communication technique), subclass H04L (Transmission of digital information, e.g. telegraphic communication).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a method for balancing the network load of a data center. The method is based on a data center network load module comprising a server, a data network load unit and a switch; the data network load unit performs balanced configuration of the congested data flows at the switch and selects a target route for output. The method comprises the following steps: calculating the distribution of the data flows output by the server to obtain the queue length of each output port; calculating the actual data traffic transmitted by the port and the accumulated amount of the queue; judging whether the data amount in the port queue exceeds a preset threshold; according to whether the queue accumulation exceeds the preset threshold and to the actual data traffic, the data network load unit selects a parallel path for each data flow in the next round; if the queue accumulation exceeds the threshold, the window value of each flow passing through the switch is halved in the next round, otherwise it is incremented by one; the first step is then repeated. The invention solves the problem that data center network load balancing lacks a unified lightweight model.

Description

Method for balancing network load of data center
Technical Field
The invention belongs to the technical field of computer network application, and particularly relates to a method for balancing network load of a data center.
Background
With the rapid development of cloud computing and big-data applications, data centers are widely deployed and data center networks carry enormous traffic. To provide very high bandwidth, data center networks commonly adopt a Clos architecture, in which multiple parallel paths exist between any two hosts. In practice, however, the parallel paths are often under-utilized and transmission degenerates to a single path. The reason is that conventional ECMP routing selects a path by hashing the five-tuple of each flow; although simple and easy to implement, it is prone to hash collisions between long flows, which reduces link utilization, and ECMP has no mechanism for reacting to failed links.
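The hash-collision failure mode of ECMP can be illustrated with a minimal sketch; the hash function and the addresses below are illustrative, not from the patent:

```python
import hashlib

def ecmp_path(five_tuple, n_paths):
    """Pick a path index by hashing the flow five-tuple (illustrative ECMP)."""
    key = "|".join(map(str, five_tuple)).encode()
    digest = hashlib.md5(key).hexdigest()  # any stable hash works here
    return int(digest, 16) % n_paths

# Two long flows between different host pairs can still hash onto the
# same path, leaving the other parallel paths idle.
flows = [
    ("10.0.0.1", "10.0.1.1", 34567, 80, "tcp"),
    ("10.0.0.2", "10.0.1.2", 45678, 80, "tcp"),
]
paths = [ecmp_path(f, 4) for f in flows]
```

Because the mapping is static per five-tuple, it never rebalances away from a congested or failed link, which is exactly the weakness the background describes.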
Many new load balancing mechanisms have emerged in recent years: Presto, CONGA, DRILL, Hedera, CLOVE, Hermes, etc. More recent mechanisms such as AG and Luopan usually improve on the classical ones to suit large-scale data center networks or new traffic patterns. It is difficult to say in principle which method is best: each is demonstrated on selected scenarios and experiments that favor it, so the relative strengths of these methods, and the conditions under which each applies, remain unclear. If all of these methods could be analyzed mathematically within one framework, the underlying rules of data center network load balancing could be identified, and the field could advance more efficiently.
Today's data center networks keep growing in scale, and new load-balancing mechanisms must be validated on larger networks; however, few can afford a full replica of a production data center network, so a testbed or simulation software such as ns-2 is typically used. Packet-level simulators such as ns-2 are event-driven: every packet triggers events at each node it traverses on the time axis, so the required simulation time becomes very long for large traffic volumes and large topologies. For load balancing, packet-level simulation is not strictly necessary: load-balancing granularities include the packet level, the flowcell level, the flowlet level and the flow level, and simulation can be accelerated by coarsening the granularity.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a general modeling method for data-center network load balancing that covers different load-balancing mechanisms; a user can also model a new mechanism within it. Specifically, taking a network topology, a traffic data set and a load-balancing policy as inputs, the user obtains the quantities of general interest in the network, such as the evolution of each switch's queue length, the flow completion times, and the link throughput.
The invention is implemented by adopting the following technical scheme:
A method for balancing data center network load is based on a data center network load module comprising a server, a data network load unit and a switch, wherein the data network load unit performs balanced configuration of the congested data flows at the switch and selects a target route for output:
calculating the distribution of the data flows output by the server to obtain the queue length of each output port;
calculating the actual data traffic transmitted by the port and the accumulated amount of the queue;
judging whether the data amount in the port queue exceeds a preset threshold; according to whether the queue accumulation exceeds the preset threshold and to the actual data traffic, the data network load unit selects a parallel path for each data flow in the next round; if the queue accumulation exceeds the threshold, the window value of each flow passing through the switch is halved in the next round, otherwise it is incremented by one; the first step is then repeated; wherein the preset threshold condition is:

ECN_{i,j,k}(t) = 1, if Q_{1,i,j}(t) > ECN_T or Q_{2,j,k}(t) > ECN_T; otherwise ECN_{i,j,k}(t) = 0

wherein t is the iteration round, i is the index of the access-layer switch, j is the index of the second-layer switch, k is the index of the third-layer switch, ECN_T is the ECN queue-length marking threshold in the switch, Q is the switch queue length, Q_{1,i,j}(t) denotes the queue length of port j of the i-th layer-1 switch at the beginning of round t, and Q_{2,j,k}(t) denotes the queue length of port k of the j-th layer-2 switch at the beginning of round t.
Further, the data network load unit comprises a first data network load part and a second data network load part; the first part is arranged on the server and the second part is arranged on the switch.
Further, the actual data traffic transmitted by the port queue is obtained by the following formulas:

A^out_{i,j,k}(t) = min( A^in_{i,j,k}(t) + Q_{i,j,k}(t), B_{i,j,k} )

a^out_{h,i,j,k}(t) =
    a^in_{h,i,j,k}(t) + q_{h,i,j,k}(t),                                              if A^in_{i,j,k}(t) + Q_{i,j,k}(t) ≤ B_{i,j,k}
    q_{h,i,j,k}(t) + (B_{i,j,k} − Q_{i,j,k}(t)) · a^in_{h,i,j,k}(t) / A^in_{i,j,k}(t),   if Q_{i,j,k}(t) ≤ B_{i,j,k} < A^in_{i,j,k}(t) + Q_{i,j,k}(t)
    B_{i,j,k} · q_{h,i,j,k}(t) / Q_{i,j,k}(t),                                        if B_{i,j,k} < Q_{i,j,k}(t)

wherein: t is the iteration round; i, j and k index the switches of the first (access), second and third layers; d_h denotes the destination address of flow h; the k-th output port of the j-th switch of layer i is denoted S_{i,j,k}; a^out_{h,i,j,k}(t) denotes the amount of data of flow h actually transmitted by switch port S_{i,j,k} into the next-hop link; A^in_{i,j,k}(t) denotes the total amount of data actually input to switch port S_{i,j,k} during round t; a^in_{h,i,j,k}(t) denotes the amount of data of flow h actually input to switch port S_{i,j,k} during round t; Q_{i,j,k}(t) denotes the accumulated queue length at S_{i,j,k} at the beginning of round t; q_{h,i,j,k}(t) denotes the accumulated length of flow h at S_{i,j,k} at the beginning of round t; and B_{i,j,k} denotes the bandwidth of the link connected at port S_{i,j,k}.
Further, the data center network load module can model the four classical load-balancing mechanisms Hermes, CONGA, DRILL and Presto.
Advantageous effects
(1) The invention mathematically models the traffic distribution of a general data center network, including the evolution of the TCP window value of each flow, the queue lengths of the network switches, and the actual rates on the network links.
(2) The invention provides a mathematical modeling approach for data-center network load-balancing mechanisms and, as examples, models the four classical mechanisms Hermes, CONGA, DRILL and Presto. This solves the problem that data center network load balancing lacks a unified lightweight model and lays a theoretical basis for subsequent research on new load-balancing mechanisms.
Drawings
FIG. 1 is a general flow diagram of the universal data center network traffic load balancing mathematical modeling of the present invention;
FIG. 2 is an abstract modeling diagram of a data center network switch node in accordance with the present invention;
fig. 3 is a general flow diagram of an implementation of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the following detailed discussion of the present invention will be made with reference to the accompanying drawings and examples, which are only illustrative and not limiting, and the scope of the present invention is not limited thereby.
As shown in fig. 1, the user constructs the data center network topology from the topology description, the number of servers, and the arrival time and byte length of each data flow. Based on the congestion information obtainable from the general data-center traffic distribution model, such as the queue-length changes and the actual transmission rate of each link, the invention models the traffic load-balancing mechanism abstractly: the model's output is the weight of each active flow on each of the parallel paths in each round of RTT. For every hop (between server and switch, and between switches), the general traffic distribution model computes the input data amount, the switch queue accumulation, and the amount actually passing through the egress.
Step 1: constructing a data center network topology, as shown in fig. 2, which is a classic clos topology, setting the number of switch layers N, and setting the number of switches N in each layer1,n2,...,nNAnd the number m of servers. The kth port to the next layer of the ith switch is denoted Si,j,kThe link bandwidth of the egress connection is denoted as Bi,j,k. In each iteration of t, each port Si,j,kWith input data volume
Figure BDA0003480048030000041
Switch queue length Qi,j,k(t), output data amount
Figure BDA0003480048030000042
For the above variables, the destination for which the flow is maintained is different receiving ends dhAmount of data of
Figure BDA0003480048030000051
Step 2: as shown in fig. 3, the amount of incoming packets for the first tier switch is first calculated as the sum of the server flow windows under all the switches. And within each round of RTT (the round number is denoted as t), all tcp flows in the time send data to the first-layer switch according to the window size. Here the behavior of a simple version of the TCP congestion window value is simulated. And ending the operation until all the streams are sent. The user can print the actual transmission quantity of each link and the queue length output at the outlet of the switch in the process, so as to improve the load balancing mechanism through analysis and research.
The window value for each TCP flow l is Fl. In each round of RTT, the window value is typically incremented by one; if TCP perceives congestion (ECN is 1), the window value is halved.
Figure BDA0003480048030000052
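The simplified window rule just described (add one per RTT, halve on ECN) can be sketched as follows; the `min_window` floor is an assumption added so a flow keeps sending, not something stated in the text:

```python
def next_window(window, ecn, min_window=1):
    """One round of the simplified rule for F_l:
    halve on congestion (ECN == 1), otherwise add one."""
    if ecn:
        return max(min_window, window / 2)
    return window + 1

w = 10
history = []
for ecn in [0, 0, 1, 0]:  # two calm rounds, one congested round, one calm
    w = next_window(w, ecn)
    history.append(w)
# 10 -> 11 -> 12 -> 6 -> 7
```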
Step 3: As shown in fig. 3, in each switch of the data center network the traffic output of a node serves as the input of the next node. At every hop switch, the total amount of packets arriving at each input port is computed, then output ports are selected and the packets are distributed proportionally over them. How the egress port is chosen depends on the type of the switch: a core switch routes by the destination address of the flow, while a switch at a multipath branching point, such as a first-layer switch in a Clos topology, distributes according to the decision of the load-balancing mechanism (as shown in fig. 1, the egress port is selected from the flow's original destination address and the load balancer's decision).
In a first-layer switch, the input traffic of output port S_{1,j,k} equals the sum, over all active flows l connected to S_{1,j}, of the flow's window F_l(t) multiplied by the ratio R_{l,k}(t) of flow l on path k (the ratio values are derived in the modeling of the load-balancing mechanism):

A^in_{1,j,k}(t) = Σ_{l ∈ L_{1,j}} F_l(t) · R_{l,k}(t)

wherein L_{1,j} is the set of active flows at switch S_{1,j}.
Step 4: At the other switch layers, the input traffic A^in_{i,j,k}(t) of output port S_{i,j,k} is the sum of the corresponding parts of the output traffic of the upper-layer switches. The traffic newly arriving at output port S_{i,j,k} is appended to the queue. The input amount plus the amount currently buffered at the switch, Q_{i,j,k}(t), goes to the next hop as output, but cannot exceed the bandwidth cap B_{i,j,k}:

A^out_{i,j,k}(t) = min( A^in_{i,j,k}(t) + Q_{i,j,k}(t), B_{i,j,k} )

Since every link has a bandwidth cap, the part exceeding the link bandwidth accumulates in the egress queue of the current switch, and the part exceeding the maximum switch queue length Q^max is dropped:

Q_{i,j,k}(t+1) = min( A^in_{i,j,k}(t) + Q_{i,j,k}(t) − A^out_{i,j,k}(t), Q^max )
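The per-port update of step 4 (output capped by bandwidth, leftover queued, excess beyond the maximum queue length dropped) can be sketched minimally; the variable names are illustrative:

```python
def step_port(a_in_total, q, bandwidth, q_max):
    """One round at an egress port.

    a_in_total : total data arriving this round (A^in)
    q          : data already queued (Q)
    bandwidth  : per-round link capacity (B)
    q_max      : maximum queue length (Q^max)
    Returns (a_out, new_q, dropped)."""
    a_out = min(a_in_total + q, bandwidth)      # can't exceed the link cap
    leftover = a_in_total + q - a_out           # what didn't fit on the link
    new_q = min(leftover, q_max)                # queue caps at q_max
    dropped = leftover - new_q                  # the rest is lost
    return a_out, new_q, dropped

a_out, new_q, dropped = step_port(a_in_total=120, q=30, bandwidth=100, q_max=40)
# demand 150, send 100, leftover 50, queue caps at 40, drop 10
```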
And 5: calculating the accumulation amount of the next hop actual transmission data and the queue length in the step 4, wherein the destination address d is senthPartial flow rate
Figure BDA0003480048030000067
And in the data packets entering the next hop, the data packets in the sequence switch queue enter the next hop preferentially, and the next time the data packets enter the data volume of the previous hop in the current round. Therefore, in the calculation process, 3 cases can occur according to the input amount of the previous hop, the data packet length of the current switch queue and the bandwidth limit of the next hop:
the input data amount of the previous hop and the accumulated length of the switch queue are less than that of the next hopLink bandwidth of one hop, the condition being formulated as:
Figure BDA0003480048030000068
at this point, the next hop goes to the destination address dhThe actual amount of transmission is the length accumulated in the queue of the switch and the destination address d from the previous hop linkhSum of actual data amounts of:
Figure BDA0003480048030000069
and (3) emptying the queue:
Figure BDA00034800480300000610
Figure BDA00034800480300000611
when the first condition is not satisfied, according to whether the length of the current switch queue is greater than the upper limit bandwidth of the next hop link, the method can be further divided into 2 conditions in detail;
secondly, the queue length of the current switch is smaller than the upper limit bandwidth of the next hop link, and the condition is expressed by a formula:
Figure BDA00034800480300000612
at this time, the destination address d in the switchhThe input part can not be totally input into the next hop, and then the input part can be input into the next hop in a proportional mode:
Figure BDA0003480048030000071
to destination address d in next round of queuehThe amount of data of (a) is the amount of accumulated input data:
Figure BDA0003480048030000072
Figure BDA0003480048030000073
the queue length of the current switch is larger than the upper limit bandwidth of the next hop link: b isi,j,k<Qi,j,k(t) of (d). At this point, the destination address d in the switchhIn a proportional manner into the next hop:
Figure BDA0003480048030000074
Figure BDA0003480048030000075
in-queue to destination address dhIs an input to destination address dhPlus the accumulation amount of the previous round:
Figure BDA0003480048030000076
according to the three conditions, at the switch port S1,j,kThe actual transmission data amount of the next hop is formulated as:
Figure BDA0003480048030000077
Figure BDA0003480048030000078
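The three cases of step 5 can be sketched per destination as follows (queued data departs before newly arrived data; the names are illustrative):

```python
def forward(a_in, q, bandwidth):
    """Per-destination forwarding for one round.

    a_in, q : dict destination -> newly arrived / already queued amount
    Returns (a_out, q_next) dicts."""
    A_in, Q = sum(a_in.values()), sum(q.values())
    dests = set(a_in) | set(q)
    a_out, q_next = {}, {}
    for h in dests:
        ain_h, q_h = a_in.get(h, 0.0), q.get(h, 0.0)
        if A_in + Q <= bandwidth:        # case 1: everything fits, queue empties
            a_out[h] = ain_h + q_h
            q_next[h] = 0.0
        elif Q <= bandwidth:             # case 2: queue drains, input admitted pro rata
            share = (bandwidth - Q) * ain_h / A_in if A_in else 0.0
            a_out[h] = q_h + share
            q_next[h] = ain_h - share
        else:                            # case 3: only part of the queue departs
            a_out[h] = bandwidth * q_h / Q
            q_next[h] = ain_h + q_h - a_out[h]
    return a_out, q_next

out, nxt = forward({"d1": 50, "d2": 50}, {"d1": 20}, bandwidth=80)
# case 2: queue (20) drains fully, the remaining 60 of bandwidth is
# split over the 100 units of new input, 30 per destination
```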
step 6: and calculating the congestion information of the network, namely the ECN mark in the process.
The ECN mechanism feeds the path congestion information back to the sending end with each round of ACKs. When a switch queue exceeds the threshold (the ECN threshold on the switch queue in fig. 1), the packets are marked with ECN and the ACKs carry the congestion information back. The queue length of the current round is compared with the preset ECN threshold: in each round, for the path through second-layer switch j from first-layer switch i to third-layer switch k, if the queue length of any switch on the path exceeds the threshold, the path's ECN mark is 1; otherwise it is 0. Formulated as:

ECN_{i,j,k}(t) = 1, if Q_{1,i,j}(t) > ECN_T or Q_{2,j,k}(t) > ECN_T; otherwise 0

This information is used to compute the window value of each sending flow and supplies the load-balancing mechanism with the information needed for its load decisions. As shown in fig. 2, the ECN information is fed back to the flow-window computation and to the load balancer in the data center network distribution model.
As shown in fig. 2, the load balancer selects a parallel path for each flow in each round. Modeling a data-center traffic load-balancing mechanism means computing, for each flow, the proportion of its traffic assigned to the different parallel paths; different mechanisms differ mainly in the weight R_{l,k}(t) of flow l on each equal-cost path k. The modeling of four classical mechanisms is presented here as examples: Hermes, CONGA, DRILL and Presto. Hermes and Presto are implemented on the end systems, Hermes with congestion sensing and Presto without; CONGA and DRILL are implemented on the switches, CONGA with congestion sensing and DRILL without global congestion sensing (fig. 3 shows the load balancer placed at the end system or at the switch).
HERMES: in the hemmes, the server selects a new path to forward a stream when sending a new stream or after a certain delay. The hemmes classifies equal cost paths into GOOD, GRAY, BAD, and FAIL based on the number of ECN labels received per round and RTT measurements. In each path set, the server sends a new flow to the path with the least amount of data recently sent. If the RTT value and the round value of the received ECN are both larger than the threshold value, marking the ECN as BAD; if the RTT value and the round value of the received ECN are both smaller than the threshold value, marking the ECN as GOOD; if one is less than the threshold and the other is greater than the threshold, it is marked gray. In selecting a path, Hermes gives priority to the GOOD set, then the GRAY set, and finally the BAD set. In the same type of path set, at each server node i to the destination address dhIt maintains for each link p the most recently transmitted data quantity value
Figure BDA0003480048030000091
And then preferentially selecting the path with the least recently transmitted data. In server i, for each newActive flow j, the present invention selects path p which has the least amount of data to send recently. And the ratio of the path p selected by the stream l is set to 1 (R)l,k(t) ═ 1), and the remaining paths are 0. Then updated
Figure BDA0003480048030000092
Continuing to select a path for the next flow, calculating a path proportion value Rl,k(t)。
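The Hermes-style selection described above can be sketched minimally (grade paths from RTT and ECN fraction, then prefer the best grade breaking ties by least recently sent data); the thresholds and path names are illustrative:

```python
def classify(rtt, ecn_frac, rtt_thresh, ecn_thresh):
    """Grade a path from its RTT and its fraction of ECN-marked rounds."""
    high = (rtt > rtt_thresh, ecn_frac > ecn_thresh)
    if all(high):
        return "BAD"
    if not any(high):
        return "GOOD"
    return "GRAY"

def pick_path(grades, sent):
    """Prefer GOOD, then GRAY, then BAD; within a grade pick the path
    with the least recently sent data."""
    order = {"GOOD": 0, "GRAY": 1, "BAD": 2}
    return min(grades, key=lambda p: (order[grades[p]], sent[p]))

grades = {"p1": classify(1.2, 0.6, 1.0, 0.5),   # both high -> BAD
          "p2": classify(0.8, 0.2, 1.0, 0.5),   # both low  -> GOOD
          "p3": classify(0.7, 0.7, 1.0, 0.5)}   # mixed     -> GRAY
best = pick_path(grades, {"p1": 0, "p2": 500, "p3": 100})
# p2 is the only GOOD path, so it wins despite having sent the most
```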
CONGA: the CONGA performs load balancing based on flow granularity in the switch. The output port of each switch keeps a dre congestion value for the link, and is updated by exponential weighted moving average for the ratio of the transmission rate of the link to the bandwidth. Specifically, first, at each switch port Si,j,kWith a link congestion value
Figure BDA0003480048030000093
And updates in each round by means of an exponentially weighted moving average:
Figure BDA0003480048030000094
Figure BDA0003480048030000095
where alpha is a parameter. The first layer switch maintains a congestion table consTable at the switch sending end i through the path j to the switch receiving end ki,j,kThe dre value fed back by the local dre and the receiving end leaf switch of the path congestion information in the congestion table is more consTablei,j,k=∑(dre_tmp1,i,j+dre_tmp2,j,k) (e.g., a three-tier switch, initial round dre _ tmpi,j,k=drei,j,k). For each flow l under each first-tier switch, the invention selects a path j such that its consTablei,j,kThe value is minimal. And updates the ratio of path j for flow l: rl,j(t) is 1. After selecting a path for flow l, the temporary dre value is updated:
Figure BDA0003480048030000096
and
Figure BDA0003480048030000097
then updating the ContsTablei,j,kAfter that, the path is selected for the next flow.
DRILL: in DRILL, each leaf switch sending end i carries out set division on each leaf switch receiving end j through the bandwidth ratio value of each hop link, and divides equivalent paths with the same ratio into the same equivalent path set
Figure BDA0003480048030000098
Each set maintains a number of bytes sent most recently. In each round of RTT, each flow selects a set of equal cost paths such that the ratio of the most recently sent bytes of each set is close to the ratio of the total bandwidth of the links of the set. In the set, setting a path proportion value according to the path bandwidth weight proportion in the set by a window of each flow:
Figure BDA0003480048030000101
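The bandwidth-proportional split within the chosen set can be sketched as (path names and bandwidths are illustrative):

```python
def drill_ratios(bandwidths):
    """Spread a flow's window over the paths of the chosen set in
    proportion to path bandwidth: R_{l,k} = B_k / sum_p B_p."""
    total = sum(bandwidths.values())
    return {k: b / total for k, b in bandwidths.items()}

r = drill_ratios({"k1": 40, "k2": 40, "k3": 20})
# -> {"k1": 0.4, "k2": 0.4, "k3": 0.2}
```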
presto: first, each flowcell size is fixed to 64 KB. First, the invention sets the length of the fixed flowcell to K. In round t, server j, several active flows l are divided into the same flowcell up to size K. If the stream is split into two. In each server i, a recently sent flowcell number is maintained for the destination h and the path j
Figure BDA0003480048030000102
Each flow cell flowcell selects a path p such that the ratio of the number of flowcells most recently transmitted for each path is close to the ratio of the bandwidth. Setting the path ratio of the flow l in the flowcell: rl,p(t) 1. If the same flow l is split into two flow units b when building the flow unit flowcell1And b2Sent to two paths p1And p2Then, then
Figure BDA0003480048030000103
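Flowcell construction and the split-flow ratio can be sketched as follows; the 64 KB cell size follows the text, while the byte counts are illustrative:

```python
def to_flowcells(flow_bytes, cell_size=64 * 1024):
    """Split a flow's outstanding bytes into fixed-size flowcells;
    the last cell may be smaller."""
    cells, left = [], flow_bytes
    while left > 0:
        cells.append(min(cell_size, left))
        left -= cells[-1]
    return cells

def split_ratios(cells):
    """If one flow lands in two flowcells b1, b2 on paths p1, p2, the
    per-path ratios follow the cell sizes: R = b_i / (b1 + b2)."""
    total = sum(cells)
    return [b / total for b in cells]

cells = to_flowcells(100 * 1024)   # a 100 KB flow -> one full cell + remainder
ratios = split_ratios(cells)
```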
The present invention is not limited to the above-described embodiments. The foregoing description of the embodiments is intended to describe and illustrate the invention and is provided for the purpose of illustration only and not for the purpose of limitation. Those skilled in the art can make many changes and modifications to the invention without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (4)

1. A method for balancing data center network load, characterized in that: the method is based on a data center network load module comprising a server, a data network load unit and a switch, wherein the data network load unit performs balanced configuration of the congested data flows at the switch and selects a target route for output:
calculating the distribution of the data flows output by the server to obtain the queue length of each output port;
calculating the actual data traffic transmitted by the port and the accumulated amount of the queue;
judging whether the data amount in the port queue exceeds a preset threshold; according to whether the queue accumulation exceeds the preset threshold and to the actual data traffic, the data network load unit selects a parallel path for each data flow in the next round; if the queue accumulation exceeds the threshold, the window value of each flow passing through the switch is halved in the next round, otherwise it is incremented by one; the first step is then repeated; wherein the preset threshold condition is:

ECN_{i,j,k}(t) = 1, if Q_{1,i,j}(t) > ECN_T or Q_{2,j,k}(t) > ECN_T; otherwise ECN_{i,j,k}(t) = 0

wherein t is the iteration round, i is the index of the access-layer switch, j is the index of the second-layer switch, k is the index of the third-layer switch, ECN_T is the ECN queue-length marking threshold in the switch, Q is the switch queue length, Q_{1,i,j}(t) denotes the queue length of port j of the i-th layer-1 switch at the beginning of round t, and Q_{2,j,k}(t) denotes the queue length of port k of the j-th layer-2 switch at the beginning of round t.
2. The method of balancing data center network loads according to claim 1, wherein: the data network load unit comprises a first data network load part and a second data network load part; the first part is arranged on the server and the second part is arranged on the switch.
3. The method of balancing data center network loads according to claim 1, wherein: the actual data traffic transmitted by the port queue is obtained by the following formulas:

A^out_{i,j,k}(t) = min( A^in_{i,j,k}(t) + Q_{i,j,k}(t), B_{i,j,k} )

a^out_{h,i,j,k}(t) =
    a^in_{h,i,j,k}(t) + q_{h,i,j,k}(t),                                              if A^in_{i,j,k}(t) + Q_{i,j,k}(t) ≤ B_{i,j,k}
    q_{h,i,j,k}(t) + (B_{i,j,k} − Q_{i,j,k}(t)) · a^in_{h,i,j,k}(t) / A^in_{i,j,k}(t),   if Q_{i,j,k}(t) ≤ B_{i,j,k} < A^in_{i,j,k}(t) + Q_{i,j,k}(t)
    B_{i,j,k} · q_{h,i,j,k}(t) / Q_{i,j,k}(t),                                        if B_{i,j,k} < Q_{i,j,k}(t)

wherein: t is the iteration round; i, j and k index the switches of the first (access), second and third layers; d_h denotes the destination address of flow h; the k-th output port of the j-th switch of layer i is denoted S_{i,j,k}; a^out_{h,i,j,k}(t) denotes the amount of data of flow h actually transmitted by switch port S_{i,j,k} into the next-hop link; A^in_{i,j,k}(t) denotes the total amount of data actually input to switch port S_{i,j,k} during round t; a^in_{h,i,j,k}(t) denotes the amount of data of flow h actually input to switch port S_{i,j,k} during round t; Q_{i,j,k}(t) denotes the accumulated queue length at S_{i,j,k} at the beginning of round t; q_{h,i,j,k}(t) denotes the accumulated length of flow h at S_{i,j,k} at the beginning of round t; and B_{i,j,k} denotes the bandwidth of the link connected at port S_{i,j,k}.
4. The method of balancing data center network loads according to claim 1, wherein: the data center network load module can model the four classical load-balancing mechanisms Hermes, CONGA, DRILL and Presto.
CN202210083868.7A 2022-01-20 Method for balancing network load of data center — Withdrawn — CN114448899A (en)

Priority Applications (1)

Application Number: CN202210083868.7A; Priority Date: 2022-01-20; Filing Date: 2022-01-20; Title: Method for balancing network load of data center

Publications (1)

Publication Number: CN114448899A; Publication Date: 2022-05-06

Family

ID=81370370

Country Status (1)

Country: CN; Link: CN114448899A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002040A (en) * 2022-05-27 2022-09-02 长沙理工大学 Load balancing method and system for sensing priority flow control based on big data
CN115396357A (en) * 2022-07-07 2022-11-25 长沙理工大学 Traffic load balancing method and system in data center network

Citations (12)

Publication number Priority date Publication date Assignee Title
CN103152281A (en) * 2013-03-05 2013-06-12 National University of Defense Technology Load-balanced scheduling method based on two-level switches
CN104883321A (en) * 2015-05-05 2015-09-02 Zhejiang University Intra-domain load balancing method based on switch load
CN105915467A (en) * 2016-05-17 2016-08-31 Tsinghua University Software-defined data center network traffic balancing method and device
CN107294865A (en) * 2017-07-31 2017-10-24 Huazhong University of Science and Technology Load balancing method for a software switch, and software switch
CN107819695A (en) * 2017-10-19 2018-03-20 Xidian University SDN-based distributed-control load balancing system and method
CN107959633A (en) * 2017-11-18 2018-04-24 Zhejiang Gongshang University Price-mechanism-based load balancing method for industrial real-time networks
CN108449269A (en) * 2018-04-12 2018-08-24 Chongqing University of Posts and Telecommunications SDN-based data center network load balancing method
CN109787913A (en) * 2019-03-15 2019-05-21 Beijing University of Technology SDN-based dynamic load balancing method for data center networks
CN110351187A (en) * 2019-08-02 2019-10-18 Central South University Adaptive load balancing method with path-switching granularity for data center networks
CN111585911A (en) * 2020-05-22 2020-08-25 Xidian University Method for balancing network traffic load of data center
CN112437020A (en) * 2020-10-30 2021-03-02 Tianjin University Data center network load balancing method based on deep reinforcement learning
CN113098789A (en) * 2021-03-26 2021-07-09 Nanjing University of Posts and Telecommunications SDN-based multipath dynamic load balancing method for data center networks


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
李文信;齐恒;徐仁海;周晓波;李克秋;: "数据中心网络流量调度的研究进展与趋势", 计算机学报, no. 04 *
沈耿彪;李清;江勇;汪漪;徐明伟;: "数据中心网络负载均衡问题研究", 软件学报, no. 07 *
蔡岳平;樊欣唯;王昌平;: "光电混合数据中心网络负载均衡流量调度机制", 计算机应用与软件, no. 08 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002040A (en) * 2022-05-27 2022-09-02 Changsha University of Science and Technology Big-data-based load balancing method and system with priority-aware flow control
CN115002040B (en) * 2022-05-27 2024-03-01 Changsha University of Science and Technology Big-data-based load balancing method and system with priority-aware flow control
CN115396357A (en) * 2022-07-07 2022-11-25 Changsha University of Science and Technology Traffic load balancing method and system in data center network
CN115396357B (en) * 2022-07-07 2023-10-20 Changsha University of Science and Technology Traffic load balancing method and system in data center network

Similar Documents

Publication Publication Date Title
US10673763B2 (en) Learning or emulation approach to traffic engineering in information-centric networks
US8279753B2 (en) Efficient determination of fast routes when voluminous data is to be sent from a single node to many destination nodes via other intermediate nodes
Han et al. Multi-path tcp: a joint congestion control and routing scheme to exploit path diversity in the internet
Bu et al. Fixed point approximations for TCP behavior in an AQM network
US8547851B1 (en) System and method for reporting traffic information for a network
US6310881B1 (en) Method and apparatus for network control
Wang et al. Adaptive path isolation for elephant and mice flows by exploiting path diversity in datacenters
CN114448899A (en) Method for balancing network load of data center
CN107135158A (en) Optimal route selection method in a kind of multi-path transmission
US7525929B2 (en) Fast simulated annealing for traffic matrix estimation
CN107689919A (en) The dynamic adjustment weight fuzzy routing method of SDN
CN105743804A (en) Data flow control method and system
Li et al. Data-driven routing optimization based on programmable data plane
Khoobbakht et al. Hybrid flow-rule placement method of proactive and reactive in SDNs
CN111901237B (en) Source routing method and system, related device and computer readable storage medium
Ye et al. Minimizing packet loss by optimizing OSPF weights using online simulation
Lin et al. Proactive multipath routing with a predictive mechanism in software-defined networks
Gadallah et al. A seven-dimensional state flow traffic modelling for multi-controller Software-Defined Networks considering multiple switches
Wang et al. TSN switch queue length prediction based on an improved LSTM network
CN114650257B (en) SDN network congestion control system and method based on RTT
Gunavathie et al. DLBA-A Dynamic Load-balancing Algorithm in Software-Defined Networking
Jouy et al. Optimal bandwidth allocation with dynamic multi-path routing for non-critical traffic in AFDX networks
CN113705826B (en) Parameter synchronous multicast method for distributed machine learning
He et al. Critical Flow Rerouting Based on Policy Gradient algorithm
CN114547832A (en) Network performance influence factor evaluation method and device based on simulation software

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220506