CN115134304A - Self-adaptive load balancing method for avoiding data packet disorder in cloud computing data center - Google Patents

Self-adaptive load balancing method for avoiding data packet disorder in cloud computing data center

Info

Publication number
CN115134304A
Authority
CN
China
Prior art keywords
path
data packet
delay
current
time
Prior art date
Legal status
Granted
Application number
CN202210740907.6A
Other languages
Chinese (zh)
Other versions
CN115134304B (en)
Inventor
胡晋彬
贺蔓
饶淑莹
王进
Current Assignee
Changsha University of Science and Technology
Original Assignee
Changsha University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Changsha University of Science and Technology
Priority to CN202210740907.6A
Publication of CN115134304A
Application granted
Publication of CN115134304B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/28 Flow control; Congestion control in relation to timing considerations
    • H04L47/283 Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a self-adaptive load balancing method for avoiding data packet disorder in a cloud computing data center, relating to the technical field of data processing. Without setting any time-interval threshold for path switching, the method adaptively decides whether to reroute according to the interval between packets of the same flow and the delay differences of parallel equal-cost paths in an RDMA network, thereby avoiding disorder: when the interval between the time the current packet arrives at the switch and the forwarding time of the previous packet of the flow to which it belongs is greater than the delay difference between the current path and another path whose delay is smaller than that of the current path, the path with the smallest delay among the qualifying paths is selected as the rerouting path of the current packet. The invention can effectively avoid disorder caused by rerouting in an RDMA network, ensure the flexibility of packet path switching, promote network load balance, reduce transmission delay and improve data transmission efficiency.

Description

Self-adaptive load balancing method for avoiding data packet disorder in cloud computing data center
Technical Field
The invention relates to the technical field of data processing, in particular to a self-adaptive load balancing method for avoiding data packet disorder in a cloud computing data center.
Background
In order to meet the low-latency, high-throughput reliable transmission requirements of online data-intensive services, distributed machine learning, and storage applications in cloud computing, and to reduce CPU utilization, Ethernet-based Remote Direct Memory Access (RDMA) is widely deployed in modern Converged Enhanced Ethernet data centers, allowing data transmission to bypass the kernel processing of the end host and markedly reducing processing latency at the end host.
However, existing load balancing solutions do not work efficiently in RDMA-deployed data centers. A load balancing mechanism that switches paths per packet is prone to disorder, i.e., a packet with a larger sequence number reaches the receiver before a packet with a smaller sequence number, so the receiver-side network card must buffer the out-of-order packets and may even overflow its buffer and drop packets. A flow-level load balancing mechanism causes no disorder, but its packets cannot switch paths flexibly, which leaves the network load unbalanced and the link utilization low. A flowlet-level path-switching mechanism avoids disorder and allows a flowlet to switch paths when the gap between flowlets exceeds a preset threshold; however, under the current RDMA transport mechanism the forwarding rate is usually rate-smoothed, so the interval between packets hardly ever reaches the configured threshold, flowlets rarely occur, and traffic cannot be rerouted flexibly, resulting in load imbalance and low link utilization. If the flowlet gap threshold is simply set to a smaller value, frequent rerouting occurs and disorder appears between flowlets.
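For context, the following minimal Python sketch (the flow key, the fixed gap threshold FLOWLET_TIMEOUT, and the random path choice are illustrative assumptions, not part of the patent) shows the flowlet-table logic such mechanisms rely on; a path switch is possible only when the inter-packet gap exceeds the preset timeout, which the rate-smoothed traffic of RDMA rarely produces:

```python
import random
import time

FLOWLET_TIMEOUT = 500e-6  # assumed flowlet gap threshold in seconds (500 us)

class FlowletTable:
    """Per-flow record of the last-seen time and the currently assigned path."""

    def __init__(self, paths):
        self.paths = paths   # candidate equal-cost paths
        self.table = {}      # flow_id -> (last_seen_time, path)

    def select_path(self, flow_id, now=None):
        now = time.monotonic() if now is None else now
        last_seen, path = self.table.get(flow_id, (None, None))
        # A new flowlet starts only if the gap since the previous packet of
        # the flow exceeds the timeout; otherwise the packet keeps its path
        # so that reordering cannot occur.
        if last_seen is None or now - last_seen > FLOWLET_TIMEOUT:
            path = random.choice(self.paths)
        self.table[flow_id] = (now, path)
        return path
```

Because RDMA NICs pace packets smoothly, the gap between consecutive packets of a flow almost never exceeds such a timeout, so in practice every packet keeps the path chosen for the first packet of the flow.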
Therefore, in a data center where RDMA is deployed, how to switch packet paths flexibly while avoiding disorder, balance the load, improve link utilization, and reduce flow completion time is an urgent problem to be solved.
Disclosure of Invention
The self-adaptive load balancing method for avoiding data packet disorder in a cloud computing data center provided by the invention can effectively solve the problem that existing load balancing mechanisms cannot simultaneously guarantee flexible packet rerouting and freedom from disorder in a data center network where RDMA is deployed.
In order to solve the above technical problems, the invention adopts the following technical solution: a self-adaptive load balancing method for avoiding data packet disorder in a cloud computing data center, comprising the following steps:
step one, initializing the base path round-trip delay RTT, the path delay update period T_th, and the start time t of the path delay update period;
step two, the switch monitors whether a new data packet arrives; if a new data packet arrives, go to step three; otherwise, continue monitoring whether a new data packet arrives;
step three, judging whether the current data packet is the first data packet of a new flow; if so, selecting the path with the smallest delay among all paths as the forwarding path; otherwise, going to step four;
step four, calculating the difference t_f between the arrival time t_c of the current data packet and the forwarding time t_e of the previous data packet of the flow to which it belongs, t_f = t_c − t_e, and calculating the delay difference t_p between the current path and each other path whose delay is smaller than that of the current path; going to step five;
step five, judging whether there exists another path such that t_f is greater than t_p; if so, rerouting the data packet, selecting any such qualifying path as the forwarding path, and going to step six; otherwise, the data packet does not switch paths, the current path is selected as the forwarding path, and then going to step six;
step six, setting the forwarding time t_e of the previous data packet of the flow of the current data packet to the forwarding time of the current data packet, and going to step two.
Further, in step one, the base path round-trip delay RTT is initially set to 50 μs, the path delay update period T_th is set to 2·RTT, and the start time t of the path delay update period is set to 0.
Further, at any time from the detection of a new data packet in step two until step six has been executed, it is judged whether the difference between the current time and the start time t of the path delay update period is greater than or equal to the path delay update period T_th; if so, the round-trip delay of each path is updated, and the start time t of the path delay update period is set to the current time.
Furthermore, when the round-trip delay of each path is updated, it is updated according to the one-way path delay carried by the data packets that probe the path delay.
Further, in step five, if there are multiple other paths such that t_f is greater than t_p, then when the data packet is rerouted, the qualifying path with the smallest delay is selected as the forwarding path, and the method goes to step six; otherwise, the data packet does not switch paths, the current path is selected as the forwarding path, and the method goes to step six.
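Written compactly (this is merely a restatement of steps four and five together with the refinement above; D_cur and D_j denote the measured delays of the current path and of an alternative path j, a shorthand used only in this paragraph): compute t_f = t_c − t_e and, for every other path j with D_j < D_cur, the delay difference t_p(j) = D_cur − D_j; the packet is rerouted to the path j* = argmin_j D_j among all j satisfying t_f > t_p(j), and stays on the current path if no such j exists.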
The invention relates to a self-adaptive load balancing method for avoiding data packet disorder in a cloud computing data center. Without setting any time-interval threshold for path switching, the method adaptively decides whether to reroute according to the packet interval of the same flow and the delay differences of parallel equal-cost paths in an RDMA (Remote Direct Memory Access) network, thereby avoiding disorder: when the time interval between the arrival of the current packet at the switch and the forwarding time of the previous packet of the flow to which it belongs is greater than the delay difference between the current path and another path whose delay is smaller than that of the current path, the path with the smallest delay among the qualifying paths is selected as the rerouting path of the current packet. The method can effectively avoid disorder caused by rerouting in an RDMA network, ensure the flexibility of packet path switching, promote network load balance, reduce transmission delay and improve data transmission efficiency.
Drawings
Fig. 1 is a flowchart of an adaptive load balancing method for avoiding data packet misordering in a cloud computing data center according to the present invention;
FIG. 2 is a topology diagram of a test scenario of an NS-3 network simulation platform according to an embodiment of the present invention;
FIG. 3 compares the out-of-order packet ratios of MP-RDMA and RDMALB in the symmetric topology and the asymmetric topology according to the embodiment of the present invention;
fig. 4 compares the average flow completion time and the 99th-percentile flow completion time of seven load balancing mechanisms under different workloads and load intensities according to the embodiment of the present invention.
Detailed Description
In order to facilitate understanding of those skilled in the art, the present invention will be further described with reference to the following examples and drawings, which are not intended to limit the present invention.
Before describing the present invention in detail, its design concept is explained. The invention proposes a self-adaptive load balancing scheme that requires no preset time-interval threshold for path switching and avoids disorder while keeping packet rerouting flexible. Specifically, when the time interval between the arrival of the packet currently reaching the switch and the forwarding time of the previous packet of the flow to which it belongs is greater than the delay difference between the current path and another path whose delay is smaller than that of the current path, the current packet may be rerouted from the current path to that lower-delay path. Meanwhile, to reduce transmission delay, the path with the smallest delay among all qualifying rerouting paths is selected as the forwarding path. For example: the previously forwarded packet of the flow to which the packet currently arriving at the switch belongs was forwarded on path P_1, and the time interval between the arrival of the current packet at the switch and the forwarding time of that previous packet is t_1. If there is another path P_x whose delay is smaller than the delay of the current path, and t_1 is greater than the delay difference between the current path and path P_x, then the current packet can be rerouted to the lower-delay path P_x and no disorder can occur, i.e., the current packet is certain to reach the receiver after the previous packet of its flow. Among all paths P_x satisfying this condition, the invention selects the one with the smallest path delay as the rerouting path of the current packet.
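As a concrete numerical illustration (the figures are hypothetical and chosen only for clarity): suppose the current path P_1 has a delay of 30 μs and an alternative path P_x has a delay of 20 μs, so the delay difference between them is 10 μs. If the previous packet of the flow was forwarded on P_1 at time 0, it reaches the receiver at about 30 μs. A packet of the same flow arriving at the switch at t_1 = 15 μs has t_1 greater than 10 μs, so it may be rerouted to P_x and arrives at about 15 + 20 = 35 μs, after the previous packet, preserving order. Had it arrived at t_1 = 8 μs, rerouting to P_x would deliver it at about 28 μs, before the previous packet, so it must remain on P_1.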
The following describes in detail a self-adaptive load balancing method for avoiding data packet disorder in a cloud computing data center according to the present invention, as shown in fig. 1, the method includes:
step one, initialization: the base path round trip delay RTT is set to 50 mus, and the path delay updating period T th Set to 2RTT, the start time t of the path delay update period is set to 0.
Step two, the switch monitors whether a new data packet arrives. If a new data packet arrives, the switch judges whether the difference between the current time and the start time t of the path delay update period is greater than or equal to the path delay update period T_th. If it is greater than or equal to T_th, the round-trip delay of each path is updated according to the one-way path delay carried by the data packets that probe the path delay, the start time t of the path delay update period is set to the current time, and the method goes to the next step; if it is less than T_th, the method goes directly to the next step. If no new data packet arrives, the switch continues to monitor whether a new data packet arrives.
Step three, judging whether the current data packet is the first data packet of a new flow; if so, the path with the smallest delay among all paths is selected as the forwarding path; otherwise, go to step four.
Step four, calculating the difference t_f between the arrival time t_c of the current data packet and the forwarding time t_e of the previous data packet of the flow to which it belongs, t_f = t_c − t_e, and calculating the delay difference t_p between the current path and each other path whose delay is smaller than that of the current path; go to step five.
Step five, judging whether there exists another path such that t_f is greater than t_p; if so, the data packet is rerouted, the path with the smallest delay among the qualifying paths is selected as the forwarding path, and the method goes to step six; otherwise, the data packet does not switch paths, the current path is selected as the forwarding path, and the method goes to step six.
Step six, the forwarding time t_e of the previous data packet of the flow of the current data packet is set to the forwarding time of the current data packet, and the method goes to step two.
It is worth noting that the check "judge whether the difference between the current time and the start time t of the path delay update period is greater than or equal to the path delay update period T_th; if so, update the round-trip delay of each path according to the one-way path delay carried by the probe data packets, set the start time t of the path delay update period to the current time, and go to the next step; otherwise go directly to the next step" may be performed at step two, when a new data packet arrives, or at any time from step three until step six has been executed.
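As an illustration only, the per-packet decision of steps two to six can be sketched in Python as follows (the AdaptiveLoadBalancer class, the probe-driven update_path_delays hook, and all variable names are assumptions made for this sketch, not part of the patent):

```python
RTT = 50e-6            # base path round-trip delay, 50 us (step one)
T_TH = 2 * RTT         # path delay update period (step one)

class AdaptiveLoadBalancer:
    """Switch-side per-packet path selection as described in steps two to six."""

    def __init__(self, path_delays):
        self.path_delays = dict(path_delays)  # path_id -> estimated delay (s)
        self.flow_last_tx = {}                # flow_id -> forwarding time of previous packet
        self.flow_path = {}                   # flow_id -> current path of the flow
        self.period_start = 0.0               # start time t of the delay update period

    def update_path_delays(self, probe_delays, now):
        # Step two refinement: every T_TH, refresh the per-path delay estimates
        # from the one-way delays carried by probe packets.
        if now - self.period_start >= T_TH:
            self.path_delays.update(probe_delays)
            self.period_start = now

    def on_packet(self, flow_id, now, probe_delays=None):
        self.update_path_delays(probe_delays or {}, now)

        if flow_id not in self.flow_last_tx:
            # Step three: the first packet of a new flow takes the minimum-delay path.
            path = min(self.path_delays, key=self.path_delays.get)
        else:
            # Step four: packet gap of the flow, and delay gaps to faster paths.
            t_f = now - self.flow_last_tx[flow_id]
            cur = self.flow_path[flow_id]
            candidates = [
                p for p, d in self.path_delays.items()
                if d < self.path_delays[cur] and t_f > self.path_delays[cur] - d
            ]
            # Step five: reroute to the minimum-delay qualifying path, else stay.
            path = min(candidates, key=self.path_delays.get) if candidates else cur

        # Step six: record this packet's forwarding time and path for the flow.
        self.flow_last_tx[flow_id] = now
        self.flow_path[flow_id] = path
        return path
```

A switch-side caller would invoke on_packet(flow_id, now) for each arriving packet, feeding probe-measured one-way delays through probe_delays so that the per-path estimates are refreshed every T_TH.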
In order to verify the effectiveness of the invention, the performance of the proposed method is tested on the NS-3 network simulation platform with the following experimental settings. As shown in fig. 2, the NS-3 simulation uses a leaf-spine network topology with 10 leaf switches and 10 spine switches, in which each leaf switch is connected to 30 servers and to the 10 spine switches through 40 Gbps links; the oversubscription ratio of the leaf switch layer is 3:1, the link delay is 5 microseconds, the switch buffer size is set to 9 MB, the PFC mechanism is enabled to guarantee lossless transmission, and the PFC threshold of each ingress port is set to 256 KB. Three typical workloads are generated in the experiments, namely web server, cache follower and data mining; the average flow sizes range from 64 KB to 7.41 MB, and flow arrival times follow a Poisson distribution. In the web server workload all flows are smaller than 1 MB, whereas in the data mining workload approximately 9% of the flows are larger than 1 MB; each flow is generated between a random source-destination host pair, the ratio of traffic within a leaf switch to traffic between leaf switches is 1:3, and the average network load increases from 0.5 to 0.8.
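For reference, the topology and switch parameters stated above can be collected as follows (a plain Python dictionary; the key names are illustrative, the values are those given in the text):

```python
ns3_setup = {
    "topology": "leaf-spine",
    "leaf_switches": 10,
    "spine_switches": 10,
    "servers_per_leaf": 30,
    "link_bandwidth": "40 Gbps",
    "leaf_layer_oversubscription": "3:1",
    "link_delay_us": 5,
    "switch_buffer": "9 MB",
    "pfc": {"enabled": True, "ingress_threshold": "256 KB"},
    "workloads": ["web server", "cache follower", "data mining"],
    "flow_arrivals": "Poisson",
    "intra_to_inter_leaf_traffic": "1:3",
    "network_load_range": (0.5, 0.8),
}
```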
First, the number of parallel paths in the symmetric topology is varied. Fig. 3(a) shows that, as the number of parallel paths increases, the RDMALB proposed by the present invention effectively avoids disorder and the proportion of out-of-order packets is 0. This is because the delay difference between a packet's new rerouting path and its current path is smaller than the interval between two consecutive packets of the same flow, which guarantees that a packet with a larger sequence number in the same flow reaches the receiver later than a packet with a smaller sequence number, i.e., packets arrive at the receiver in order. MP-RDMA, by contrast, can only bound the degree of packet disorder within a preset range and cannot avoid disorder. Next, starting from the symmetric topology, the experiment changes the default 40 Gbps bandwidth of some parallel paths to 25 Gbps to create a topology with asymmetric bandwidth. Fig. 3(b) shows that even at the highest degree of asymmetry (i.e., when the number of asymmetric paths is 4), the RDMALB proposed by the present invention still delivers packets in order, and the proportion of out-of-order packets remains 0.
Finally, the experiments measure the average flow completion time and the 99th-percentile flow completion time of the different schemes under different workloads and load intensities; the results are shown in fig. 4.
FIG. 4(a) compares the average flow completion times of seven load balancing mechanisms (i.e., ECMP, DC+ECMP, MP-RDMA, LetFlow, DC+LetFlow, PCN+LetFlow, RDMALB) under the web server workload and four load intensities (i.e., 0.5, 0.6, 0.7, 0.8);
FIG. 4(b) compares the 99th-percentile flow completion times of the seven load balancing mechanisms under the web server workload and different load intensities;
FIG. 4(c) compares the average flow completion times of the seven load balancing mechanisms under the cache follower workload and different load intensities;
FIG. 4(d) compares the 99th-percentile flow completion times of the seven load balancing mechanisms under the cache follower workload and different load intensities;
FIG. 4(e) compares the average flow completion times of the seven load balancing mechanisms under the data mining workload and different load intensities;
FIG. 4(f) compares the 99th-percentile flow completion times of the seven load balancing mechanisms under the data mining workload and different load intensities.
the RDMALB provided by the present invention obtains the best performance in all three workloads, taking the data mining workload in fig. 4(e) as an example, under the condition that the load intensity is 0.8, the RDMALB reduces the average flow completion time by 65%, 58%, 76%, 70%, 18% and 38% respectively compared with ECMP, LetFlow, DC + ECMP (i.e., DCQCN + ECMP), DC + LetFlow (i.e., DCQCN + LetFlow), PCN + LetFlow and MP-RDMA. This is because the RDMALB adaptively and flexibly reroutes the packets out of order according to the interval and path delay difference of the packets, so as to ensure high link utilization and load balancing. Since ECMP transfers packets at a stream level, all packets of a stream are transferred over one path, and the link utilization is low, resulting in a large completion time of the stream. The LetFlow can reroute the packet only when a flow occurs, and the flow rarely occurs in an RDMA network environment, so the packet cannot flexibly switch paths, resulting in an increase in flow completion time. Even though these load balancing mechanisms work with existing congestion control mechanisms, the performance is worse than RDMALB. The specific reason is that the DC rate convergence process is slow, and the flow completion times of DCN + ECMP and DC + LetFlow are respectively greater than ECMP and LetFlow after the sending rate is restored to the line speed. MP-RDMA balances traffic in a congestion-aware manner, with performance significantly better than ECMP and LetFlow. Although in PCN + LetFlow, PCN can recognize the congestion flow and limit the congestion flow rate, significantly reducing PFC triggers, better than MP-RDMA, LetFlow is difficult to work with, unbalanced load, and still worse than RDMALB in performance.
Therefore, without setting any time-interval threshold for path switching, the self-adaptive load balancing method for avoiding data packet disorder in a cloud computing data center according to the present invention can adaptively decide whether to reroute based on the packet interval of the same flow and the delay differences of parallel equal-cost paths in an RDMA network, thereby avoiding disorder, preserving the flexibility of packet path switching, promoting network load balance, reducing transmission delay and improving data transmission efficiency.
The above embodiments are preferred implementations of the present invention, and the present invention can be implemented in other ways without departing from the spirit of the present invention.
Some of the drawings and descriptions of the present invention have been simplified to help those skilled in the art understand the improvements over the prior art, and other elements have been omitted for clarity; those skilled in the art should appreciate that such omitted elements may also constitute subject matter of the present invention.

Claims (5)

1. A self-adaptive load balancing method for avoiding data packet disorder in a cloud computing data center, characterized by comprising the following steps:
step one, initializing the base path round-trip delay RTT, the path delay update period T_th, and the start time t of the path delay update period;
step two, the switch monitors whether a new data packet arrives; if a new data packet arrives, go to step three; otherwise, continue monitoring whether a new data packet arrives;
step three, judging whether the current data packet is the first data packet of a new flow; if so, selecting the path with the smallest delay among all paths as the forwarding path; otherwise, going to step four;
step four, calculating the difference t_f between the arrival time t_c of the current data packet and the forwarding time t_e of the previous data packet of the flow to which it belongs, t_f = t_c − t_e, and calculating the delay difference t_p between the current path and each other path whose delay is smaller than that of the current path; going to step five;
step five, judging whether there exists another path such that t_f is greater than t_p; if so, rerouting the data packet, selecting any such qualifying path as the forwarding path, and going to step six; otherwise, the data packet does not switch paths, the current path is selected as the forwarding path, and then going to step six;
step six, setting the forwarding time t_e of the previous data packet of the flow of the current data packet to the forwarding time of the current data packet, and going to step two.
2. The self-adaptive load balancing method for avoiding data packet disorder in the cloud computing data center according to claim 1, wherein: in step one, at initialization the base path round-trip delay RTT is set to 50 μs, the path delay update period T_th is set to 2·RTT, and the start time t of the path delay update period is set to 0.
3. The self-adaptive load balancing method for avoiding data packet disorder in the cloud computing data center according to claim 2, wherein: at any time from the detection of a new data packet in step two until step six has been executed, it is judged whether the difference between the current time and the start time t of the path delay update period is greater than or equal to the path delay update period T_th; if so, the round-trip delay of each path is updated, and the start time t of the path delay update period is set to the current time.
4. The self-adaptive load balancing method for avoiding data packet disorder in the cloud computing data center according to claim 3, wherein: the round-trip delay of each path is updated according to the one-way path delay carried by the data packets that probe the path delay.
5. The self-adaptive load balancing method for avoiding data packet disorder in the cloud computing data center according to claim 4, wherein: in step five, if there are multiple other paths such that t_f is greater than t_p, then when the data packet is rerouted, the qualifying path with the smallest delay is selected as the forwarding path, and the method goes to step six; otherwise, the data packet does not switch paths, the current path is selected as the forwarding path, and the method goes to step six.
CN202210740907.6A 2022-06-27 2022-06-27 Self-adaptive load balancing method for avoiding data packet disorder of cloud computing data center Active CN115134304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210740907.6A CN115134304B (en) 2022-06-27 2022-06-27 Self-adaptive load balancing method for avoiding data packet disorder of cloud computing data center

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210740907.6A CN115134304B (en) 2022-06-27 2022-06-27 Self-adaptive load balancing method for avoiding data packet disorder of cloud computing data center

Publications (2)

Publication Number Publication Date
CN115134304A true CN115134304A (en) 2022-09-30
CN115134304B CN115134304B (en) 2023-10-03

Family

ID=83379145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210740907.6A Active CN115134304B (en) 2022-06-27 2022-06-27 Self-adaptive load balancing method for avoiding data packet disorder of cloud computing data center

Country Status (1)

Country Link
CN (1) CN115134304B (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101499957A (en) * 2008-01-29 2009-08-05 中国电信股份有限公司 Multipath load balance implementing method and data forwarding apparatus
US20140362705A1 (en) * 2013-06-07 2014-12-11 The Florida International University Board Of Trustees Load-balancing algorithms for data center networks
US20180139139A1 (en) * 2015-07-16 2018-05-17 Huawei Technologies Co., Ltd. Clos Network Load Balancing Method and Apparatus
US20170093732A1 (en) * 2015-09-29 2017-03-30 Huawei Technologies Co., Ltd. Packet Mis-Ordering Prevention in Source Routing Hitless Reroute Using Inter-Packet Delay and Precompensation
CN105873096A (en) * 2016-03-24 2016-08-17 重庆邮电大学 Optimization method of efficient throughput capacity of multipath parallel transmission system
US20180077064A1 (en) * 2016-09-12 2018-03-15 Zixiong Wang Methods and systems for data center load balancing
CN107426102A (en) * 2017-07-26 2017-12-01 桂林电子科技大学 Multipath parallel transmission dynamic decision method based on path quality
CN110351196A (en) * 2018-04-02 2019-10-18 华中科技大学 Load-balancing method and system based on accurate congestion feedback in cloud data center
CN110351187A (en) * 2019-08-02 2019-10-18 中南大学 Data center network Road diameter switches the adaptive load-balancing method of granularity
CN110932814A (en) * 2019-12-05 2020-03-27 北京邮电大学 Software-defined network time service safety protection method, device and system
CN111416777A (en) * 2020-03-26 2020-07-14 中南大学 Load balancing method and system based on path delay detection
CN113098789A (en) * 2021-03-26 2021-07-09 南京邮电大学 SDN-based data center network multipath dynamic load balancing method
CN113810405A (en) * 2021-09-15 2021-12-17 佳缘科技股份有限公司 SDN network-based path jump dynamic defense system and method
CN114666278A (en) * 2022-05-25 2022-06-24 湖南工商大学 Data center load balancing method and system based on global dynamic flow segmentation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XINGLONG DIAO: "Flex: A flowlet-level load balancing based on load-adaptive timeout in DCN", Future Generation Computer Systems *
沈耿彪; 李清; 江勇; 汪漪; 徐明伟: "Research on Load Balancing in Data Center Networks" (数据中心网络负载均衡问题研究), Journal of Software (软件学报), no. 07 *
阳旺; 李贺武; 吴茜; 吴建平: "A Multipath ACK Path Selection Algorithm Based on Minimum Feedback Delay" (基于最小反馈时延的多径应答路径选择算法), Journal of Tsinghua University (Science and Technology) (清华大学学报(自然科学版)), no. 07 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117478615A (en) * 2023-12-28 2024-01-30 贵州大学 Method for solving burst disorder problem in reliable transmission of deterministic network
CN117478615B (en) * 2023-12-28 2024-02-27 贵州大学 Reliable transmission method in deterministic network

Also Published As

Publication number Publication date
CN115134304B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
He et al. Presto: Edge-based load balancing for fast datacenter networks
US10951733B2 (en) Route selection method and system, network acceleration node, and network acceleration system
CN107360092B (en) System and method for balancing load in data network
US10534601B1 (en) In-service software upgrade of virtual router with reduced packet loss
Perry et al. Fastpass: A centralized" zero-queue" datacenter network
US9065721B2 (en) Dynamic network load rebalancing
CN108306777B (en) SDN controller-based virtual gateway active/standby switching method and device
CN111600806A (en) Load balancing method and device, front-end scheduling server, storage medium and equipment
WO2020192358A1 (en) Packet forwarding method and network device
Li et al. MPTCP incast in data center networks
CN109088822B (en) Data flow forwarding method, device, system, computer equipment and storage medium
EP3949280A1 (en) Congestion avoidance in a slice-based network
Hu et al. TLB: Traffic-aware load balancing with adaptive granularity in data center networks
CN113746751A (en) Communication method and device
Huang et al. QDAPS: Queueing delay aware packet spraying for load balancing in data center
Aghdai et al. In-network congestion-aware load balancing at transport layer
CN115134304A (en) Self-adaptive load balancing method for avoiding data packet disorder in cloud computing data center
Chakraborty et al. A low-latency multipath routing without elephant flow detection for data centers
Wang et al. A-ECN minimizing queue length for datacenter networks
Nithin et al. Efficient load balancing for multicast traffic in data center networks using SDN
Sreekumari et al. An early congestion feedback and rate adjustment schemes for many-to-one communication in cloud-based data center networks
Li et al. VMS: Traffic balancing based on virtual switches in datacenter networks
KR101014977B1 (en) Load balancing method in the function of link aggregration
Xia et al. Multipath-aware TCP for data center traffic load-balancing
Wen et al. OmniFlow: Coupling load balancing with flow control in datacenter networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant