CN117176658A - Flowlet load balancing method for receiver-driven transport protocols in data center networks - Google Patents


Info

Publication number: CN117176658A
Application number: CN202311166446.7A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: message, driving, switch, flowlet, data
Inventors: 刘森, 徐扬, 闫威
Original and current assignee: Fudan University
Application filed by Fudan University
Priority to CN202311166446.7A
Publication of CN117176658A

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a flowlet load balancing method for receiver-driven transport protocols in data center networks. The method exploits the fact that drive messages in a receiver-driven transport protocol still form queues at the switch: by applying the flowlet mechanism to the transmission of drive messages, the receiver-driven protocol achieves multipath transmission, and each data message is then forwarded along the path taken by its corresponding drive message. This avoids packet loss caused by out-of-order data packets and improves link utilization, and hence overall network performance, in the data center. The method thus allows data flows of a receiver-driven transport protocol to work correctly under the flowlet mechanism. Furthermore, the method dynamically adjusts the flowlet threshold according to information in the data center network, ensuring that messages can be rerouted in time before congestion occurs, thereby improving the performance of the data center network.

Description

Flowlet load balancing method for receiver-driven transport protocols in data center networks
Technical Field
The invention belongs to the technical field of data center network load balancing, and particularly relates to a flowlet load balancing method for receiver-driven transport protocols in data center networks.
Background
In recent years, with the rapid growth of the Internet and the emergence of new services such as cloud computing, the concept of the data center network has been proposed. Because a data center network must provide high bandwidth and low latency, an effective congestion control mechanism and load balancing scheme need to be designed so that the various new services deployed on it can respond quickly and their quality of service is maintained.
In conventional congestion control, sender-driven transport protocols have long been the main subject of academic discussion. In a sender-driven transport protocol, scheduling decisions are made mainly at the sender, with the switch and the receiver playing only auxiliary roles. Specifically, the sender adjusts the sending rate of data messages using only partial network information (such as packet loss and retransmission timeouts) and cannot sense network congestion in advance. Moreover, a sender-driven protocol needs several round-trip times to probe the link bandwidth and converge to a suitable sending rate, which leads to excessive delay and low link utilization. To meet the low-latency communication requirements of distributed applications in data center networks, researchers proposed the concept of receiver-driven transport protocols. In a receiver-driven transport protocol, the scheduling of data messages is transferred from the sender to the receiver: the sender transmits data messages according to feedback from the receiver. Because the sender no longer sends data blindly, excessive queue build-up at the switch is avoided. Receiver-driven transport protocols can therefore better satisfy the basic requirements of low delay, high throughput, and fast convergence.
Besides an appropriate congestion control mechanism, the load balancing scheme used by a data center network also affects flow transmission performance. Load balancing at packet granularity, at flow granularity, and at flowlet granularity are the three schemes commonly used in data center networks. Since the packet is the basic unit of network transmission, packet-granularity load balancing is in theory the finest-grained scheme, but it cannot guarantee in-order arrival and therefore causes severe packet reordering. Flow-granularity load balancing selects a forwarding path according to a routing policy and forwards all packets of a flow along that path, which avoids reordering; however, it cannot adjust the forwarding policy dynamically, and its granularity is too coarse, easily leading to poor link utilization. Flowlet-granularity load balancing, proposed around 2015, can reroute messages promptly when the data center network becomes congested, avoiding large-scale reordering while still achieving a fine-grained balancing effect.
Because receiver-driven transport protocols have the potential for wide deployment, combining them with flowlet load balancing is a new research direction. Directly combining a flowlet load balancing scheme with a receiver-driven transport protocol, however, raises several problems. On the one hand, a receiver-driven transport protocol degrades a flowlet-based scheme to the performance of flow-granularity load balancing, so a flow cannot be rerouted in time when it encounters link congestion. On the other hand, existing flowlet schemes use a fixed threshold and cannot adapt to the dynamically changing traffic of a data center network; as the number of flows grows, a fixed threshold can cause serious congestion and packet loss. In short, receiver-driven transport protocols in existing data center networks cannot work properly under a flowlet load balancing mechanism, so a reasonable and efficient flowlet mechanism suited to receiver-driven transport protocols is needed to further improve data center network performance.
Disclosure of Invention
The invention was made to solve the above problems, and aims to provide a flowlet load balancing method for receiver-driven transport protocols that enables a receiver-driven transport protocol to achieve multipath transmission, thereby fully utilizing the links and improving data center network performance. The invention adopts the following technical scheme:
the invention provides a flowlet load balancing method for a data center network receiving end drive transmission protocol, which is characterized by comprising the following steps: step S1: performing initialization setting, maintaining a driving message time stamp table and a driving message path information table at a downlink switch of a receiving end host, and maintaining a data message path information back display table at a downlink switch of a transmitting end host; step S2: when the switch receives a new message, analyzing the head of the new message and judging the message type of the new message, wherein the message type comprises a data message and a driving message; step S3: when judging the data message in the step S2, the switch inquires a data message path information echo list and forwards the data message according to the path information in the list; step S4: when judging that the driving message is the driving message in the step S2, the switch inquires a time stamp table of the driving message and judges whether a data stream to which the driving message belongs is a new stream, if so, the driving message is forwarded based on a preset routing strategy selection path, and if not, the driving message is forwarded based on the path strategy and a preset flowlet mechanism selection path; step S5: and the switch displays the contents in the drive message path information table back to the data message path information back display table, and is used for indicating the transmission of the data message.
The flowlet load balancing method for receiver-driven transport protocols provided by the invention may further have the technical feature that step S4 comprises the following sub-steps. Step S4-1: the switch updates the flowlet threshold according to network information, the network information comprising the number of data flows passing through the switch, the residual link bandwidth, and the queue length. Step S4-2: the switch queries the drive message timestamp table and determines whether the data flow to which the drive message belongs is absent from the table. Step S4-3: when step S4-2 determines yes, the switch selects a link according to the routing policy, forwards the drive message, and updates the drive message timestamp table and the drive message path information table accordingly. Step S4-4: when step S4-2 determines no, the switch determines, based on the flowlet mechanism, whether a new flowlet is generated. Step S4-5: when step S4-4 determines yes, the switch selects a link based on the routing policy, forwards the drive message, and updates the drive message timestamp table and the drive message path information table accordingly. Step S4-6: when step S4-4 determines no, the switch queries the drive message path information table and forwards the drive message according to the corresponding path information in the table.
The flowlet load balancing method for receiver-driven transport protocols provided by the invention may further have the technical feature that step S4 comprises the following sub-steps. Step S4-1: the switch queries the drive message timestamp table and determines whether the data flow to which the drive message belongs is absent from the table. Step S4-2: when step S4-1 determines yes, the switch updates the flowlet threshold according to network information, the network information comprising the number of data flows passing through the switch, the residual link bandwidth, and the queue length. Step S4-3: the switch selects a link according to the routing policy, forwards the drive message, and updates the drive message timestamp table and the drive message path information table accordingly. Step S4-4: when step S4-1 determines no, the switch determines whether a random update event is triggered. Step S4-5: when step S4-4 determines yes, the switch updates the flowlet threshold according to the network information. Step S4-6: after step S4-5, or when step S4-4 determines no, the switch determines, based on the flowlet mechanism, whether a new flowlet is generated. Step S4-7: when step S4-6 determines yes, the switch selects a link based on the routing policy, forwards the drive message, and updates the drive message timestamp table and the drive message path information table accordingly. Step S4-8: when step S4-6 determines no, the switch queries the drive message path information table and forwards the drive message according to the corresponding path information in the table.
The flowlet load balancing method for receiver-driven transport protocols provided by the invention may further have the technical feature that the switch determines whether a new flowlet is generated based on the flowlet mechanism as follows: the switch queries the drive message timestamp table and, from the timestamp information in the table, determines whether the time interval between two adjacent drive messages is larger than a predetermined time interval threshold.
The flowlet load balancing method for receiver-driven transport protocols provided by the invention may further have the technical feature that, in step S2, the switch determines the message type by parsing a field of the new message's header.
The actions and effects of the invention
According to the flowlet load balancing method for receiver-driven transport protocols in data center networks, the fact that drive messages in a receiver-driven transport protocol still form queues at the switch is exploited: the flowlet mechanism is applied to the transmission of drive messages, achieving multipath transmission for the receiver-driven protocol, and data messages are transmitted along the paths of their corresponding drive messages. This avoids packet loss caused by out-of-order data packets, improves link utilization in the data center network, and thus improves network performance. The method of the invention thereby ensures that data flows of a receiver-driven transport protocol work correctly under the flowlet mechanism.
Drawings
FIG. 1 is a flow chart of the flowlet load balancing method for a receiver-driven transport protocol according to an embodiment of the present invention;
FIG. 2 is a classification diagram of receiver-driven transport protocols according to the first embodiment of the present invention;
FIG. 3 is a flowchart of step S4 in a first embodiment of the present invention;
FIG. 4 is a flowchart of step S4 in the second embodiment of the present invention;
FIG. 5 is a topology diagram of a test bed of a symmetrical topology in comparative example one of the present invention;
FIG. 6 is a graph of average flow completion time versus four load balancing schemes in comparative example one of the present invention;
FIG. 7 is a diagram showing a comparison of CPU overhead for four load balancing schemes in comparative example one of the present invention;
FIG. 8 is a topology diagram of a test bed of an asymmetric topology in comparative example two of the present invention;
FIG. 9 is a graph of average flow completion time versus four load balancing schemes in comparative example two of the present invention;
FIG. 10 is a topology diagram of the simulation test of the leaf-spine topology in comparative example three of the present invention;
FIG. 11 is a packet loss ratio comparison chart of the four load balancing schemes in comparative example three of the present invention;
FIG. 12 is a graph of average flow completion time versus four load balancing schemes in comparative example three of the present invention;
FIG. 13 is a graph comparing average flow completion times for four load balancing schemes in each application mode in comparative example three of the present invention;
FIG. 14 is a graph comparing the average throughput of long flows for the four load balancing schemes in each application mode in comparative example three of the present invention;
FIG. 15 is a packet loss ratio comparison chart of the four load balancing schemes in comparative example four of the present invention;
FIG. 16 is a graph of average flow completion time versus four load balancing schemes in comparative example four of the present invention;
FIG. 17 is a topology diagram of a simulation test of the fat-tree topology in comparative example four of the present invention.
Detailed Description
In order to make the technical means, creative features, objects, and effects of the invention easy to understand, the flowlet load balancing method for receiver-driven transport protocols in data center networks is specifically described below with reference to the embodiments and the accompanying drawings.
Example 1
Fig. 1 is a flow chart of the flowlet load balancing method for a receiver-driven transport protocol in this embodiment.
As shown in fig. 1, this embodiment provides a flowlet load balancing method for receiver-driven transport protocols in data center networks. To facilitate understanding of the method, receiver-driven transport protocols in data center networks are first briefly described.
Receiver-driven transport protocols in a data center network fall into two categories: end-to-end scheduling protocols in receiver-driven mode, and switch-assisted scheduling protocols in receiver-driven mode. The former are transport protocols in which the receiver regulates the message sending rate; the latter are transport protocols in which, on top of the basic receiver-driven design, the switch participates in part of the message scheduling work.
Fig. 2 is a classification diagram of receiver-driven transport protocols in this embodiment.
As shown in fig. 2, common receiver-driven transport protocols are distinguished along several dimensions: the type of congestion control, the congestion control signal, where the driving takes place, and whether a lossless environment is supported.
pHost, a model of early receiver-driven transport protocols, makes scheduling decisions at both the sender and the receiver. Before data transmission starts, the sender issues a Request To Send (RTS) message carrying the flow's scheduling information; the receiver makes a decision according to the arrival order of the RTS messages and the flow sizes, and sends drive messages to a chosen sender, realizing the basic idea of simple receiver-driven transmission.
NDP fully considers congestion inside the data center network, remedying pHost's limitation of considering congestion only at the network edge. During the first round-trip time (RTT), NDP sends data packets at the link rate; after the first RTT, the sender stops sending at the link rate and switches to receiver-driven operation.
RCC and RRCC creatively apply receiver-driven transport in an RDMA environment, adjusting the sending rate of data messages based on RTT and ECN (Explicit Congestion Notification) respectively, providing an early form of receiver-driven transport in lossless network environments.
AMRT introduces the concept of Anti-ECN: it explicitly carries an under-utilization signal in the ECN bits available in the IP header, detecting the interval between messages and increasing the sending rate when the interval is large.
RPO uses multi-priority queues, probing with low-priority messages so that data messages keep being sent without increasing the queuing delay of high-priority messages. AMRT and RPO both address the problem of insufficient link bandwidth utilization in receiver-driven transport protocols.
ExpressPass enforces the principle that data messages follow the same path as their drive messages, so that the data sending rate matches the link bandwidth, effectively addressing in-network congestion.
Polo introduces high-priority data packets; the receiver dynamically adjusts the number of drive messages sent in each time interval according to the congestion experienced by the high-priority packets, ensuring the sending rate is neither too low nor overflowing. Polo also designs a multi-packet recovery mechanism, used to recover lost data messages as quickly as possible, and a flow pause mechanism, used to handle the massive flow concurrency of Incast scenarios.
As described above, existing receiver-driven transport protocols cannot work properly under a flowlet load balancing mechanism. To solve this problem, this embodiment provides a flowlet load balancing method for receiver-driven transport protocols.
As shown in fig. 1, the method specifically includes the following steps:
Step S1: perform initialization, maintaining a drive message timestamp table and a drive message path information table at the downlink switch (or router) of the receiving host, and a data message path echo table at the downlink switch of the sending host.
The drive message timestamp table records information such as the timestamps of drive messages passing through the switch; from this record the switch can judge whether a new flowlet has been generated. The drive message path information table records the number of each drive message's transmission path; from this record the switch can judge whether a path change has occurred.
The data message path echo table directs the transmission path of data messages; in the table, each data message's path corresponds to the transmission path of its drive messages.
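The three tables of step S1 can be sketched as follows. This is a minimal illustrative model, not the patent's implementation: keying each table by a flow identifier is an assumption, as the patent only states that entries are maintained per data flow.

```python
from dataclasses import dataclass, field

@dataclass
class ReceiverSideSwitchState:
    """State at the downlink switch of the receiving host (step S1)."""
    drive_ts: dict = field(default_factory=dict)    # flow_id -> last drive-message timestamp
    drive_path: dict = field(default_factory=dict)  # flow_id -> path number chosen for the flow

@dataclass
class SenderSideSwitchState:
    """State at the downlink switch of the sending host (step S1)."""
    data_path_echo: dict = field(default_factory=dict)  # flow_id -> path number echoed from drive_path
```

Per step S5, entries of `drive_path` at the receiver side are echoed into `data_path_echo` at the sender side, which then directs data messages.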
Step S2: when the switch receives a new message, it parses the message header and determines the message type, the types being data message and drive message.
The switch determines the message type by parsing a field of the message header. In a receiver-driven transport protocol, drive messages can form a queue at the switch.
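A minimal sketch of the header parse in step S2. The position and encoding of the type field are assumptions; the patent only states that a header field distinguishes the two message types.

```python
import struct

# Assumed encoding: the message type is carried in the first header byte.
DATA_MSG, DRIVE_MSG = 0, 1

def classify(packet: bytes) -> str:
    """Return 'data' or 'drive' by parsing the (assumed) type field."""
    (msg_type,) = struct.unpack_from("!B", packet, 0)
    return "data" if msg_type == DATA_MSG else "drive"
```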
Step S3: when step S2 identifies a data message, the switch queries the data message path echo table and forwards the data message according to the corresponding path number in the table.
Step S4: when step S2 identifies a drive message, the switch queries the drive message timestamp table and determines whether the data flow to which the drive message belongs is a new flow. If yes, it selects a link based on a predetermined routing policy and forwards the drive message; if no, it selects a link based on the routing policy and the flowlet mechanism and forwards the drive message.
Fig. 3 is a flowchart of step S4 in the present embodiment.
As shown in fig. 3, step S4 specifically includes the following sub-steps:
Step S4-1: the switch updates the flowlet threshold according to information such as the number of flows passing through the switch, the residual link bandwidth, and the queue length.
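The patent does not specify the threshold formula. The sketch below is one plausible monotone rule consistent with the inputs listed above: more competing flows, less residual bandwidth, or a longer queue shrink the flowlet timeout so that flows reroute sooner. All names and constants are assumptions.

```python
def update_flowlet_threshold(num_flows: int, residual_bw: float,
                             link_bw: float, queue_len: int,
                             base_us: float = 500.0) -> float:
    """Illustrative dynamic flowlet threshold (microseconds).

    Shrinks the base timeout as congestion pressure rises; the functional
    form and the 50 us floor are assumed, not taken from the patent.
    """
    load = 1.0 - residual_bw / link_bw            # fraction of link in use
    pressure = 1.0 + num_flows * load + queue_len / 100.0
    return max(50.0, base_us / pressure)          # clamp to an assumed floor
```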
Step S4-2: the switch queries the driving message time stamp table and judges whether the data stream to which the driving message belongs is not recorded in the driving message time stamp table, namely whether the data stream to which the driving message belongs is a new stream.
Step S4-3: and when the step S4-2 judges that the driving message is a new flow, the switch selects a link to forward the driving message according to a preset routing strategy, and correspondingly updates a driving message time stamp table and a driving message path information table.
In this embodiment, the switch randomly selects a link for forwarding.
Step S4-4: when the judgment in step S4-2 is no, that is, the data flow exists, the switch judges whether or not a new flowlet is generated based on a predetermined flowlet mechanism.
Specifically, the switch queries the drive message timestamp table and, from the timestamp information in the table, determines whether the time interval between two adjacent drive messages exceeds the predetermined time interval threshold, i.e., whether the gap is large enough to start a new flowlet.
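The flowlet test just described can be sketched as follows (names are illustrative; times are assumed to be in microseconds):

```python
def is_new_flowlet(drive_ts: dict, flow_id, now_us: float,
                   threshold_us: float) -> bool:
    """A new flowlet starts when the gap between consecutive drive
    messages of a flow exceeds the (dynamic) threshold. A flow not yet
    in the table is treated as new. Also records this message's timestamp."""
    last = drive_ts.get(flow_id)
    drive_ts[flow_id] = now_us
    return last is None or (now_us - last) > threshold_us
```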
Step S4-5: and when the step S4-4 judges that the driving message is yes, namely when a new flowlet is generated, the switch selects a link to forward the driving message based on a preset routing strategy and correspondingly updates a driving message time stamp table and a driving message path information table.
That is, the drive message is forwarded on the newly switched path.
Step S4-6: and when the step S4-4 judges that the new flowlet is not generated, the switch inquires a path information table of the driving message and forwards the driving message according to the corresponding path number in the table.
Step S5: the switch echoes the contents of the drive message path information table into the data message path echo table, which directs the transmission of data messages.
That is, in the method of this embodiment, data messages are forwarded directly according to the corresponding routing information in the data message path echo table, while drive messages are combined with the flowlet load balancing mechanism to achieve load balancing. Moreover, the flowlet threshold is updated for every drive message.
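Putting steps S2 through S5 together, one forwarding decision of this embodiment might look as follows. This is a minimal sketch: the table layout, flow identifiers, and string message types are assumptions, the random link choice reflects the routing policy of this embodiment, and the per-message threshold is assumed to be computed beforehand per step S4-1.

```python
import random

def handle_packet(tables, pkt_type, flow_id, now_us, paths, threshold_us):
    """One forwarding decision per packet (steps S2-S5, illustrative).

    `tables` is a dict with 'drive_ts', 'drive_path', 'data_path_echo';
    `paths` is the list of candidate uplinks; returns the chosen path."""
    if pkt_type == "data":                               # step S3: follow the echo table
        return tables["data_path_echo"][flow_id]
    # drive message (step S4)
    last = tables["drive_ts"].get(flow_id)
    tables["drive_ts"][flow_id] = now_us
    if last is None or (now_us - last) > threshold_us:   # new flow or new flowlet
        path = random.choice(paths)                      # routing policy: random link
        tables["drive_path"][flow_id] = path
    else:
        path = tables["drive_path"][flow_id]             # stick to the recorded path
    tables["data_path_echo"][flow_id] = path             # step S5: echo for data messages
    return path
```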
In this embodiment, the portions not described in detail are known in the art.
Actions and effects of Example 1
According to the flowlet load balancing method for receiver-driven transport protocols provided by this embodiment, the fact that drive messages in a receiver-driven transport protocol still form queues at the switch is exploited: the flowlet mechanism is applied to the transmission of drive messages, achieving multipath transmission for the receiver-driven protocol, and data messages are transmitted along the paths of their corresponding drive messages. This avoids packet loss caused by out-of-order data packets, improves link utilization in the data center network, and improves network performance. The method of this embodiment thus ensures that data flows of a receiver-driven transport protocol work correctly under the flowlet mechanism; furthermore, it dynamically adjusts the flowlet threshold according to information in the data center network, ensuring that messages can be rerouted in time before congestion occurs, thereby improving the performance of the data center network.
The above examples are only intended to illustrate specific embodiments of the invention, and the invention is not limited to the scope described by them.
Example 2
This embodiment provides a flowlet load balancing method for receiver-driven transport protocols in data center networks; it differs from Example 1 in the specific sub-steps of step S4.
Fig. 4 is a flowchart of step S4 in the present embodiment.
As shown in fig. 4, step S4 of the present embodiment specifically includes the following sub-steps:
Step S4-1: the switch queries the drive message timestamp table and determines whether the data flow to which the drive message belongs is absent from the table, i.e., whether it is a new flow.
Step S4-2: and when the judgment of the step S4-1 is yes, namely, when the flow is a new flow, the switch updates the flowlet threshold according to the number of flows passing through the switch, the residual bandwidth of the link, the length of the queue and other information.
Step S4-3: and the switch selects a link to forward the driving message according to a preset routing strategy, and correspondingly updates a driving message time stamp table and a driving message path information table.
In this embodiment, the switch randomly selects a link for forwarding.
Step S4-4: when the step S4-1 is no, that is, when the data stream exists, the switch determines whether to trigger a random update event.
Step S4-5: when the judgment in the step S4-4 is yes, namely when a random update event is triggered, the switch updates the flowlet threshold according to the information such as the number of flows passing through the switch, the residual bandwidth of the link, the length of the queue and the like.
Step S4-6: after step S4-5 or when the decision of step S4-4 is negative, the switch decides whether or not to generate a new flowlet based on a predetermined flowlet mechanism.
Step S4-7: and when the step S4-6 judges that the driving message is yes, namely when a new flowlet is generated, the switch selects a link to forward the driving message based on a preset routing strategy and correspondingly updates a driving message time stamp table and a driving message path information table.
Step S4-8: and when the step S4-6 judges that the new flowlet is not generated, the switch inquires a path information table of the driving message and forwards the driving message according to the corresponding path number in the table.
That is, in the method of this embodiment, the flowlet threshold is recalculated only when a new flow joins or when a random update event is triggered.
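A minimal sketch of this lazy update policy. The trigger probability and all names are assumptions; the patent does not specify how the random update event is generated.

```python
import random

def maybe_update_threshold(current_thr: float, is_new_flow: bool,
                           compute_thr, update_prob: float = 0.01) -> float:
    """Recompute the flowlet threshold only for a new flow or when a
    random update event fires; otherwise keep the cached value, saving
    per-message computation on the switch. `compute_thr` stands for the
    (unspecified) threshold calculation from network information."""
    if is_new_flow or random.random() < update_prob:
        return compute_thr()
    return current_thr
```

The design trade-off is the one the embodiment states: the threshold tracks network conditions slightly less closely than recomputing per drive message, in exchange for lower algorithmic overhead at the switch.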
In this embodiment, other steps are the same as those in the first embodiment, and thus the description will not be repeated.
Actions and effects of Example 2
According to the flowlet load balancing method for receiver-driven transport protocols provided by this embodiment, in addition to the actions and effects of Example 1, the flowlet threshold is not recalculated for every drive message but only when the corresponding data flow is new or a random update event is triggered. Compared with the method of Example 1, this reduces the algorithmic overhead and the CPU cost on the switch.
<Comparative Example One>
In this comparative example, four data center network load balancing schemes were tested on a test bed and the results were comparatively analyzed. The four schemes are LetFlow, CONGA, LetFlow+RDAF, and CONGA+RDAF. Among them, LetFlow and CONGA are two typical prior-art load balancing schemes at flowlet granularity. LetFlow+RDAF deploys the method of embodiment two on top of the existing LetFlow scheme, and CONGA+RDAF deploys the method of embodiment two on top of the existing CONGA scheme.
In this comparative example, the receiving-end driving transmission protocol used is ExpressPass.
Fig. 5 is a topology diagram of the symmetric-topology test bed in this comparative example.
As shown in fig. 5, the test bed in this comparative example has a symmetric topology. In the test bed, the bandwidth of each server network card is 1 Gbps; the per-hop round-trip delay without queuing is 100 microseconds, the message size is set to 1.5 KB, and the timeout is set to 200 microseconds. All servers run the Ubuntu 20.04 operating system, and several virtual machines running Ubuntu 18.04 are built on the servers.
During testing, the transmitting end sends multiple data streams simultaneously, the number of data streams is gradually increased from 8 to 28, and test data such as the completion time of each data stream and the CPU resource consumption of the switch are recorded.
Fig. 6 is a comparison graph of the average flow completion time of the four load balancing schemes in this comparative example.
As shown in fig. 6, in the symmetric topology, the LetFlow+RDAF scheme, which deploys the method of embodiment two, significantly reduces the average flow completion time compared with the existing LetFlow scheme. Likewise, the average flow completion time of the CONGA+RDAF scheme is significantly lower than that of the CONGA scheme. In the symmetric topology, the LetFlow+RDAF and CONGA+RDAF schemes reduce the average flow completion time by up to 39.8% compared with the LetFlow and CONGA schemes, respectively.
This is because the method of embodiment two dynamically adjusts the flowlet threshold according to the current number of data flows, the remaining bandwidth, and the queue length, thereby making full use of the path bandwidth.
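The patent does not disclose the exact threshold formula, so the following is only one plausible instantiation of "adjust the flowlet threshold according to the number of flows, the remaining bandwidth, and the queue length": the inter-packet gap must at least cover the time needed to drain the current queue, and grows when many flows share the link. `base_us`, `cap_us`, and the load scaling factor are invented for illustration:

```python
def flowlet_threshold(n_flows, residual_bw_bps, queue_len_bytes,
                      base_us=50.0, cap_us=2000.0):
    """Illustrative threshold rule in microseconds (not from the patent):
    leave enough gap for the current queue to drain, and widen the gap
    as more flows share the link; cap it so rerouting still happens."""
    # Time to drain the queued bytes over the remaining bandwidth.
    drain_us = queue_len_bytes * 8 / max(residual_bw_bps, 1) * 1e6
    # Assumed scaling with flow count; the patent gives no formula.
    load_factor = 1.0 + n_flows / 10.0
    return min(cap_us, base_us * load_factor + drain_us)
```

Under this rule a longer queue or lower residual bandwidth raises the threshold (fewer, safer reroutes), while an empty queue keeps it near the base value so flowlets can switch paths quickly.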
Fig. 7 is a comparison graph of the CPU overhead of the four load balancing schemes in this comparative example.
As shown in fig. 7, compared with the existing LetFlow scheme, the LetFlow+RDAF scheme deploying the method of embodiment two increases the CPU resource consumption of the switch; likewise, the CONGA+RDAF scheme consumes more switch CPU than the CONGA scheme. The method of embodiment two introduces at most 31.25% additional CPU overhead. Since the method recalculates the flowlet threshold only when a new flow joins or when a random update event is triggered, the additional CPU overhead of recalculating the threshold remains small and is acceptable.
<Comparative Example Two>
In this comparative example, four data center network load balancing schemes were tested on a test bed and the results were comparatively analyzed. The four load balancing schemes and the receiving-end driving transmission protocol used are the same as those in comparative example one.
Fig. 8 is a topology diagram of the asymmetric-topology test bed in this comparative example.
As shown in fig. 8, the test bed in this comparative example has an asymmetric topology. In the test bed, the bandwidths of the server network cards are 1 Gbps and 2.5 Gbps respectively; the other parameter settings and the test method are the same as those in comparative example one.
Fig. 9 is a comparison graph of the average flow completion time of the four load balancing schemes in this comparative example.
As shown in fig. 9, in the asymmetric topology, the average flow completion time of the LetFlow+RDAF scheme is significantly reduced compared with the LetFlow scheme. Likewise, the average flow completion time of the CONGA+RDAF scheme is significantly lower than that of the CONGA scheme. In the asymmetric topology, the LetFlow+RDAF and CONGA+RDAF schemes reduce the average flow completion time by up to 49.7% compared with the LetFlow and CONGA schemes, respectively.
In this comparative example, the CPU overhead is substantially the same as that in the first comparative example, and will not be described again.
<Comparative Example Three>
In this comparative example, four data center network load balancing schemes were tested in a simulation environment and the results were comparatively analyzed. The four load balancing schemes and the receiving-end driving transmission protocol used are the same as those in comparative example one.
In this comparative example, the simulation environment was built on the NS2.35 network simulation platform. The NS network simulator is a multi-protocol network simulation software package publicly released on the Internet and widely used by network researchers; NS2.35 is one of its versions.
Comparative examples one and two provide local performance tests based on a test bed. To evaluate the effectiveness of the method of embodiment two more comprehensively, a simulation environment is further used to test the performance of a network adopting the method of embodiment two under more complex topologies, for the following metrics: packet loss count, average flow completion time, and average throughput.
FIG. 10 is a topology diagram of the leaf-spine topology used in the simulation tests of this comparative example.
As shown in fig. 10, this comparative example adopts a symmetric Leaf-Spine topology in which each top-of-rack switch (Leaf switch) connected to hosts is connected to all the core switches (Spine switches) above it in the figure. The link bandwidth from each Leaf switch to each Spine switch is 20 Gbps. The switch buffer size is set to 250 messages, the round-trip delay without queuing is 100 microseconds, and the message size is 1.5 KB.
During testing, the transmitting end sends multiple data streams simultaneously, the number of data streams is gradually increased from 8 to 128, and test data such as the packet loss count, the average flow completion time, and the average throughput are recorded.
Fig. 11 is a comparison graph of the packet loss of the four load balancing schemes in this comparative example.
As shown in fig. 11, in the symmetric topology, the packet loss counts of LetFlow and CONGA increase sharply as the number of concurrent data streams grows from 8 to 128. This is because in these two schemes the data messages and the driving messages are not transmitted along consistent paths, so the transmission rate of the data messages may exceed the actual bandwidth of the link, causing serious packet loss. In contrast, the packet loss of the LetFlow+RDAF and CONGA+RDAF schemes is significantly lower. In the symmetric topology, combining the method of embodiment two reduces the packet loss count by 17.42%.
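The path-consistency mechanism credited here for the lower packet loss (recording the driving-message path at the receiving-end downlink switch and echoing it to the transmitting side so the data messages follow the same path) can be sketched as follows; the table and method names are illustrative, not from the patent:

```python
class PathEcho:
    """Sketch of the path-echo idea in LetFlow+RDAF / CONGA+RDAF:
    the switch records the path taken by each driving message and echoes
    it so data messages of the same flow reuse that path."""

    def __init__(self):
        self.drive_path = {}   # driving-message path information table
        self.echo_table = {}   # data-message path information echo table

    def on_drive_msg(self, flow_id, chosen_path):
        # Record the path selected for the driving message.
        self.drive_path[flow_id] = chosen_path

    def echo(self, flow_id):
        # Echo the driving-message path into the data-message echo table.
        self.echo_table[flow_id] = self.drive_path[flow_id]

    def forward_data(self, flow_id):
        # Data messages follow the echoed driving-message path, so their
        # rate never exceeds the bandwidth the driving path was granted.
        return self.echo_table[flow_id]
```

Because every data message takes exactly the path its driving message probed, the sender cannot push data onto a link the driving messages never traversed, which is the property the reduced packet loss is attributed to.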
Fig. 12 is a comparison graph of the average flow completion time of the four load balancing schemes in this comparative example.
As shown in fig. 12, the average flow completion time of the LetFlow+RDAF and CONGA+RDAF schemes is reduced compared with LetFlow and CONGA. The reduction is not very pronounced because packet misordering is unlikely to occur in a symmetric topology.
In addition, in this comparative example, the effects of the four load balancing schemes are also compared under different data center application workloads: Data Mining, Web Search, Cache Follower, and Web Server. In this test, the data stream sizes obey the distribution given by each workload.
Fig. 13 is a comparison graph of the average flow completion time of the four load balancing schemes under each workload in this comparative example. Fig. 14 is a comparison graph of the average long-flow throughput of the four load balancing schemes under each workload in this comparative example.
As shown in fig. 13 and 14, regardless of the workload, the LetFlow+RDAF and CONGA+RDAF schemes outperform the LetFlow and CONGA schemes in both average flow completion time and average long-flow throughput.
<Comparative Example Four>
In this comparative example, four data center network load balancing schemes were tested in a simulation environment and the results were comparatively analyzed. The four load balancing schemes and the receiving-end driving transmission protocol used are the same as those in comparative example one.
In this comparative example, the simulation environment and network topology are the same as those in comparative example three, except that an asymmetric topology is adopted: the link bandwidths from the Leaf switches to the Spine switches are 20 Gbps and 40 Gbps respectively. The other parameters are the same as in comparative example three.
Fig. 15 is a comparison graph of the packet loss of the four load balancing schemes in this comparative example.
As shown in fig. 15, in the asymmetric topology, the packet loss counts of LetFlow and CONGA increase sharply as the number of concurrent data streams grows from 8 to 128, while the packet loss of the LetFlow+RDAF and CONGA+RDAF schemes is significantly lower; the reduction is even larger than in the symmetric topology of comparative example three.
Fig. 16 is a comparison graph of the average flow completion time of the four load balancing schemes in this comparative example.
As shown in fig. 16, in the asymmetric topology, the average flow completion time of the LetFlow+RDAF and CONGA+RDAF schemes is significantly reduced compared with LetFlow and CONGA, and the advantage of the two schemes combined with the method of embodiment two becomes more pronounced as the number of concurrent data streams increases.
FIG. 17 is a topology diagram of the fat-tree topology used in the simulation tests.
In addition, as shown in fig. 17, the tests of comparative examples three and four were also run on symmetric and asymmetric fat-tree topologies, with all parameters other than the topology unchanged; the results were approximately the same and are not described again.

Claims (5)

1. A flowlet load balancing method for a data center network receiving-end driving transmission protocol, used for load balancing the data streams of a data center network, characterized by comprising the following steps:
step S1: performing initialization setting, maintaining a driving-message timestamp table and a driving-message path information table at the downlink switch of the receiving-end host, and maintaining a data-message path information echo table at the downlink switch of the transmitting-end host;
step S2: when the switch receives a new message, parsing the header of the new message and determining its message type, the message types comprising data message and driving message;
step S3: when the message is determined in step S2 to be a data message, the switch queries the data-message path information echo table and forwards the data message according to the path information in the table;
step S4: when the message is determined in step S2 to be a driving message, the switch queries the driving-message timestamp table and determines whether the data stream to which the driving message belongs is a new flow; if so, a path is selected to forward the driving message based on a preset routing strategy, and if not, a path is selected to forward the driving message based on the routing strategy and a preset flowlet mechanism;
step S5: the switch echoes the contents of the driving-message path information table into the data-message path information echo table, for indicating the transmission of the data messages.
2. The flowlet load balancing method for a data center network receiving-end driving transmission protocol according to claim 1, characterized in that:
wherein step S4 comprises the sub-steps of:
step S4-1: the switch updates the flowlet threshold according to network information, the network information comprising the number of data flows passing through the switch, the remaining link bandwidth, and the queue length;
step S4-2: the switch queries the driving-message timestamp table and determines whether the data stream to which the driving message belongs is absent from the driving-message timestamp table;
step S4-3: when the determination in step S4-2 is yes, the switch selects a link according to the routing strategy to forward the driving message, and correspondingly updates the driving-message timestamp table and the driving-message path information table;
step S4-4: when the determination in step S4-2 is no, the switch determines, based on the flowlet mechanism, whether a new flowlet is generated;
step S4-5: when the determination in step S4-4 is yes, the switch selects a link based on the routing strategy to forward the driving message, and correspondingly updates the driving-message timestamp table and the driving-message path information table;
step S4-6: when the determination in step S4-4 is no, the switch queries the driving-message path information table and forwards the driving message according to the corresponding path information in the table.
3. The flowlet load balancing method for a data center network receiving-end driving transmission protocol according to claim 1, characterized in that:
wherein step S4 comprises the sub-steps of:
step S4-1: the switch queries the driving-message timestamp table and determines whether the data stream to which the driving message belongs is absent from the driving-message timestamp table;
step S4-2: when the determination in step S4-1 is yes, the switch updates the flowlet threshold according to network information, the network information comprising the number of data flows passing through the switch, the remaining link bandwidth, and the queue length;
step S4-3: the switch selects a link according to the routing strategy to forward the driving message, and correspondingly updates the driving-message timestamp table and the driving-message path information table;
step S4-4: when the determination in step S4-1 is no, the switch determines whether a random update event is triggered;
step S4-5: when the determination in step S4-4 is yes, the switch updates the flowlet threshold according to the network information;
step S4-6: after step S4-5, or when the determination in step S4-4 is no, the switch determines, based on the flowlet mechanism, whether a new flowlet is generated;
step S4-7: when the determination in step S4-6 is yes, the switch selects a link based on the routing strategy to forward the driving message, and correspondingly updates the driving-message timestamp table and the driving-message path information table;
step S4-8: when the determination in step S4-6 is no, the switch queries the driving-message path information table and forwards the driving message according to the corresponding path information in the table.
4. The flowlet load balancing method for a data center network receiving-end driving transmission protocol according to any one of claims 1 to 3, characterized in that:
the switch determines, based on the flowlet mechanism, whether a new flowlet is generated, specifically:
the switch queries the driving-message timestamp table and determines, according to the timestamp information in the table, whether the time interval between two adjacent driving messages is greater than a preset time interval threshold.
5. The flowlet load balancing method for a data center network receiving-end driving transmission protocol according to any one of claims 1 to 3, characterized in that:
in step S2, the switch parses a field of the header of the new message to determine the message type.
CN202311166446.7A 2023-09-11 2023-09-11 Flowlet load balancing method for data center network receiving end driving transmission protocol Pending CN117176658A (en)

Publications (1)

Publication Number Publication Date
CN117176658A true CN117176658A (en) 2023-12-05



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination