CN114666278A - Data center load balancing method and system based on global dynamic flow segmentation - Google Patents

Data center load balancing method and system based on global dynamic flow segmentation

Info

Publication number
CN114666278A
Authority
CN
China
Prior art keywords
queuing
time
data packet
switch
network delay
Prior art date
Legal status
Granted
Application number
CN202210574372.XA
Other languages
Chinese (zh)
Other versions
CN114666278B (en)
Inventor
史庆宇
李晓翠
张新玉
Current Assignee
Hunan University of Technology
Original Assignee
Hunan University of Technology
Priority date
Filing date
Publication date
Application filed by Hunan University of Technology filed Critical Hunan University of Technology
Priority to CN202210574372.XA priority Critical patent/CN114666278B/en
Publication of CN114666278A publication Critical patent/CN114666278A/en
Application granted granted Critical
Publication of CN114666278B publication Critical patent/CN114666278B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/12 - Avoiding congestion; Recovering from congestion
    • H04L47/125 - Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 - Reducing energy consumption in communication networks
    • Y02D30/50 - Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a data center load balancing method and system based on global dynamic flow segmentation, which comprises the following steps: constructing a short-term network delay prediction model; when a data packet arrives at a source switch from a sending end, segmenting all flows and selecting a target path; inserting the local queuing times of the source switch and the intermediate switches into the tail of the data packet; when the data packet reaches the destination switch, acquiring the local queuing times, accumulating all the queuing times, calculating the average queuing time, acquiring the last average queuing time from the queuing time table, calculating the queuing gradient, calculating the difference between the dequeue time of the data packet and the recording time of the source switch to obtain the network delay of the current path, and storing the network delay of the current path into the corresponding record table of the destination switch; and when a reverse data packet returns from the destination switch to the source switch, storing the path information carried in the packet tail into the record table of the source switch, so that the source switch performs dynamic flow segmentation based on the network delay and queuing gradient to realize load balancing.

Description

Data center load balancing method and system based on global dynamic flow segmentation
Technical Field
The invention relates to the technical field of data center load balancing, in particular to a data center load balancing method and system based on global dynamic flow segmentation.
Background
In recent years, with the rapid development of technologies such as cloud computing, distributed storage and big data, data centers serve as the underlying infrastructure providing services for massive applications, including delay-sensitive services such as web search, online recommendation systems and instant messaging, and computation-intensive services such as high-performance computing and data analysis. To provide satisfactory quality of service to users, the internal network transmission performance of the data center is critical. Data center network topologies generally adopt a CLOS structure, in which multiple available network links exist between servers; transmitting data in parallel over these links improves data transmission efficiency and reduces the processing time of distributed data center applications. With the rapid improvement of the read-write performance of data center storage systems and the continuous growth of application performance requirements, if network transmission performance remains unchanged, the network becomes the system performance bottleneck, reducing application service quality and data center revenue. For example, Google reports that the performance demand on its data center networks doubles every 12 to 15 months. In response, data center providers continually upgrade network hardware, using high-bandwidth (10 Gbps, 100 Gbps and beyond), microsecond-level low-latency links to increase transmission rates. In addition, exploiting traffic characteristics such as dynamics and burstiness, researchers have proposed optimized network transmission control protocols and scheduling algorithms to improve transmission efficiency. However, these schemes cannot solve the problem of traffic imbalance under multi-path transmission in data center networks; studies show that the utilization of core data center network links is usually below 25%, so designing an efficient traffic load balancing mechanism is of great importance.
To exploit bursty traffic, existing load balancing mechanisms set a static or dynamic flow timeout threshold to segment flows. A static flow timeout threshold, however, cannot adapt to changes in burst traffic: if it is too large, the segmentation granularity becomes too coarse and suitable scheduling opportunities are missed; if it is too small, the granularity becomes too fine and severe TCP reordering results. Current flow segmentation schemes based on a dynamic flow timeout threshold either adjust the threshold using local switch load information, which makes it difficult to detect global traffic bursts, or adjust it using metrics such as global network delay probing and full-link load intensity, where the probed data are difficult to feed back in time and micro-bursts make the data inaccurate. In short, a mechanism that accurately calculates the flow timeout threshold based on the global network state is still lacking, which leads to inaccurate traffic segmentation and degrades load balancing performance.
Disclosure of Invention
The invention provides a data center load balancing method and system based on global dynamic traffic segmentation, and aims to solve the problem of inaccurate traffic segmentation of the existing load balancing method, further improve network transmission performance, greatly improve link bandwidth utilization rate and reduce transmission delay.
In order to achieve the above object, the present invention provides a data center load balancing method based on global dynamic traffic segmentation, including:
step 1, constructing a short-term network delay prediction model based on global network conditions;
step 2, when the data packet arrives at the source switch from the sending end, all the passing flows are segmented and a target path is selected, and the local queuing time is inserted into the tail of the data packet;
step 3, when the data packet reaches the intermediate switch, inserting the local queuing time into the tail part of the data packet;
step 4, when the data packet reaches the destination switch, obtaining the local queuing times in the data packet, accumulating the queuing times of all switches to obtain the instantaneous queuing time of the current path, calculating the average queuing time, taking the last average queuing time of the current path out of the queuing time table, and calculating the queuing gradient $\nabla Q$; obtaining the network delay of the current path by calculating the difference between the dequeue time of the data packet and the recording time of the source switch, and storing the network delay of the current path into the network delay record table of the destination switch;
and step 5, when a reverse data packet returns from the destination switch to the source switch, selecting the path information of the most recently updated entry at the destination switch, taking out the queuing gradient and writing the path information into the data packet, and taking the carried path information out of the packet tail and storing it into the record table of the source switch, so that the source switch performs dynamic flow segmentation based on the globally recorded network delay and queuing gradient.
Wherein the short-term network delay prediction model is

$$D(t+\Delta t) = D(t) + \frac{\nabla Q \cdot \Delta t}{R}$$

where $D(t)$ and $D(t+\Delta t)$ are the network delays at time $t$ and time $t+\Delta t$ respectively, $R$ is the switch port forwarding rate, and $\nabla Q$ is the queuing gradient value.
Wherein, step 2 includes: calculating the short-term network delays of different paths based on the short-term network delay prediction model to obtain the target path with the minimum short-term network delay, calculating the short-term network delay difference between the target path and the current transmission path as the flow timeout threshold, determining whether to switch the transmission path to the target path by judging whether the inter-arrival time of data packets is greater than the flow timeout threshold, and inserting the path number, the time of entering the switch and the local queuing time of the data packet into the tail of the data packet by using the INT technology.
Wherein, step 3 inserts the path number, the time of entering the intermediate switch and the local queuing time of the data packet into the tail of the data packet by using the INT technology.
Wherein, the queuing times experienced by all the switches in step 4 include the local queuing times inserted into the packet tail by the source switch and the intermediate switches, together with the time at which the data packet entered the source switch.
Wherein the average queuing time is

$$\bar{Q}(t) = (1-\omega)\,\bar{Q}(t-1) + \omega\, q(t)$$

where $q(t)$ is the monitored instantaneous queue length and $\omega$ is the weight of the instantaneous queue length in the average queue length, and the average queuing time is updated into the queuing time table;
wherein the queuing gradient
Figure 984828DEST_PATH_IMAGE001
Is composed of
Figure 772655DEST_PATH_IMAGE011
Wherein the content of the first and second substances,
Figure 143594DEST_PATH_IMAGE012
and
Figure 702882DEST_PATH_IMAGE013
are respectively time of day
Figure 404778DEST_PATH_IMAGE014
And time of day
Figure 793034DEST_PATH_IMAGE015
Average queuing length of data packets, said queuing gradient
Figure 893845DEST_PATH_IMAGE001
And storing the queuing gradient table of the target switch.
The invention also provides a data center load balancing system based on global dynamic flow segmentation, which is deployed in a switch and comprises a flow segmentation module and an information collection module, wherein the switch is divided into a source switch, an intermediate switch and a target switch according to the transmission direction of a data packet;
the data packet arrives at a source switch from a sending end, a target path is segmented and selected through a flow segmentation module, and local queuing time is inserted into the tail of the data packet; and sending the data packet to an intermediate switch, inserting local queuing time into the tail part of the data packet by the intermediate switch through an information collection module, and storing the average queuing time, the queuing gradient and the network delay into a corresponding queuing time table, a corresponding queuing gradient table and a corresponding network delay record table in the destination switch through the information collection module.
The flow segmentation module is used for calculating the short-term network delay of different paths based on the short-term network delay prediction model to obtain the target path with the minimum short-term network delay, calculating the short-term network delay difference between the target path and the current transmission path as the flow timeout threshold, and determining whether to switch the transmission path to the target path by judging whether the inter-arrival time of data packets is greater than the flow timeout threshold.
The information collection module is used, in the intermediate switch, for inserting the local queuing time into the tail of the data packet; and, in the destination switch, for obtaining the local queuing times, accumulating the queuing times of all switches to obtain the instantaneous queuing time of the current path, calculating the average queuing time, taking the last average queuing time of the current path from the queuing time table, calculating the queuing gradient $\nabla Q$, obtaining the network delay of the current path by calculating the difference between the dequeue time of the data packet and the recording time of the source switch, and storing the network delay into the network delay record table of the destination switch.
The scheme of the invention has the following beneficial effects:
according to the method, the accuracy of monitoring the path network delay at the source switch end is improved by constructing a short-term network delay prediction model, and a dynamic flow segmentation method is designed by utilizing the predictable short-term network delay, so that a better data center network load balancing method is finally realized; further improve network transmission performance, greatly improve link bandwidth utilization rate and reduce transmission delay.
Other advantages of the present invention will be described in detail in the detailed description that follows.
Drawings
FIG. 1 is a schematic diagram of network data transmission according to an embodiment of the present invention;
FIG. 2 is a block diagram of a data center load balancing system according to an embodiment of the present invention;
FIG. 3 is a graph comparing the average total traffic completion time based on web search load according to an embodiment of the present invention;
FIG. 4 is a comparison test chart of the short flow average completion time based on web search load according to an embodiment of the present invention;
FIG. 5 is a comparison test chart of the short flow tail delay based on web search load according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted" and "connected" are to be understood broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; and it may be a direct connection, an indirect connection through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Aiming at the existing problems, the invention provides a data center load balancing method and system based on global dynamic flow segmentation.
The embodiment of the invention provides a data center load balancing method based on global dynamic flow segmentation, which comprises the following steps:
step 1, constructing a short-term network delay prediction model based on the global network condition.
Step 2, when a data packet arrives at the source switch from the sending end, the source switch queries its network delay record table src-latency(pathId, latency), where pathId is the number of the transmission path and latency is the network delay of that path, and its queuing gradient record table src-gradient(pathId, gradient), where gradient is the queuing gradient of that path; the short-term network delays of different paths are calculated based on the short-term network delay prediction model to obtain the target path with the minimum short-term network delay, the short-term network delay difference between the target path and the current transmission path is calculated as the flow timeout threshold, whether to switch the transmission path to the target path is determined by judging whether the inter-arrival time of data packets is greater than the flow timeout threshold, and the local queuing time is inserted into the tail of the data packet.
In particular, when a data packet arrives at the source switch from the sending end at time $t$, the queuing length of each switch it flows through is obtained. In order to eliminate the jitter influence of the network queuing length, the final network queuing length $\bar{Q}(t)$ is calculated based on an Exponentially Weighted Moving Average (EWMA) algorithm, namely:

$$\bar{Q}(t) = (1-\omega)\,\bar{Q}(t-1) + \omega\, q(t) \qquad (1)$$

where $q(t)$ is the monitored instantaneous queue length and $\omega$ is the weight of the instantaneous queue length in the average queue length.
The queuing gradient (i.e., the slope of the queuing length of data packets over time) $\nabla Q$ is then calculated. At time $t$, the queuing gradient $\nabla Q$ can be expressed as:

$$\nabla Q = \frac{\bar{Q}(t_2) - \bar{Q}(t_1)}{t_2 - t_1} \qquad (2)$$

where $\bar{Q}(t_1)$ and $\bar{Q}(t_2)$ are the average queuing lengths of the data packets at time $t_1$ and time $t_2$ respectively.
Using the INT (In-band Network Telemetry) technology, the path number, the updated gradient value $\nabla Q$ and the network delay $D(t)$ are inserted into a data packet or ACK packet travelling towards the source switch, so that they are delivered to the source switch and recorded.
For a data packet arriving at the source switch at a subsequent time $t+\Delta t$, the predicted value of the network queuing length can be obtained:

$$\hat{Q}(t+\Delta t) = \bar{Q}(t) + \nabla Q \cdot \Delta t \qquad (3)$$

and the corresponding queuing delay is obtained by dividing this queue length by $R$, where $R$ is the switch port forwarding rate.
The network delay can be divided into queuing delay and other delays, and for different data packets the other delays can be regarded as essentially the same, so that:

$$D(t+\Delta t) - D(t) = \frac{\hat{Q}(t+\Delta t) - \bar{Q}(t)}{R} \qquad (4)$$

where $D(t)$ and $D(t+\Delta t)$ are the network delays at time $t$ and time $t+\Delta t$ respectively.
Calculating the short-term network delay according to formulas (3) and (4), the short-term network delay prediction model is obtained as follows:

$$D(t+\Delta t) = D(t) + \frac{\nabla Q \cdot \Delta t}{R} \qquad (5)$$
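Purely as an illustration of how formulas (1) to (5) fit together, the following Python sketch implements the EWMA queue average, the queuing gradient and the delay prediction; the function names, the weight value, the prediction horizon and the example numbers are assumptions for illustration and are not prescribed by the invention.

```python
# Minimal sketch of the short-term network delay prediction model (formulas (1)-(5)).
# All names and parameter values are illustrative, not mandated by the patent.

def ewma_queue_length(prev_avg: float, instant_q: float, weight: float = 0.2) -> float:
    """Formula (1): EWMA of the monitored instantaneous queue length."""
    return (1.0 - weight) * prev_avg + weight * instant_q

def queuing_gradient(q_avg_t1: float, q_avg_t2: float, t1: float, t2: float) -> float:
    """Formula (2): slope of the average queue length over time."""
    return (q_avg_t2 - q_avg_t1) / (t2 - t1)

def predict_queue_length(q_avg_now: float, gradient: float, dt: float) -> float:
    """Formula (3): linear extrapolation of the queue length dt seconds ahead."""
    return q_avg_now + gradient * dt

def predict_delay(delay_now: float, gradient: float, dt: float, port_rate: float) -> float:
    """Formula (5): short-term network delay prediction.

    Only the queuing component changes between packets, so the delay shift is
    the predicted queue-length change divided by the port forwarding rate R.
    """
    return delay_now + gradient * dt / port_rate

if __name__ == "__main__":
    # Toy numbers: queue lengths in bytes, port rate in bytes/s (10 Gbps), times in seconds.
    q_avg = ewma_queue_length(prev_avg=30_000, instant_q=42_000)
    grad = queuing_gradient(q_avg_t1=28_000, q_avg_t2=q_avg, t1=0.000, t2=0.001)
    print(predict_queue_length(q_avg, grad, dt=0.0005))
    print(predict_delay(delay_now=120e-6, gradient=grad, dt=0.0005, port_rate=1.25e9))
```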
the local queuing time experienced by the data packet will be inserted into the end of the data packet using INT techniques.
This embodiment uses the existing power-of-two-choices mechanism to avoid different senders selecting the same switching path at the same time: first, a small number of switching paths are selected at random and the one with the smallest predicted network delay among them is chosen; this candidate is then compared with the first few paths obtained by sorting the short-term network delays recorded last time from small to large, and finally the path with the minimum short-term network delay among these candidates is selected and recorded as the target path. The short-term network delay difference between the target path and the current path, namely the flow timeout threshold, is calculated; if the gap between the arrival times of two consecutive data packets of the flow is greater than the flow timeout threshold, the transmission path is switched to the path selected this time.
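The path-selection and flowlet-switching logic described above can be sketched as follows; the table layouts (src_latency, src_gradient), the candidate-set sizes k and m, and the helper names are illustrative assumptions rather than details fixed by the invention.

```python
import random
import time

# Illustrative source-switch state (hypothetical layout of the record tables).
src_latency = {}    # pathId -> last recorded network delay (seconds)
src_gradient = {}   # pathId -> last recorded queuing gradient
current_path = {}   # flowId -> (pathId, last_packet_arrival_time)

def predicted_delay(path_id: int, dt: float, port_rate: float) -> float:
    """Short-term delay prediction (formula (5)) for one candidate path."""
    return src_latency[path_id] + src_gradient.get(path_id, 0.0) * dt / port_rate

def choose_path(dt: float, port_rate: float, k: int = 2, m: int = 2) -> int:
    """Power-of-two-choices style selection over predicted short-term delays."""
    paths = list(src_latency.keys())
    k = min(k, len(paths))
    random_pick = min(random.sample(paths, k),
                      key=lambda p: predicted_delay(p, dt, port_rate))
    best_recorded = sorted(paths, key=lambda p: src_latency[p])[:m]
    candidates = set(best_recorded) | {random_pick}
    return min(candidates, key=lambda p: predicted_delay(p, dt, port_rate))

def on_packet(flow_id: int, dt: float, port_rate: float) -> int:
    """Switch the flow's path only when the packet gap exceeds the flow timeout threshold."""
    now = time.monotonic()
    target = choose_path(dt, port_rate)
    if flow_id not in current_path:
        current_path[flow_id] = (target, now)
        return target
    cur, last_seen = current_path[flow_id]
    # Flow timeout threshold: predicted short-term delay gap between current and target path.
    threshold = abs(predicted_delay(cur, dt, port_rate) - predicted_delay(target, dt, port_rate))
    if now - last_seen > threshold:
        cur = target  # gap large enough: safe to reroute this flowlet
    current_path[flow_id] = (cur, now)
    return cur
```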
Step 3, when the data packet reaches an intermediate switch, the path number, the time of entering the switch and the local queuing time of the data packet are inserted into the tail of the data packet by using the INT technology.
Step 4, when the data packet reaches the destination switch, the local queuing times in the data packet are obtained, the queuing times of all switches are accumulated to obtain the instantaneous queuing time of the current path, the average queuing time is calculated according to formula (1), the last average queuing time of the current path is taken out of the queuing time table, the queuing gradient $\nabla Q$ is calculated by formula (2), the network delay of the current path is obtained by calculating the difference between the dequeue time of the data packet and the recording time of the source switch, and the network delay is stored into the network delay record table of the destination switch.
The queuing times experienced by all switches include the local queuing times inserted into the data packet by the source switch and the intermediate switches, together with the time at which the data packet entered the source switch.
Step 5, when a reverse data packet returns from the destination switch to the source switch, the path information of the most recently updated entry is selected at the destination switch, the queuing gradient is taken out, and the path information is written into the data packet; the source switch takes the carried path information out of the packet tail and stores it into its record tables, where the path information includes the path number, the path network delay and the queuing gradient, so that the source switch performs dynamic flow segmentation based on the globally recorded network delay and queuing gradient.
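As a rough illustration of the bookkeeping in steps 4 and 5, the sketch below processes one packet's INT trailer at the destination switch and feeds the resulting path information back to the source switch on a reverse packet; the trailer layout, the class and field names, and the EWMA weight are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class IntRecord:
    """One INT trailer entry appended by a switch (layout assumed for illustration)."""
    path_id: int
    enter_time: float      # time the packet entered that switch
    local_queuing: float   # local queuing time at that switch

@dataclass
class DestinationSwitch:
    queuing_time: dict = field(default_factory=dict)  # pathId -> last average queuing time
    gradient: dict = field(default_factory=dict)      # pathId -> queuing gradient
    delay: dict = field(default_factory=dict)         # pathId -> path network delay
    last_update: dict = field(default_factory=dict)   # pathId -> time of last record

    def on_packet(self, trailer: list, dequeue_time: float, weight: float = 0.2) -> None:
        path = trailer[0].path_id
        instant = sum(r.local_queuing for r in trailer)            # instantaneous queuing time of the path
        prev_avg = self.queuing_time.get(path, instant)
        avg = (1.0 - weight) * prev_avg + weight * instant         # formula (1)
        dt = dequeue_time - self.last_update.get(path, dequeue_time)
        self.gradient[path] = (avg - prev_avg) / dt if dt > 0 else 0.0   # formula (2)
        self.queuing_time[path] = avg
        self.last_update[path] = dequeue_time
        self.delay[path] = dequeue_time - trailer[0].enter_time    # dequeue time minus source record time

    def piggyback(self, path_id: int) -> tuple:
        """Path information written into a reverse packet: (pathId, delay, gradient)."""
        return path_id, self.delay[path_id], self.gradient[path_id]

def source_update(src_latency: dict, src_gradient: dict, info: tuple) -> None:
    """The source switch stores the carried path information into its record tables."""
    path_id, path_delay, path_gradient = info
    src_latency[path_id] = path_delay
    src_gradient[path_id] = path_gradient
```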
The embodiment of the invention can utilize global network queuing information to predict the short-term network delay and realize accurate calculation of the flow timeout threshold based on the global network state, thereby improving the accuracy of flow segmentation, further improving load balancing performance, improving link bandwidth utilization and reducing flow transmission time.
The invention also provides a data center load balancing system based on the global dynamic flow segmentation.
As shown in fig. 1 and 2, an embodiment of the present invention provides a data center load balancing system based on global dynamic traffic segmentation, which is deployed in the switches of a Leaf-Spine network topology commonly used in data centers and comprises a flow segmentation module and an information collection module; for each data packet entering the network, the switches can be classified as a source switch, intermediate switches and a destination switch according to the packet transmission direction and their network locations.
The data packet is transmitted from left to right, the data packet arrives at a source switch from a sending end, a target path is segmented and selected through a flow segmentation module, local queuing time is inserted into the tail of the data packet and is sent to an intermediate switch, the intermediate switch inserts the local queuing time into the tail of the data packet through an information collection module, the data packet arrives at a target switch, and average queuing time, queuing gradient and network delay are stored into a corresponding queuing time table, queuing gradient table and network delay record table in the target switch through an information collection module.
The flow segmentation module is used for calculating the short-term network delay of different paths based on the short-term network delay prediction model to obtain the target path with the minimum short-term network delay, calculating the short-term network delay difference between the target path and the current transmission path as the flow timeout threshold, and determining whether to switch the transmission path to the target path by judging whether the inter-arrival time of data packets is greater than the flow timeout threshold.
The information collection module is used, in the intermediate switch, for inserting the local queuing time into the tail of the data packet; and, in the destination switch, for obtaining the local queuing times, accumulating the queuing times of all switches to obtain the instantaneous queuing time of the current path, calculating the average queuing time, taking the last average queuing time of the current path from the queuing time table, calculating the queuing gradient $\nabla Q$, obtaining the network delay of the current path by calculating the difference between the dequeue time of the data packet and the recording time of the source switch, and storing the network delay into the network delay record table of the destination switch.
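At the system level, the role-dependent use of the two modules might be wired together as in the skeleton below; the class and method names are hypothetical stubs intended only to show which module acts in which switch role, not an actual switch implementation.

```python
from enum import Enum, auto

class Role(Enum):
    SOURCE = auto()
    INTERMEDIATE = auto()
    DESTINATION = auto()

class FlowSegmentationModule:
    """Stub: in the source switch, picks the target path and decides flowlet switching."""
    def select_path(self, flow_id: int) -> int:
        return 0  # placeholder path decision

class InformationCollectionModule:
    """Stub: inserts local queuing times and, at the destination, updates the per-path tables."""
    def local_queuing_record(self, switch_id: int) -> float:
        return 0.0  # placeholder local queuing time
    def update_tables(self, trailer: list) -> None:
        pass        # would compute average queuing time, gradient and delay here

class SwitchPipeline:
    def __init__(self, role: Role, switch_id: int):
        self.role, self.switch_id = role, switch_id
        self.splitter = FlowSegmentationModule()
        self.collector = InformationCollectionModule()

    def handle(self, flow_id: int, trailer: list) -> list:
        if self.role is Role.SOURCE:
            self.splitter.select_path(flow_id)
            trailer.append(self.collector.local_queuing_record(self.switch_id))
        elif self.role is Role.INTERMEDIATE:
            trailer.append(self.collector.local_queuing_record(self.switch_id))
        else:  # destination: aggregate the trailer and update the record tables
            self.collector.update_tables(trailer)
        return trailer
```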
According to the method, the accuracy of monitoring the path network delay at the source switch end is improved by constructing a short-term network delay prediction model, and a dynamic flow segmentation method is designed by utilizing the predictable short-term network delay, so that a better data center network load balancing method is finally realized; further improve network transmission performance, greatly improve link bandwidth utilization rate and reduce transmission delay.
The embodiment of the invention performs performance tests in the NS3 simulation environment, using an 8 × 8 Leaf-Spine network topology with the link bandwidth set to 10 Gbps and 128 servers in total. To simulate an asymmetric network, 20% of the Leaf-to-Spine switch links are randomly selected and their bandwidth is cut to 2 Gbps. The widely used real-world web search workload is selected as the test load. The representative load balancing schemes CONGA (a network-based distributed congestion-aware data center load balancing mechanism) and LetFlow (a data center load balancing method designed for asymmetric networks) are selected for comparison tests; the average completion time of all flows, the average completion time of delay-sensitive short flows and the tail delay of short flows are measured, where a smaller completion time indicates better performance, to check whether the load balancing method based on global dynamic traffic segmentation provided by the invention improves performance.
Fig. 3, fig. 4 and fig. 5 are performance comparison test charts under the web search load. In the tests, the present invention is labeled GDTS (Global Dynamic Traffic Splitting), and the flow completion times of the other schemes are normalized to GDTS; the abscissa is the load level and the ordinate is the normalized completion time. It can be seen that, compared with CONGA, the present invention improves transmission performance by up to 16%, 34% and 26% in total flow completion time, short flow average completion time and short flow tail delay respectively; compared with LetFlow, the present invention improves transmission performance by up to 29%, 51% and 49% respectively.
In summary, compared with similar methods in the field, the data center load balancing method based on global dynamic traffic segmentation provided by the invention can further reduce the total flow completion time and the completion time of delay-sensitive short flows, greatly reduce the short flow tail delay, and provide a stronger performance guarantee for typical data center applications.
While the foregoing is directed to the preferred embodiment of the present invention, it will be appreciated by those skilled in the art that various changes and modifications may be made therein without departing from the principles of the invention as set forth in the appended claims.

Claims (7)

1. A data center load balancing method based on global dynamic traffic segmentation comprises the following steps:
step 1, constructing a short-term network delay prediction model based on global network conditions;
step 2, when the data packet arrives at the source switch from the sending end, all the passing flows are segmented and a target path is selected, and the local queuing time is inserted into the tail of the data packet;
step 3, inserting local queuing time into the tail part of the data packet when the data packet reaches the intermediate switch;
step 4, when the data packet reaches the destination switch, obtaining the local queuing times in the data packet, accumulating the local queuing times of all switches to obtain the instantaneous queuing time of the current path, calculating the average queuing time, taking the last average queuing time of the current path out of the queuing time table, calculating the queuing gradient $\nabla Q$, obtaining the network delay of the current path by calculating the difference between the dequeue time of the data packet and the recording time of the source switch, and storing the network delay of the current path into the network delay record table of the destination switch;
and step 5, when a reverse data packet returns from the destination switch to the source switch, selecting the path information of the most recently updated entry at the destination switch, taking out the queuing gradient and writing the path information into the data packet, and taking the carried path information out of the packet tail and storing it into the record table of the source switch, so that the source switch performs dynamic flow segmentation based on the globally recorded network delay and queuing gradient.
2. The global dynamic traffic segmentation-based data center load balancing method according to claim 1, wherein the short-term network delay prediction model is

$$D(t+\Delta t) = D(t) + \frac{\nabla Q \cdot \Delta t}{R}$$

where $D(t)$ and $D(t+\Delta t)$ are the network delays at time $t$ and time $t+\Delta t$ respectively, $R$ is the switch port forwarding rate, and $\nabla Q$ is the queuing gradient value.
3. The data center load balancing method based on global dynamic traffic segmentation according to claim 1, wherein the step 2 includes: calculating the short-term network delays of different paths based on the short-term network delay prediction model to obtain the target path with the minimum short-term network delay, calculating the short-term network delay difference between the target path and the current transmission path as the flow timeout threshold, determining whether to switch the transmission path to the target path by judging whether the inter-arrival time of data packets is greater than the flow timeout threshold, and inserting the path number, the time of entering the source switch and the local queuing time of the data packet into the tail of the data packet by using the INT technology.
4. The data center load balancing method based on global dynamic traffic segmentation according to claim 1, wherein step 3 inserts a path number, a time of entering an intermediate switch, and a queuing time of a packet into a packet tail by using an INT technology.
5. The global dynamic traffic segmentation based data center load balancing method according to claim 1, wherein the queuing times experienced by all the switches in the step 4 include the local queuing times inserted into the data packet by the source switch and the intermediate switches, together with the time at which the data packet entered the source switch.
6. The method for data center load balancing based on global dynamic traffic segmentation according to claim 5,
the average queuing time is

$$\bar{Q}(t) = (1-\omega)\,\bar{Q}(t-1) + \omega\, q(t)$$

where $q(t)$ is the monitored instantaneous queue length and $\omega$ is the weight of the instantaneous queue length in the average queue length, and the average queuing time is updated into the queuing time table;

and the queuing gradient $\nabla Q$ is

$$\nabla Q = \frac{\bar{Q}(t_2) - \bar{Q}(t_1)}{t_2 - t_1}$$

where $\bar{Q}(t_1)$ and $\bar{Q}(t_2)$ are the average queuing lengths of data packets at time $t_1$ and time $t_2$ respectively, and the queuing gradient $\nabla Q$ is stored into the queuing gradient table of the destination switch.
7. A data center load balancing system based on global dynamic flow segmentation is deployed in a switch and is characterized by comprising a flow segmentation module and an information collection module, wherein the switch is divided into a source switch, an intermediate switch and a destination switch according to the transmission direction of a data packet;
the data packet arrives at a source switch from a sending end, a target path is segmented and selected through the flow segmentation module, local queuing time is inserted into the tail of the data packet, the data packet is sent to an intermediate switch, the intermediate switch inserts the local queuing time into the tail of the data packet through an information collection module, the data packet arrives at a target switch, and average queuing time, queuing gradient and network delay are stored into a corresponding queuing time table, a corresponding queuing gradient table and a corresponding network delay record table in the target switch through the information collection module;
the flow segmentation module is used for calculating the short-term network delay of different paths based on the short-term network delay prediction model to obtain the target path with the minimum short-term network delay, calculating the short-term network delay difference between the target path and the current transmission path as the flow timeout threshold, and determining whether to switch the transmission path to the target path by judging whether the inter-arrival time of data packets is greater than the flow timeout threshold;
the information collection module is used, in the intermediate switch, for inserting the local queuing time into the tail of the data packet; and, in the destination switch, for obtaining the local queuing times, accumulating the queuing times of all switches to obtain the instantaneous queuing time of the current path, calculating the average queuing time, taking the last average queuing time of the current path from the queuing time table, calculating the queuing gradient $\nabla Q$, obtaining the network delay of the current path by calculating the difference between the dequeue time of the data packet and the recording time of the source switch, and storing the network delay of the current path into the network delay record table of the destination switch.
CN202210574372.XA 2022-05-25 2022-05-25 Data center load balancing method and system based on global dynamic flow segmentation Active CN114666278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210574372.XA CN114666278B (en) 2022-05-25 2022-05-25 Data center load balancing method and system based on global dynamic flow segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210574372.XA CN114666278B (en) 2022-05-25 2022-05-25 Data center load balancing method and system based on global dynamic flow segmentation

Publications (2)

Publication Number Publication Date
CN114666278A true CN114666278A (en) 2022-06-24
CN114666278B CN114666278B (en) 2022-08-12

Family

ID=82038222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210574372.XA Active CN114666278B (en) 2022-05-25 2022-05-25 Data center load balancing method and system based on global dynamic flow segmentation

Country Status (1)

Country Link
CN (1) CN114666278B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134304A (en) * 2022-06-27 2022-09-30 长沙理工大学 Self-adaptive load balancing method for avoiding data packet disorder in cloud computing data center

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190138934A1 (en) * 2018-09-07 2019-05-09 Saurav Prakash Technologies for distributing gradient descent computation in a heterogeneous multi-access edge computing (mec) networks
CN110351196A (en) * 2018-04-02 2019-10-18 华中科技大学 Load-balancing method and system based on accurate congestion feedback in cloud data center
CN114039922A (en) * 2021-11-22 2022-02-11 中国通信建设集团有限公司河南省通信服务分公司 Congestion control method and system based on path congestion degree gray prediction
CN114285790A (en) * 2021-12-21 2022-04-05 天翼云科技有限公司 Data processing method and device, electronic equipment and computer readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110351196A (en) * 2018-04-02 2019-10-18 华中科技大学 Load-balancing method and system based on accurate congestion feedback in cloud data center
US20190138934A1 (en) * 2018-09-07 2019-05-09 Saurav Prakash Technologies for distributing gradient descent computation in a heterogeneous multi-access edge computing (mec) networks
CN114039922A (en) * 2021-11-22 2022-02-11 中国通信建设集团有限公司河南省通信服务分公司 Congestion control method and system based on path congestion degree gray prediction
CN114285790A (en) * 2021-12-21 2022-04-05 天翼云科技有限公司 Data processing method and device, electronic equipment and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QINGYU SHI: "IntFlow: Integrating Per-Packet and Per-Flowlet Switching Strategy for Load Balancing in Datacenter Networks", IEEE Transactions on Network and Service Management, Vol. 17, Issue 3, Sept. 2020 *
程尚 et al.: "Energy saving and traffic optimization strategies in hybrid SDN networks", High Technology Letters (高技术通讯) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134304A (en) * 2022-06-27 2022-09-30 长沙理工大学 Self-adaptive load balancing method for avoiding data packet disorder in cloud computing data center
CN115134304B (en) * 2022-06-27 2023-10-03 长沙理工大学 Self-adaptive load balancing method for avoiding data packet disorder of cloud computing data center

Also Published As

Publication number Publication date
CN114666278B (en) 2022-08-12

Similar Documents

Publication Publication Date Title
US10574546B2 (en) Network monitoring using selective mirroring
EP3955550A1 (en) Flow-based management of shared buffer resources
US9166916B1 (en) Traffic spraying in a chassis-based network switch
WO2017199208A1 (en) Congestion avoidance in a network device
CN110351196B (en) Load balancing method and system based on accurate congestion feedback in cloud data center
US20040085901A1 (en) Flow control in a network environment
CA2164489A1 (en) Traffic management and congestion control for packet-based networks
WO2018036100A1 (en) Data message forwarding method and apparatus
US20220038374A1 (en) Microburst detection and management
US20180288145A1 (en) Providing a snapshot of buffer content in a network element using egress mirroring
WO2015101952A1 (en) Accurate measurement of distributed counters
CN114666278B (en) Data center load balancing method and system based on global dynamic flow segmentation
CN116671081A (en) Delay-based automatic queue management and tail drop
US11962505B1 (en) Distributed dynamic load balancing in network systems
US8948011B2 (en) Pseudo-relative mode WRED/tail drop mechanism
US20110122883A1 (en) Setting and changing queue sizes in line cards
US11470010B2 (en) Head-of-queue blocking for multiple lossless queues
CN112737940A (en) Data transmission method and device
CN116980342A (en) Method and system for transmitting data in multi-link aggregation mode
US7218608B1 (en) Random early detection algorithm using an indicator bit to detect congestion in a computer network
Chan et al. An active queue management scheme based on a capture-recapture model
CN116827867A (en) Low-delay congestion flow identification method based on data center network
WO2022152230A1 (en) Information flow identification method, network chip, and network device
Almasi et al. Protean: Adaptive management of shared-memory in datacenter switches
CN111404783B (en) Network state data acquisition method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant