CN113543209A - Token scheduling-based congestion control method and device - Google Patents


Info

Publication number: CN113543209A (application CN202110707046.7A); granted as CN113543209B
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 张娇, 石佳明, 高煜轩, 潘恬, 黄韬
Assignee (original and current): Beijing University of Posts and Telecommunications
Application filed by Beijing University of Posts and Telecommunications
Legal status: Granted; Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/0289: Congestion control


Abstract

The invention provides a congestion control method and device based on token scheduling. The method runs only at the data receiving end. The receiving end determines the priority of each flow according to the traffic still to be transmitted by newly added flows and the remaining traffic of existing flows, and within each priority orders the newly added and existing flows by remaining bytes from smallest to largest. For a limited number of the shortest flows in each priority, the token-packet sending rate is chosen within a set sending-rate interval according to the traffic distribution, and token packets are sent to the corresponding data sending ends in order. This approximates globally shortest-remaining-time-first scheduling and greatly reduces short-flow latency.

Description

Token scheduling-based congestion control method and device
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method and an apparatus for controlling congestion based on token scheduling.
Background
With the rapid expansion of online cloud services, data centers are developing rapidly. Unlike the traditional Internet, which aims to distribute bandwidth fairly among all traffic, data center networks focus more on reducing the response time of cloud services. Therefore, in data center networks, those skilled in the art have attempted to reduce traffic delays from different perspectives.
To reduce flow completion time (FCT), those skilled in the art have proposed a variety of transmission control protocols. Early proposals reduce queuing delay by slightly changing TCP's congestion-window adjustment and using active queue management techniques such as the ECN mechanism, so that the flow completion time becomes smaller than under the conventional TCP protocol. Such protocols reduce queuing delay while still allocating bandwidth fairly to every flow, so the delay of short flows remains high because they compete with long flows for bandwidth. To further reduce flow completion time and approach the optimal average or tail FCT, some transport protocols assign priorities to flows and apply a scheduling policy such as Shortest Job First (SJF), Least Attained Service (LAS), or Shortest Remaining Time First (SRTF).
Among these scheduling policies, SRTF has been proven optimal for minimizing the average FCT and is nearly optimal for reducing the tail FCT. It is therefore desirable to approximate SRTF in order to achieve the best average/tail FCT. However, existing transmission control protocols that approximate SRTF either require a centralized controller to control the sending rate and transmission order of all traffic, which greatly limits the network scale, or, if distributed, either cannot work in existing data centers or can only achieve locally optimal SRTF; the latter also work poorly when the data center's bandwidth oversubscription ratio is greater than 1:1.
Therefore, a new data transmission method is needed to realize a better SRTF in the existing data center.
Disclosure of Invention
The embodiments of the invention provide a congestion control method and device based on token scheduling, which eliminate or mitigate one or more defects of the prior art and solve the problems that existing transmission control protocols can hardly realize global SRTF scheduling and are hard to deploy in existing data centers.
The technical scheme of the invention is as follows:
in one aspect, the present invention provides a congestion control method based on token scheduling, configured to operate simultaneously on multiple data receiving ends, where each data receiving end includes:
receiving, within each round-trip time, one or more connection request packets sent by at least one data sending end, wherein each connection request packet contains the to-be-transmitted traffic of a corresponding newly added flow, and determining the priority of each newly added flow according to its to-be-transmitted traffic, wherein the smaller the to-be-transmitted traffic, the higher the priority of the corresponding newly added flow;
acquiring the transmission data table and the waiting-transmission data table maintained by the data receiving end, together with the remaining traffic of all existing flows therein, and determining the priority of each existing flow according to its remaining traffic, wherein the smaller the remaining traffic, the higher the priority of the corresponding existing flow;
placing each newly added flow and each existing flow into the transmission-data priority queue of the corresponding priority, and ordering them within each queue by to-be-transmitted traffic and remaining traffic from smallest to largest;
retaining and recording the first number of newly added or existing flows of each transmission-data priority queue in the transmission data table, and recording the remaining newly added or existing flows in the waiting-transmission table;
acquiring the distribution of the to-be-transmitted traffic of the newly added flows and the remaining traffic of the existing flows in each transmission-data priority queue, configuring, within a set sending-rate interval, the token-packet sending rate of each newly added and existing flow according to that same distribution, and generating a token packet for each newly added and existing flow in the transmission data table;
and sequentially sending the token packets corresponding to each newly added and existing flow in the transmission data table to the data sending ends at the corresponding token-packet sending rates, so as to initiate data transmission.
In some embodiments, determining the priority of each newly added flow according to the traffic to be transmitted includes:
receiving a plurality of standard connection request packets in a set standard time period, and recording the flow of a standard flow in each standard connection request packet;
arranging the standard flows according to the sequence of the flow from small to large, dividing a second set number of flow intervals, wherein each flow interval corresponds to a priority, the smaller the flow of the standard flows is, the higher the priority is, and the number of the standard flows contained in each flow interval is the same;
and acquiring a traffic interval to which the traffic to be transmitted of each new added flow belongs, and taking the priority of the corresponding traffic interval as the priority of the corresponding new added flow.
In some embodiments, obtaining the transmission data table maintained by the data receiving end and the remaining traffic of all existing flows in the transmission-waiting data table, and determining the priority of each existing flow according to the remaining traffic of each existing flow includes:
and acquiring a traffic interval to which the residual traffic of each existing flow belongs, and taking the priority of the corresponding traffic interval as the priority of the corresponding existing flow.
In some embodiments, determining the priority of each newly added flow according to the traffic to be transmitted further includes:
selecting a plurality of alternative time periods in the last work cycle of the data receiving end, wherein the work cycle is 1 day or 1 week;
merging a plurality of the candidate time periods into the set standard time period.
In some embodiments, determining the priority of each newly added flow according to the traffic to be transmitted further includes:
and acquiring a current time point, and taking the alternative time period closest to the current time point in the working cycle as the set standard time period.
In some embodiments, in obtaining the distribution of the to-be-transmitted traffic of each newly added flow and of the remaining traffic of each existing flow in each transmission-data priority queue, and configuring, within the set sending-rate interval, the token-packet sending rate of each newly added and existing flow according to that same distribution, the token-packet sending rate R(f_i) of a flow f_i in the i-th transmission-data priority queue is calculated as:

R(f_i) = c + (α·c - c) · (t_i - δ) / (t_i - t_{i-1})

where c is the lower limit of the token sending rate, α·c is the upper limit of the token sending rate, with α > 1; t_i is the upper bound and t_{i-1} the lower bound of the traffic interval corresponding to the i-th transmission-data priority queue; and δ is the remaining traffic of flow f_i.
In some embodiments, when a bottleneck switch limits the total token-packet sending rate, each data receiving end adds a random sending interval to its token packets, so that the ratio of token-packet sending rates between different data receiving ends remains constant.
In some embodiments, after initiating a data transfer, each transmit data priority queue in the transmit data table has only one transmitting flow at a time.
In another aspect, the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps of the method are implemented.
In another aspect, the present invention also provides a computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the above-mentioned method.
The invention has the beneficial effects that:
in the token-scheduling-based congestion control method and device, the method runs only at the data receiving end. The receiving end determines the priority of each flow according to the traffic still to be transmitted by newly added flows and the remaining traffic of existing flows, and within each priority orders the newly added and existing flows by remaining bytes from smallest to largest. For a limited number of the shortest flows in each priority, the token-packet sending rate is chosen within a set sending-rate interval according to the traffic distribution, and token packets are sent to the corresponding data sending ends in order, thereby approximating globally shortest-remaining-time-first scheduling and greatly reducing short-flow latency.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the specific details set forth above, and that these and other objects that can be achieved with the present invention will be more clearly understood from the detailed description that follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
fig. 1 is a flowchart illustrating a congestion control method based on token scheduling according to an embodiment of the present invention;
fig. 2 is a distribution diagram of the remaining bytes of all streams at the data receiving end within a set standard time period in the congestion control method based on token scheduling according to an embodiment of the present invention;
FIG. 3 is a diagram of traffic intervals divided according to the distribution of the remaining bytes of FIG. 2;
fig. 4 is a diagram of a transmission data list and a waiting data list in the congestion control method based on token scheduling according to an embodiment of the present invention;
fig. 5 is a schematic diagram illustrating determining a sending rate of a token packet according to a distribution relation of remaining bytes of each flow in the congestion control method based on token scheduling according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
It should be noted that, in order to avoid obscuring the present invention with unnecessary details, only the structures and/or processing steps closely related to the scheme according to the present invention are shown in the drawings, and other details not so relevant to the present invention are omitted.
It should be emphasized that the term "comprises/comprising" when used herein, is taken to specify the presence of stated features, elements, steps or components, but does not preclude the presence or addition of one or more other features, elements, steps or components.
Because traffic in a data center network requires ultra-low latency, data center networks prefer transmission control protocols that minimize flow completion time over protocols that merely pursue fairness between flows. However, existing transmission control protocols either cannot approximate global SRTF scheduling or are difficult to deploy in current data centers.
Fairness-oriented transmission control protocols such as DCTCP (Data Center TCP) and ExpressPass approximate a fair-sharing scheduling policy. PDQ and PIAS emulate SJF and LAS, respectively. pFabric approximates optimal SRTF scheduling with a priority-calculation method and a queue-management algorithm; however, it requires special switches and cannot be deployed in today's data centers. Homa overcomes the disadvantages of pFabric by transmitting short flows preferentially at the receiving side and requiring only a limited number of priority queues on the switch. However, each terminal only observes part of the flow information of the whole network and can therefore only realize locally optimal SRTF scheduling. In addition, Homa requires packet-granularity load balancing, which works well only in data center networks without bandwidth oversubscription; if the data center's bandwidth oversubscription ratio is greater than 1:1, it does not work well.
The invention provides a congestion control method and device based on token scheduling that approximate globally optimal SRTF without modifying the data center network switches. To approximate a global SRTF, dynamic priority assignment and a flow-length-based rate-control algorithm are combined at the receiving end to emulate scheduling with an effectively unlimited number of priorities. More specifically, each flow is assigned a priority based on the flow-size distribution, and the token-packet sending rate of the flow is then set inversely proportional to its remaining traffic size. Thus, even if several flows share the same priority queue, more token packets are sent for the shorter flows, so a flow with less remaining traffic obtains more bandwidth.
It should be noted that the present invention mainly runs at the data receiving ends. The same data center network contains multiple data sending ends and data receiving ends connected through switches, and each data receiving end schedules its flows to be transmitted in the same way.
In one aspect, the present invention provides a congestion control method based on token scheduling, intended to run on multiple data receiving ends simultaneously. At each data receiving end, as shown in fig. 1, the method includes steps S101 to S106, which are performed within one round-trip time (RTT) and repeated in every RTT.
Step S101: and in each round-trip time, receiving one or more connection request packets sent by at least one data sending end, wherein each connection request packet comprises the flow to be transmitted of the corresponding new added flow, the priority of each new added flow is determined according to the flow to be transmitted, and the smaller the flow to be transmitted, the higher the priority of the corresponding new added flow.
Step S102: acquiring a transmission data table maintained by a data receiving end and residual flow of all existing flows in the transmission data table, determining the priority of each existing flow according to the residual flow of each existing flow, wherein the smaller the residual flow is, the higher the priority of the corresponding existing flow is.
Step S103: and placing each new added flow and each existing flow into a transmission data priority queue with corresponding priority according to the priority, and arranging the flow to be transmitted and the residual flow from small to large.
Step S104: and reserving and recording the first number of new added flows or existing flows in each transmission data priority queue in a transmission data list, and recording the rest new added flows or existing flows in a waiting transmission list.
Step S105: acquiring the distribution relation of the to-be-transmitted flow of each new added flow and the residual flow of each existing flow in each transmission data priority queue, configuring the token packet transmission rate of each new added flow and the token packet transmission rate of each existing flow according to the same distribution relation in a set transmission rate interval, and generating the token packet for each new added flow and each existing flow in the transmission data list.
Step S106: and sequentially sending the token packets corresponding to each new added flow and the existing flow in the transmission data list to a data sending end according to the corresponding token packet sending rate so as to initiate data transmission.
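Taken together, steps S101 to S106 can be sketched in code. This is a minimal, hypothetical sketch: all names (`Flow`, `Scheduler`, `bounds`, `first_number`) are illustrative assumptions, and the linear rate mapping is one possible reading of the rate formula given later in the description, not a verbatim implementation.

```python
from dataclasses import dataclass

# Minimal sketch of one receiver-side round of steps S101-S106.

@dataclass
class Flow:
    flow_id: str
    remaining_bytes: int  # to-be-transmitted traffic (new flow) or remaining traffic

class Scheduler:
    def __init__(self, bounds, first_number, rate_low, alpha):
        self.bounds = bounds              # traffic-interval bounds t0 < t1 < ... < tk
        self.first_number = first_number  # max flows kept per priority queue
        self.rate_low = rate_low          # c: lower token-rate limit
        self.alpha = alpha                # upper limit is alpha * c, alpha > 1

    def priority_of(self, flow):
        # Smaller remaining traffic -> lower interval index -> higher priority.
        for i in range(1, len(self.bounds)):
            if flow.remaining_bytes <= self.bounds[i]:
                return i
        return len(self.bounds) - 1

    def token_rate(self, i, flow):
        # Map remaining bytes within [t_{i-1}, t_i] linearly onto [c, alpha*c].
        lo, hi = self.bounds[i - 1], self.bounds[i]
        frac = (hi - flow.remaining_bytes) / (hi - lo)
        return self.rate_low * (1 + (self.alpha - 1) * max(0.0, min(1.0, frac)))

    def round(self, new_flows, existing_flows):
        queues = {}
        for f in new_flows + existing_flows:                 # S101 / S102
            queues.setdefault(self.priority_of(f), []).append(f)
        transmit, waiting = [], []
        for prio in sorted(queues):                          # S103: sort per queue
            q = sorted(queues[prio], key=lambda f: f.remaining_bytes)
            transmit += [(prio, f) for f in q[:self.first_number]]   # S104
            waiting += q[self.first_number:]
        # S105 / S106: one token rate per retained flow, sent in priority order.
        tokens = [(f, self.token_rate(prio, f)) for prio, f in transmit]
        return tokens, waiting
```

A receiver would repeat `round` once per RTT, rebuilding both lists from the connection request packets and tables of that RTT.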
In step S101, a data receiving end receives a connection request packet of a data sending end, where the connection request packet is generated after the data sending end receives some data of an upper layer application, and at least carries traffic size information of a data stream to be sent. The data sending end is in a waiting state after sending a connection request packet, and only enters a normal transmission state when receiving a token packet sent by the data receiving end.
The data receiving end treats the flow corresponding to each received connection request packet as a newly added flow and determines its priority from its to-be-transmitted traffic: the smaller the to-be-transmitted traffic, the higher the priority. Correspondingly, a higher token-packet sending rate is allocated to the shorter flows within the same priority, so that the corresponding flows respond faster, which comes closer to shortest-remaining-time-first scheduling.
In some embodiments, in step S101, determining the priority of each new added flow according to the traffic to be transmitted includes steps S201 to S203:
step S201: and receiving a plurality of standard connection request packets in a set standard time period, and recording the flow of the standard flow in each standard connection request packet.
Step S202: arranging the standard flows according to the sequence of the flow from small to large, dividing a second set number of flow intervals, wherein each flow interval corresponds to a priority, the smaller the flow of the standard flow is, the higher the priority is, and the number of the standard flows contained in each flow interval is the same.
Step S203: and acquiring a traffic interval to which the traffic to be transmitted of each new added flow belongs, and taking the priority of the corresponding traffic interval as the priority of the corresponding new added flow.
In this embodiment, the standard connection request packets received by the data receiving end within a set standard time period serve as a reference: the traffic distribution of the data streams to be transmitted is analyzed, a second set number of traffic intervals is derived from that distribution, and the upper and lower bounds of each interval are determined, where the shorter the flow, the higher the priority. As shown in fig. 2, the data receiving end receives, within the set standard time, a total of 15 flows corresponding to the standard connection request packets a1 to a15, with differing lengths. After arranging them from smallest to largest, as shown in fig. 3, they are divided into 3 traffic intervals of 5 flows each: interval 1 has upper bound t2 and lower bound t1, interval 2 has upper bound t3 and lower bound t2, and interval 3 has upper bound t4 and lower bound t3. Interval 1 has the highest priority and interval 3 the lowest. The to-be-transmitted traffic of each newly added flow is attributed to the corresponding interval, and the priority of that interval is taken as the priority of the newly added flow.
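The equal-count interval division in the example above (15 flows split into 3 intervals of 5 flows each) can be sketched as follows; the helper names are hypothetical.

```python
# Hypothetical sketch: divide recorded standard-flow sizes into a set number of
# traffic intervals that each contain the same number of flows (Figs. 2-3).

def make_intervals(flow_sizes, num_intervals):
    """Return bounds [t0, t1, ..., tk]: interval j is (bounds[j-1], bounds[j]],
    and the lower the interval index, the higher the priority."""
    sizes = sorted(flow_sizes)
    per = len(sizes) // num_intervals
    return [0] + [sizes[per * j - 1] for j in range(1, num_intervals)] + [sizes[-1]]

def priority_for(size, bounds):
    # Interval 1 (the smallest flows) is the highest priority.
    for j in range(1, len(bounds)):
        if size <= bounds[j]:
            return j
    return len(bounds) - 1
```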
In some embodiments, for the standard time period set in step S201, a certain period before the current time of the data receiving end may be selected, so that the traffic intervals are divided according to the actual working state of the data receiving end in the data center network and the priority-determination standard matches actual conditions. Specifically, multiple alternative time periods may be selected within the last working cycle of the data receiving end; the working cycle may be 1 day or 1 week, or may be determined by the actual application requirement, for example taking the duration of a specific task as a cycle. The alternative time periods are then merged into the set standard time period, so as to reflect more comprehensively the traffic distribution of the flows received by the data receiving end during operation.
Further, when flows have been collected over several recorded time periods of a working cycle, a more accurate traffic distribution can be obtained by taking the current time point and using, as the set standard time period, the alternative time period in the working cycle closest to the current time point. For example, if during the last day's working cycle a 2-minute window around each whole hour is collected as an alternative time period and the current time is 14:20, the 14:00 alternative period of the previous day's cycle can be used as the set standard time period, yielding a better-matched traffic distribution, so that the traffic intervals used for priority determination better match the current working state.
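Selecting the alternative period nearest to the current time, as in the 14:20 example, might look like the following sketch (2-minute windows around each whole hour; all names are assumptions, and for simplicity only the 24 hours of the current day are considered):

```python
from datetime import datetime, timedelta

# Hypothetical helper: among candidate 2-minute windows centered on each whole
# hour, pick the one whose center is nearest the current time.

def nearest_candidate_period(now: datetime):
    candidates = [now.replace(hour=h, minute=0, second=0, microsecond=0)
                  for h in range(24)]
    center = min(candidates, key=lambda t: abs((now - t).total_seconds()))
    return center - timedelta(minutes=1), center + timedelta(minutes=1)
```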
In another embodiment, the traffic interval may also be directly divided according to the traffic distribution of the actually transmitted stream in a certain period before the current time to determine the priority, so as to implement dynamic change. For example, if the current time is 14:00, 13:58 to 14:00 may be used as a set standard time period, and the traffic interval is divided to determine the priority according to the traffic to be received corresponding to the connection request packet actually received by the data receiving end in the time period.
In another embodiment, the reference traffic for dividing the traffic interval includes not only the newly added stream corresponding to the connection request packet received by the data receiving end, but also the existing stream that the data receiving end has not accepted completion, so as to determine the priority more accurately.
Accordingly, in step S102, for existing flows already existing at the data receiving end, the priority of each existing flow is determined in the same manner as in steps S201 to S203 within the same RTT.
Specifically, in some embodiments, in step S102, obtaining a transmission data table maintained by the data receiving end and remaining flows of all existing flows in the transmission-waiting data table, and determining the priority of each existing flow according to the remaining flows of each existing flow includes step S301: and acquiring a traffic interval to which the residual traffic of each existing flow belongs, and taking the priority of the corresponding traffic interval as the priority of the corresponding existing flow.
In step S103, the new stream and the existing stream corresponding to each priority are arranged in the order of the remaining bytes from small to large. It should be noted that, the traffic to be transmitted corresponding to the newly added stream is the remaining bytes of the newly added stream, and the remaining traffic of the existing stream is the remaining bytes of the existing stream. Thus, each new added flow and existing flow in each priority queue further form a high-to-low priority order.
In step S104, because the range of the sending rate is limited, if the number of flows in one transmission-data priority queue is too large, the token-packet rates of the individual flows differ too little. The number of flows in each transmission-data priority queue therefore needs to be bounded: it is set to at most a first number, which can be chosen according to the load capacity of the device and of the data center network, with a larger load capacity allowing a larger first number. When the number of flows in a transmission-data priority queue exceeds the first number, one or more of the longer flows are moved to the waiting-transmission table; flows moved there participate neither in transmission nor in the token-packet rate computation during the current RTT.
Illustratively, as shown in fig. 4, each transmission-data priority queue in the transmission data table T1 holds multiple flows (the shaded parts represent flows). To ensure that flows of different lengths receive distinguishable rates during rate control, each priority queue may hold at most 4 flows; when more than 4 flows share a priority, the longer ones are moved to the waiting-transmission table T2, so that each priority in T1 keeps at most 4 flows. In the subsequent steps, rates are allocated only to the flows recorded in the priority queues of T1, so that the token-packet sending rates of flows of different lengths within the same priority differ significantly.
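The cap illustrated in fig. 4 amounts to the following small sketch (the field name `remaining` and the constant name are hypothetical):

```python
# Keep at most FIRST_NUMBER (4 in fig. 4) shortest flows of a priority queue in
# the transmission data table; the longer ones go to the waiting-transmission list.

FIRST_NUMBER = 4

def cap_queue(queue):
    q = sorted(queue, key=lambda f: f["remaining"])
    return q[:FIRST_NUMBER], q[FIRST_NUMBER:]
```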
In step S105, multiple flows exist within each transmission-data priority, and if they shared the network fairly they would contend with one another, so optimal SRTF could not be reached. Specifically, in step S105 of this embodiment, when determining the token-packet sending rate of the flows of the same priority, the rate should have suitable lower and upper limits. Without a suitable lower limit, when all flows are fairly large they would all be assigned the lowest token-packet sending rate and the bandwidth of the bottleneck link would be under-utilized. Without a reasonable upper limit, when all flows are quite small they would all be assigned the highest token-packet sending rate, and much of the bandwidth between each receiver and its connected ToR switch would be occupied by token packets, wasting a large amount of bandwidth. As shown in fig. 5, in this embodiment the token-packet sending rates of the newly added and existing flows are therefore distributed over a set sending-rate interval according to the distribution of the remaining bytes of the flows within the same priority.
Specifically, the token-packet sending rate R(f_i) of a flow f_i in the i-th transmission-data priority queue is calculated as:

R(f_i) = c + (α·c - c) · (t_i - δ) / (t_i - t_{i-1})

where c is the lower limit of the token sending rate, α·c is the upper limit of the token sending rate, with α > 1; t_i is the upper bound and t_{i-1} the lower bound of the traffic interval corresponding to the i-th transmission-data priority queue; and δ is the remaining traffic of flow f_i.
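Numerically, assuming the remaining traffic δ within [t_{i-1}, t_i] is mapped linearly onto the rate interval [c, α·c] (one possible reading of the rate calculation, since a smaller remaining traffic must receive a higher token rate), the endpoints behave as below; the constants are arbitrary illustrative values, not from the patent.

```python
# Sketch of the token-rate computation under a linear-mapping reading:
# delta = t_hi -> rate = c (lower limit); delta = t_lo -> rate = alpha * c.

def token_rate(delta, t_lo, t_hi, c, alpha):
    frac = (t_hi - delta) / (t_hi - t_lo)  # 0 at t_hi, 1 at t_lo
    return c + (alpha * c - c) * frac

c, alpha = 10.0, 2.0       # token-rate interval [10, 20] (arbitrary units)
t_lo, t_hi = 100.0, 200.0  # traffic interval of the i-th priority queue
```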
In step S106, according to the token packet transmission rates determined in step S105, the token packets of all new added flows and existing flows are sent out in order from high priority to low priority, thereby implementing globally optimal SRTF.
In some embodiments, when the bottleneck switch limits the total token packet transmission rate, each data receiving end adds a random transmission interval to its token packets so that the ratio of token packet transmission rates between different data receiving ends remains constant.
In some embodiments, after initiating a data transfer, each transmit data priority queue in the transmit data table has only one transmitting flow at a time.
In another aspect, the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps of the method are implemented.
In another aspect, the present invention also provides a computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the above-mentioned method.
The invention is illustrated below with reference to a specific example:
The congestion control method based on token scheduling in this embodiment runs at the data receiving end; in the same data center network, multiple data sending ends and data receiving ends are connected through switches.
The work performed by the data sending end includes the following: after receiving data from an upper-layer application, the data sending end first sends a connection request packet, carrying the flow size, to the data receiving end. After sending the connection request packet, the data sending end enters a connection-waiting state. It then enters different stages according to the type of packet it receives: if it receives a wait packet, it enters the transmission-waiting state; if it receives a token packet, it enters the normal transmission state. Each time a token packet is received, one corresponding data packet is sent, until the flow finishes. A flow in the waiting state transfers to the normal transmission state upon receiving a token packet. Throughout transmission, the data sending end performs no other computation; it only sends data packets as driven by the data receiving end.
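The receiver-driven sender behavior above can be sketched as a small state machine (state names and the one-data-packet-per-token granularity are illustrative assumptions):

```python
from enum import Enum, auto

class SenderState(Enum):
    CONNECTING = auto()    # connection request sent, awaiting reply
    WAITING = auto()       # received a wait packet
    TRANSMITTING = auto()  # received a token; sends one packet per token

class Sender:
    """Minimal sketch of the sender side: entirely driven by the receiver."""
    def __init__(self, flow_size_packets):
        self.remaining = flow_size_packets
        self.state = SenderState.CONNECTING

    def on_wait_packet(self):
        self.state = SenderState.WAITING

    def on_token(self):
        # each token triggers exactly one data packet until the flow ends
        self.state = SenderState.TRANSMITTING
        if self.remaining > 0:
            self.remaining -= 1
        return self.remaining
```

The sender performs no rate computation of its own; pacing is entirely encoded in the arrival times of tokens.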
The work performed by the data receiving end includes the following: the data receiving end plays the central role in transmission, assigning a priority to each flow and performing rate control. Each receiver maintains two connection information tables. Table T1 holds information on flows currently being transmitted, i.e. the transmission data list, where each priority has only one flow transmitting at a time. Table T2 holds information on flows waiting to be transmitted, i.e. the waiting-transmission list. When the data receiving end receives the connection request packet of a new flow, it first determines the flow's priority value P from the flow length carried in the packet. After determining the priority, the data receiving end checks whether the number of flows with priority P in table T1 exceeds the set number; if so, the flow with the most remaining bytes at priority P in table T1 is moved to table T2. Finally, the scheduler calculates the corresponding token packet transmission rate for every flow in table T1. In each RTT, the receiving end updates the remaining bytes and priorities of the flows in tables T1 and T2, keeps the number of flows at each priority within the threshold, and recalculates the token transmission rates according to the rate control mechanism.
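The per-RTT maintenance step described above can be sketched as follows (the dict-based layout of tables T1/T2, the flow records, and the helper name are assumptions for illustration):

```python
import bisect

def rtt_update(t1, t2, bounds, limit=4):
    """Per-RTT maintenance sketch: re-derive each flow's priority from its
    remaining bytes using the interval upper bounds in `bounds`, re-sort,
    and keep at most `limit` flows per priority in T1, demoting the rest
    to T2. Flows are dicts with a 'remaining' byte count."""
    all_flows = [f for q in list(t1.values()) + list(t2.values()) for f in q]
    t1.clear()
    t2.clear()
    for f in sorted(all_flows, key=lambda f: f["remaining"]):
        p = bisect.bisect_left(bounds, f["remaining"])  # fewer bytes -> higher priority
        q = t1.setdefault(p, [])
        if len(q) < limit:
            q.append(f)
        else:
            t2.setdefault(p, []).append(f)
    return t1, t2
```

After this pass, the rate control mechanism would be re-run over the flows remaining in T1.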
For the method of determining the priority value P, refer to steps S201 to S203 above; the upper and lower bounds of the traffic interval corresponding to each priority queue need to be calculated from the traffic size distribution. Let i = 1, 2, ..., N denote the ordinal of the data transmission priority, t_i the upper bound of the traffic interval of the i-th priority queue, and F(·) the traffic distribution function, so that F(t_i) is the fraction of flows whose traffic is less than t_i. The bounds satisfy:

F(t_i) = i / N,  i = 1, 2, ..., N    (equation 2)
By solving equation 2, the boundaries of the traffic intervals corresponding to the priorities are obtained; these boundaries are used to determine flow priority. It should be emphasized that, in a data center network, all data receiving ends must determine priority by the same standard, i.e. the traffic intervals corresponding to each priority must be consistent across data receiving ends.
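Under the equal-share reading of equation 2 (each priority interval contains the same fraction of sampled flows), the boundaries can be computed empirically from a traffic sample; the function names and quantile convention here are illustrative:

```python
import bisect

def interval_bounds(sample_sizes, n_priorities):
    """Upper bounds t_1..t_N such that each interval holds an equal share
    of the sampled flow sizes, i.e. an empirical F(t_i) = i/N."""
    s = sorted(sample_sizes)
    return [s[(i * len(s)) // n_priorities - 1]
            for i in range(1, n_priorities + 1)]

def priority_of(size, bounds):
    """Interval index of a flow size; smaller flows get smaller
    (higher-priority) indices."""
    return bisect.bisect_left(bounds, size)
```

Every receiver computing bounds from the same standard sample would assign consistent priorities, as the text requires.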
After the flows in the transmission-data priority queues of table T1 are determined, each flow's token packets must be assigned a sending rate. If flows with the same priority value compete with one another, fair sharing of network bandwidth makes the scheduling performance closer to PS (processor sharing) than to SRTF. To approach global SRTF, an intuitive approach is to send more token packets to flows with fewer remaining bytes, so that these flows can send more data packets than other flows of the same priority value. To achieve this, the rate control method should follow some basic principles. First, all data receiving hosts should adopt the same priority determination method; otherwise, at data receiving host R1 a flow with more remaining bytes might be assigned a higher priority value while at host R2 a flow with fewer remaining bytes is assigned a lower priority value, so bandwidth could not be shared according to the SRTF policy. Second, the token packet transmission rate should have suitable lower and upper limits. Without a suitable lower limit, when all flows are quite large they are all assigned the lowest token packet transmission rate, and the bandwidth of the bottleneck link is under-utilized. Without a reasonable upper limit, when all flows are quite small they are all assigned the highest token packet transmission rate, and much of the bandwidth between each receiver and its ToR switch is occupied by token packets, wasting a large amount of bandwidth. Therefore, setting an appropriate token sending rate according to the remaining bytes of each flow approximately implements SRTF and thereby reduces flow completion time.
Specifically, the rate control algorithm is as described in step S105: the token packet transmission rate r_{f_i} of flow f_i in the i-th transmission-data priority queue is calculated as:

r_{f_i} = c + (α − 1) · c · (t_i − δ) / (t_i − t_{i−1})

wherein c is the lower limit of the token sending rate, αc is the upper limit of the token sending rate, α > 1; t_i and t_{i−1} are the upper and lower bounds of the traffic interval corresponding to the i-th transmission-data priority queue; and δ is the remaining traffic of flow f_i.
The work performed by the switch includes the following: the switch must ensure that the rate of token packets does not exceed a set limit during transmission. The minimum Ethernet frame size, 64 bytes, is used as the token packet size; in actual transmission, the on-wire length of a token packet is therefore the 64-byte frame plus the 8-byte preamble plus the 12-byte inter-frame gap, i.e. 84 bytes. Since the largest Ethernet frame occupies 1538 bytes, each switch should limit the overall rate of token packets to

(84 / (84 + 1538)) · L ≈ 0.052 · L

where L is the link capacity; this ensures high utilization of the link and near-zero queuing delay. In practice, the total number of token packets can be limited to a threshold using queue control mechanisms supported by current switches.
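The cap follows directly from the on-wire sizes given above: one 84-byte token elicits at most one maximum-size 1538-byte data frame, so tokens may take at most an 84/(84+1538) share of the link. A one-function sketch:

```python
TOKEN_BYTES = 64 + 8 + 12   # min Ethernet frame + preamble + inter-frame gap
MAX_DATA_BYTES = 1538       # largest Ethernet frame on the wire

def token_rate_cap(link_capacity_bps):
    """Aggregate token-rate limit so that the data traffic triggered by
    tokens cannot exceed the link capacity."""
    return link_capacity_bps * TOKEN_BYTES / (TOKEN_BYTES + MAX_DATA_BYTES)
```

For a 10 Gbps link this caps token traffic at roughly 5.2% of capacity, leaving the rest for data.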
In addition, the switch must ensure fairness and randomness among the token packets of different flows. For example, if the token packet sending rate is limited to c at the bottleneck switch, and two data receivers send token packets toward it at rates 2c and c respectively, the switch should discard the same proportion of token packets from each flow: the token packet rate of the first flow should be reduced to 2c/3 and that of the second flow to c/3, so that the ratio between the two flows' token packets remains the same as before. This can be achieved by adding random packet transmission intervals at the receiving end together with traffic shaping at the switch. These two methods prevent many token packets of one flow from arriving at the same switch port within a short time and occupying the whole bandwidth. In addition, the switch in this embodiment adopts symmetric hashing to ensure route symmetry, i.e. the forward and reverse paths of a flow are the same.
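The proportional-discard behavior at the bottleneck amounts to scaling every flow's token rate by the same factor; a sketch of the rate arithmetic (the actual switch would realize this by dropping tokens, which is not modeled here):

```python
def scaled_rates(offered, capacity):
    """Proportionally scale token rates at a bottleneck: if the sum of
    offered rates exceeds `capacity`, every flow keeps the same relative
    share. `offered` maps flow_id -> offered token rate."""
    total = sum(offered.values())
    if total <= capacity:
        return dict(offered)       # no contention, nothing dropped
    k = capacity / total           # common scale factor
    return {f: r * k for f, r in offered.items()}
```

With offered rates 2c and c and capacity c, this yields 2c/3 and c/3, matching the example above.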
In summary, in the congestion control method and apparatus based on token scheduling, the method is executed only at the data receiving end. The data receiving end determines the priority of each flow according to the traffic to be transmitted of each new added flow and the remaining traffic of each existing flow, and within each priority arranges the new added and existing flows in ascending order of remaining bytes. For a certain number of shorter flows in each priority, the token packet sending rate is determined within a set sending-rate interval according to the traffic distribution, and token packets are sent to the corresponding data sending ends in order, thereby achieving shortest-remaining-time-first in global scheduling and greatly reducing short-flow latency.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein may be implemented as hardware, software, or combinations of both. Whether this is done in hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments in the present invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes may be made to the embodiment of the present invention by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A congestion control method based on token scheduling, for operating at multiple data receiving ends simultaneously, at each data receiving end, the method comprising:
receiving one or more connection request packets sent by at least one data sending end in each round trip time, wherein each connection request packet comprises the traffic to be transmitted of a corresponding new added flow, and determining the priority of each new added flow according to the traffic to be transmitted, wherein the smaller the traffic to be transmitted is, the higher the priority of the corresponding new added flow is;
acquiring a transmission data table maintained by the data receiving end and residual flow of all existing flows in the data table waiting for transmission, and determining the priority of each existing flow according to the residual flow of each existing flow, wherein the smaller the residual flow is, the higher the priority of the corresponding existing flow is;
placing each new added flow and each existing flow into a transmission data priority queue with corresponding priority according to the priority, and arranging the new added flow and the existing flow from small to large according to the sequence of the flow to be transmitted and the residual flow;
reserving and recording the first number of new added flows or existing flows in each transmission data priority queue in the transmission data list, and recording the rest new added flows or existing flows in the waiting transmission list;
acquiring the distribution relation of the to-be-transmitted flow of each new added flow and the residual flow of each existing flow in each transmission data priority queue, configuring the token packet transmission rate of each new added flow and the token packet transmission rate of each existing flow according to the same distribution relation in a set transmission rate interval, and generating a token packet for each new added flow and each existing flow in the transmission data list;
and sequentially sending token packets corresponding to each newly added flow and the existing flow in the transmission data list to the data sending end according to the corresponding token packet sending rate so as to initiate data transmission.
2. The method for controlling congestion based on token scheduling according to claim 1, wherein determining the priority of each new added flow according to the traffic to be transmitted comprises:
receiving a plurality of standard connection request packets in a set standard time period, and recording the flow of a standard flow in each standard connection request packet;
arranging the standard flows according to the sequence of the flow from small to large, dividing a second set number of flow intervals, wherein each flow interval corresponds to a priority, the smaller the flow of the standard flows is, the higher the priority is, and the number of the standard flows contained in each flow interval is the same;
and acquiring a traffic interval to which the traffic to be transmitted of each new added flow belongs, and taking the priority of the corresponding traffic interval as the priority of the corresponding new added flow.
3. The method according to claim 2, wherein the obtaining of the transmission data table maintained by the data receiving end and the remaining traffic of all existing flows in the transmission-waiting data table, and determining the priority of each existing flow according to the remaining traffic of each existing flow comprises:
and acquiring a traffic interval to which the residual traffic of each existing flow belongs, and taking the priority of the corresponding traffic interval as the priority of the corresponding existing flow.
4. The method for controlling congestion based on token scheduling according to claim 2, wherein the determining the priority of each new added flow according to the traffic to be transmitted further comprises:
selecting a plurality of alternative time periods in the last work cycle of the data receiving end, wherein the work cycle is 1 day or 1 week;
merging a plurality of the candidate time periods into the set standard time period.
5. The method of claim 4, wherein the determining the priority of each new added flow according to the traffic to be transmitted further comprises:
and acquiring a current time point, and taking the alternative time period closest to the current time point in the working cycle as the set standard time period.
6. The congestion control method based on token scheduling according to claim 1, wherein the distribution relationship between the to-be-transmitted traffic of each new added flow and the remaining traffic of each existing flow in each transmission-data priority queue is acquired, the token packet transmission rates of each new added flow and each existing flow are configured according to the same distribution relationship within the set transmission rate interval, and the token packet transmission rate r_{f_i} of flow f_i in the i-th transmission-data priority queue is calculated as:

r_{f_i} = c + (α − 1) · c · (t_i − δ) / (t_i − t_{i−1})

wherein c is the lower limit of the token sending rate, αc is the upper limit of the token sending rate, α > 1; t_i and t_{i−1} are the upper and lower bounds of the traffic interval corresponding to the i-th transmission-data priority queue; and δ is the remaining traffic of flow f_i.
7. The congestion control method based on token scheduling as claimed in claim 1, wherein under the condition that the bottleneck switch limits the total sending rate of the token packets, each data receiving end controls the sending rate of the token packets to be constant by adding random sending intervals to the token packets.
8. The method of claim 1, wherein each priority queue of transmission data in the transmission data table has only one transmitting flow at a time after initiating data transmission.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 8 are implemented when the processor executes the program.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202110707046.7A 2021-06-24 2021-06-24 Token scheduling-based congestion control method and device Active CN113543209B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110707046.7A CN113543209B (en) 2021-06-24 2021-06-24 Token scheduling-based congestion control method and device


Publications (2)

Publication Number Publication Date
CN113543209A true CN113543209A (en) 2021-10-22
CN113543209B CN113543209B (en) 2022-05-06

Family

ID=78096734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110707046.7A Active CN113543209B (en) 2021-06-24 2021-06-24 Token scheduling-based congestion control method and device

Country Status (1)

Country Link
CN (1) CN113543209B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101938403A (en) * 2009-06-30 2011-01-05 中国电信股份有限公司 Assurance method of multi-user and multi-service quality of service and service access control point
CN109246031A (en) * 2018-11-01 2019-01-18 郑州云海信息技术有限公司 A kind of switch port queues traffic method and apparatus
CN109391555A (en) * 2017-08-08 2019-02-26 迈普通信技术股份有限公司 Method for dispatching message, device and communication equipment
CN110868359A (en) * 2019-11-15 2020-03-06 中国人民解放军国防科技大学 Network congestion control method
CN111355669A (en) * 2018-12-20 2020-06-30 华为技术有限公司 Method, device and system for controlling network congestion
CN111628940A (en) * 2020-05-15 2020-09-04 清华大学深圳国际研究生院 Flow scheduling method, device, system, switch and computer storage medium
CN111970204A (en) * 2020-08-10 2020-11-20 江苏创通电子股份有限公司 Network flow control method and device
CN112437019A (en) * 2020-11-30 2021-03-02 中国人民解放军国防科技大学 Active transmission method based on credit packet for data center
CN112995048A (en) * 2019-12-18 2021-06-18 深圳先进技术研究院 Blocking control and scheduling fusion method for data center network and terminal equipment


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114285790A (en) * 2021-12-21 2022-04-05 天翼云科技有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN114301845A (en) * 2021-12-28 2022-04-08 天津大学 Self-adaptive data center network transmission protocol selection method
CN114301845B (en) * 2021-12-28 2023-08-01 天津大学 Self-adaptive data center network transmission protocol selection method
CN115118671A (en) * 2022-05-30 2022-09-27 中国信息通信研究院 Method and device for token ring scheduling, electronic equipment and storage medium
CN115118671B (en) * 2022-05-30 2024-01-26 中国信息通信研究院 Method and device for token ring scheduling, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113543209B (en) 2022-05-06

Similar Documents

Publication Publication Date Title
CN113543209B (en) Token scheduling-based congestion control method and device
US10205683B2 (en) Optimizing buffer allocation for network flow control
US7035220B1 (en) Technique for providing end-to-end congestion control with no feedback from a lossless network
US7161907B2 (en) System and method for dynamic rate flow control
JP3953819B2 (en) Scheduling apparatus and scheduling method
US6621791B1 (en) Traffic management and flow prioritization over multiple physical interfaces on a routed computer network
WO2019157867A1 (en) Method for controlling traffic in packet network, and device
US7616567B2 (en) Shaping apparatus, communication node and flow control method for controlling bandwidth of variable length frames
EP0430570A2 (en) Method and apparatus for congestion control in a data network
EP1035688A2 (en) An RSVP-based tunnel protocol providing integrated services
JPH08274793A (en) Delay minimization system provided with guaranteed bandwidthdelivery for real time traffic
JPH07221795A (en) Isochronal connection processing method and packet switching network
JP2002232470A (en) Scheduling system
CN113162789A (en) Method, device, equipment, system and storage medium for adjusting service level
CN113746751A (en) Communication method and device
CN114124830B (en) RDMA service quality assurance method and system for multiple application scenes of data center
CN109995608B (en) Network rate calculation method and device
CN112005528A (en) Data exchange method, data exchange node and data center network
CN112437019B (en) Active transmission method based on credit packet for data center
US20120057605A1 (en) Bandwidth Control Method and Bandwidth Control Device
JP4536047B2 (en) Admission control apparatus and method
KR100745679B1 (en) Method and apparatus for packet scheduling using adaptation round robin
JP2002305538A (en) Communication quality control method, server and network system
KR20050099241A (en) An apparatus for schedualing capable of providing guaranteed service for edge-node and a method thereof
CN114157610B (en) High-speed network protocol system and transmission method suitable for block chain network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant