CN112737970A - Data transmission method and related equipment - Google Patents

Data transmission method and related equipment

Info

Publication number
CN112737970A
CN112737970A
Authority
CN
China
Prior art keywords
queue
request
rate
represented
sequence number
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911043193.8A
Other languages
Chinese (zh)
Other versions
CN112737970B (en)
Inventor
雷凯
张烨
蒋竞颉
黄俊琳
白铂
张帆
张弓
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201911043193.8A priority Critical patent/CN112737970B/en
Publication of CN112737970A publication Critical patent/CN112737970A/en
Application granted granted Critical
Publication of CN112737970B publication Critical patent/CN112737970B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 — Traffic control in data switching networks
    • H04L47/10 — Flow control; Congestion control
    • H04L47/215 — Flow control; Congestion control using token-bucket

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiments of the present application disclose a data transmission method and related device. The data transmission method includes the following steps: the first device sends a token at a first rate, and the token is used by its receiver to send a data packet to the first device; the data packet is forwarded by the second device, and the first device receives the forwarded data packet; the first device detects a first mark in the data packet, the mark having been set by the second device according to the network congestion condition; and if the number of target data packets received by the first device in a first period satisfies a first condition, the first device reduces the first rate at which it sends tokens according to that number. The network congestion condition is thus judged by detecting the first mark in the data packet, and if the number of data packets whose first mark indicates congestion satisfies the first condition, the token-sending rate is reduced to relieve the network congestion.

Description

Data transmission method and related equipment
Technical Field
The present application relates to the field of communication networks, and in particular, to a data transmission method and related device.
Background
Applications running in a data center, such as parallel computing, distributed deep learning, MapReduce, web search, and distributed machine learning, generate a large amount of communication traffic in the data center network. These applications use a "many-to-one" communication pattern (Incast): many hosts send data to the same host at the same time, which creates a congestion bottleneck at the receiving end and frequently produces traffic bursts and concurrent flows. At the same time, because the switch buffers in a data center network are all small, the data center network places high demands on the transmission control protocol.
In the prior art, the receiver-driven protocol pHost implements proactive congestion control through token packets (also known as credits or message authorizations). When a sender initiates a new flow, it first sends a Flow-Start Request (FSR) message to the receiver. Upon receiving this message, the receiving end determines the network capacity available to each sender and then issues the corresponding number of tokens per Maximum Transmission Unit (MTU) time. The sender sends data according to the tokens it receives, and one token can be used to send one data packet, so the receiver strictly limits the traffic a sender can inject into the network. In essence, the receiver informs the sender of the available network capacity and avoids congestion by limiting the number of tokens, gracefully preventing traffic bursts. However, pHost only works well in the absence of congestion. When the network becomes congested, the receiving end still distributes tokens at a fixed rate; when severe congestion occurs, pHost stops sending tokens, waits for a fixed time, and then resumes token distribution at the same rate. Such "on-off" behavior causes transient throughput instability and degrades application performance.
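The receiver-driven token mechanism described above can be illustrated with a minimal sketch (class, method, and message names here are my own, not from pHost's actual implementation): the receiver queues senders as their Flow-Start Requests arrive and grants one token, i.e. the right to transmit exactly one packet, per scheduling step.

```python
from collections import deque

class Receiver:
    """Minimal sketch of a pHost-style receiver (illustrative names)."""

    def __init__(self):
        self.pending = deque()  # senders that have issued a Flow-Start Request

    def on_flow_start_request(self, sender_id):
        self.pending.append(sender_id)

    def issue_token(self):
        # One token per MTU time: pick the next sender round-robin and
        # grant it the right to send exactly one data packet.
        if not self.pending:
            return None
        sender = self.pending.popleft()
        self.pending.append(sender)
        return ("TOKEN", sender)

rx = Receiver()
rx.on_flow_start_request("A")
rx.on_flow_start_request("B")
print(rx.issue_token())  # ('TOKEN', 'A')
print(rx.issue_token())  # ('TOKEN', 'B')
```

Note that this sketch also exhibits pHost's weakness discussed above: `issue_token` hands out tokens at a fixed cadence regardless of network state.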
Therefore, how to minimize the queuing delay of flows while providing weighted shared-bandwidth guarantees remains an open problem for those skilled in the art.
Disclosure of Invention
Embodiments of the present application provide a data transmission method and related device, which can minimize queuing delay of delay-sensitive streams and provide weighted shared bandwidth guarantee.
In a first aspect, an embodiment of the present application provides a data transmission method, which may include: the method comprises the steps that a first device sends a token according to a first rate, and the token is used for a receiving end of the token to send a data packet to the first device; the first device receiving the data packet; the data packet is forwarded through second equipment, and the data packet comprises a first mark, and the first mark represents the network congestion condition of the second equipment; and if the number of the target data packets received by the first equipment in the first period meets a first condition, reducing the first rate according to the number of the target data packets, wherein the first mark of the target data packet represents network congestion.
By implementing the method described in the first aspect, the first device sends tokens at a first rate, and each token is used by its receiver to send a data packet to the first device. The data packet is forwarded by the second device; the first device receives the forwarded data packet and detects a first mark in it, the mark having been set by the second device according to the network congestion condition. A data packet whose first mark indicates network congestion is a target data packet; if the number of target data packets received by the first device in a first period satisfies a first condition, the first device reduces the first rate at which it sends tokens according to that number.
In a possible implementation manner, before the first device sends the token at the first rate, the method further includes: the first device receives a first request sent by a third device, wherein the first request is used for the third device to request to send data to the first device; the first device initializes a sequence number of a flow represented by the first request, and sequentially adds the flow represented by the first request into a first queue according to the sequence number, wherein the first queue comprises one or more flows, each flow in the first queue comprises a respective sequence number, and the sequence numbers are used for sequencing all the flows in the first queue, and the sequencing in the first queue is earlier when the value of the sequence number is smaller; the first equipment distributes tokens to the flows in the first queue according to the sorting of the first queue; wherein, if a token is allocated to each flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of adding the flow represented by the first request to the first queue in sequence according to the sequence number is executed. 
This implementation describes that, before the first device sends tokens, the third device sends a first request to the second device connected to it, and the second device forwards the request to the first device. After receiving the first request, the first device adds the flow represented by the request into a first queue in order of its sequence number. Each time a token is allocated to the flow, its sequence number increases, so the ordering of the first queue may change: one token is allocated per flow, the ordering changes once, and tokens are then allocated to the flows in the first queue according to the changed ordering. Allocating tokens according to the ordering of the flows achieves fair token distribution to every flow and reduces the queuing delay of the flows.
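The sequence-number queue described in this implementation can be sketched as follows, assuming a min-heap keyed by sequence number (the class and method names are illustrative, not from the patent): each token grant increments the flow's sequence number and re-inserts it, which re-orders the queue and yields round-robin fairness across flows.

```python
import heapq

class TokenScheduler:
    """Sketch of the sequence-number first queue (illustrative names)."""

    def __init__(self):
        self.queue = []  # min-heap of (sequence_number, flow_id)

    def add_flow(self, flow_id):
        # A newly admitted flow has its sequence number initialized to 0,
        # so it sorts earliest in the queue.
        heapq.heappush(self.queue, (0, flow_id))

    def allocate_token(self):
        # Grant one token to the earliest-sequence flow, then increase its
        # sequence number and re-insert it, re-ordering the queue.
        if not self.queue:
            return None
        seq, flow_id = heapq.heappop(self.queue)
        heapq.heappush(self.queue, (seq + 1, flow_id))
        return flow_id

s = TokenScheduler()
s.add_flow("f1")
s.add_flow("f2")
print([s.allocate_token() for _ in range(4)])  # ['f1', 'f2', 'f1', 'f2']
```

Because every grant bumps the served flow's sequence number past its peers, no flow can monopolize tokens, which is the fairness property the paragraph above claims.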
In one possible implementation, the method further includes: when the completed bytes of the flow represented by the first request exceed a first threshold, the first device adds the flow represented by the first request into a second queue and sorts it in the second queue according to its sequence number, where the frequency at which the first device distributes tokens to the second queue is less than the frequency at which it distributes tokens to the first queue; the first device allocates tokens to the flows in the second queue according to the ordering of the second queue; and if a token is allocated to the flow represented by the first request, the value of its sequence number is increased and the step of sorting in the second queue according to the sequence number is performed again. In this implementation, a flow whose completed bytes exceed the first threshold is a large flow; once the threshold is reached, the flow is moved from the first queue into the second queue. Because tokens are distributed to the second queue less frequently than to the first queue, small flows are not kept waiting behind long-running large flows, and because the two queues have different token-distribution frequencies, the throughput of large flows can still be guaranteed while ensuring that small flows are served more often than large ones.
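A sketch of the two-queue scheme follows, under the assumption of a concrete byte threshold and a 4:1 service ratio; neither value comes from the text, which only states that the second queue receives tokens less frequently than the first.

```python
FIRST_THRESHOLD = 1_000_000  # bytes ("first threshold"; value assumed)

def maybe_promote(flow, first_queue, second_queue, threshold=FIRST_THRESHOLD):
    """Move a flow whose completed bytes exceed the threshold from the
    first (small-flow) queue to the second (large-flow) queue, keeping
    the second queue ordered by sequence number."""
    if flow["done_bytes"] > threshold and flow in first_queue:
        first_queue.remove(flow)
        second_queue.append(flow)
        second_queue.sort(key=lambda fl: fl["seq"])

def queue_to_serve(tick, first_queue, second_queue, serve_big_every=4):
    """Serve the second queue only on every fourth tick (ratio assumed),
    so tokens reach the first queue more frequently."""
    if tick % serve_big_every == 0 and second_queue:
        return second_queue
    return first_queue
```

With this split, a long-running large flow still receives tokens every fourth tick, preserving its throughput, while small flows dominate the remaining ticks.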
In a possible implementation manner, if the number of target packets received by the first device in the first period satisfies the first condition, decreasing the first rate according to the number of target packets includes: the first device calculates, from the data packets received in the first period, the proportion of target data packets among all data packets it received in the first period, where the rate at which the first device allocates tokens in the first period is the first rate; if this proportion is greater than a second threshold, the first device reduces the first rate according to the proportion. By setting the second threshold and counting the proportion of target data packets among all packets received in the first period, this implementation avoids both the delayed, timeout-driven reaction of loss-based congestion control and the low network utilization of delay-based congestion control.
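The proportion check described here might look like the following sketch; the `congestion_mark` field (standing in for, e.g., an ECN CE codepoint) and the threshold value 0.3 are illustrative assumptions, not values from the text.

```python
def marked_fraction(packets):
    """Proportion f of packets in the period whose first mark signals
    congestion. The `congestion_mark` field name is illustrative."""
    if not packets:
        return 0.0
    return sum(1 for p in packets if p["congestion_mark"]) / len(packets)

SECOND_THRESHOLD = 0.3  # "second threshold"; value assumed for illustration

pkts = [{"congestion_mark": m} for m in (True, False, True, False)]
f = marked_fraction(pkts)
print(f, f > SECOND_THRESHOLD)  # 0.5 True
```

Only when `f` exceeds the second threshold does the receiver reduce its token rate, so isolated stray marks do not trigger a backoff.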
In one possible implementation, the reducing, by the first device, the first rate according to the ratio includes: the first device decreases the first rate according to a first formula with the ratio as a parameter, the first formula including:
rate_{n+1} = rate_n * (1 - α_{n+1} / (2 * weight))
where α_{n+1} = (1 - g) * α_n + g * f, rate_n is the first rate at which the first device sends tokens, rate_{n+1} is the reduced first rate, weight is the weight of the flow in the first period, g is an adjustable parameter (0 < g < 1), and f is the proportion of target data packets received by the first device in the first period among all data packets received in the first period. This implementation provides a formula for reducing the first rate: the rate is reduced according to the proportion of target data packets received in the first period and the weight of the flow, so that each flow backs off in a step determined by its weight, which guarantees proportional allocation of bandwidth and allows different quality requirements to be met.
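Since the published formula image is not reproduced here, the following worked sketch assumes a DCTCP-style multiplicative decrease scaled by the flow's weight, which matches the stated α update and the weighted-backoff behaviour; the exact form of the cut, and g = 1/16, are assumptions.

```python
def decrease_rate(rate_n, alpha_n, f, weight=1.0, g=0.0625):
    """alpha is an EWMA of the marked fraction f (DCTCP-style); the
    backoff step is divided by the flow's weight, so heavier flows back
    off less. g = 1/16 is an illustrative choice within 0 < g < 1."""
    alpha_next = (1 - g) * alpha_n + g * f
    rate_next = rate_n * (1 - alpha_next / (2 * weight))
    return rate_next, alpha_next

rate, alpha = decrease_rate(100.0, alpha_n=0.0, f=1.0, weight=1.0, g=0.5)
print(rate, alpha)  # 75.0 0.5
```

Doubling `weight` halves the backoff step, which is how per-flow weights translate into proportional bandwidth shares.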
In one possible implementation, the method further includes: and if the number of the target data packets received by the first device in the first period meets a second condition, increasing the first rate. The implementation mode can improve the token sending rate of the first equipment and improve the data transmission efficiency under the condition that the network is not congested.
In one possible implementation, the increasing the first rate includes: the first device increasing the first rate according to a second formula, the second formula comprising:
rate_{n+1} = rate_n + 1
where rate_n is the first rate at which the first device sends tokens, rate_{n+1} is the increased first rate, and rate_{n+1} does not exceed the link capacity of the second device. This implementation provides a formula for increasing the first rate: if the network is not congested, the first rate is increased by 1, and because the increased rate never exceeds the link capacity of the second device, network congestion is proactively avoided.
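The additive increase can be sketched directly, capping at the link capacity of the second (forwarding) device as the text requires; the unit of the rate is left abstract here.

```python
def increase_rate(rate_n, link_capacity):
    """Additive increase: rate_{n+1} = rate_n + 1, never exceeding the
    link capacity of the second (forwarding) device."""
    return min(rate_n + 1, link_capacity)

print(increase_rate(98, 100))   # 99
print(increase_rate(100, 100))  # 100
```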
In a second aspect, an embodiment of the present application provides another data transmission method, which may include: the method comprises the steps that first equipment receives a first request sent by third equipment, wherein the first request is used for the third equipment to request to send data to the first equipment; the first device initializes a sequence number of a flow represented by the first request, and sequentially adds the flow represented by the first request into a first queue according to the sequence number, wherein the first queue comprises one or more flows, each flow in the first queue comprises a respective sequence number, and the sequence numbers are used for sequencing all the flows in the first queue, and the sequencing in the first queue is earlier when the value of the sequence number is smaller; the first equipment distributes tokens to the flows in the first queue according to the sorting of the first queue; wherein, if a token is allocated to each flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of adding the flow represented by the first request to the first queue in sequence according to the sequence number is executed.
In the method described in the second aspect, the third device first sends the first request to the second device connected to it, and the second device forwards the first request to the first device. After receiving the first request, the first device adds the flow represented by the request into the first queue in order of its sequence number. Each time a token is allocated to the flow, its sequence number increases, so the ordering of the first queue may change: one token is allocated per flow, and tokens are then allocated to the flows in the first queue according to the changed ordering.
In one possible implementation, the method further includes: when the completed bytes of the flow represented by the first request exceed a first threshold, the first device adds the flow represented by the first request into a second queue and sorts it in the second queue according to its sequence number, where the frequency at which the first device distributes tokens to the second queue is less than the frequency at which it distributes tokens to the first queue; the first device allocates tokens to the flows in the second queue according to the ordering of the second queue; and if a token is allocated to the flow represented by the first request, the value of its sequence number is increased and the step of sorting in the second queue according to the sequence number is performed again. In this implementation, a flow whose completed bytes exceed the first threshold is a large flow; once the threshold is reached, the flow is moved from the first queue into the second queue. Because tokens are distributed to the second queue less frequently than to the first queue, small flows are not kept waiting behind long-running large flows, and because the two queues have different token-distribution frequencies, the throughput of large flows can still be guaranteed while ensuring that small flows are served more often than large ones.
In a third aspect, an embodiment of the present application provides a data transmission apparatus, including:
the system comprises a sending unit, a receiving unit and a sending unit, wherein the sending unit is used for sending a token according to a first rate, and the token is used for sending a data packet to the sending unit by a receiving end of the token;
a first receiving unit, configured to receive the data packet; the data packet is forwarded through second equipment, and the data packet comprises a first mark, and the first mark represents the network congestion condition of the second equipment;
and the reducing unit is used for reducing the first rate according to the number of the target data packets if the number of the target data packets received in the first period meets a first condition, wherein the first mark of the target data packets represents network congestion.
In this data transmission apparatus, the sending unit sends a token at a first rate, and the token is used by its receiver to send a data packet back; the data packet is forwarded by the second device and received by the first receiving unit, which detects in it a first mark set by the second device according to the network congestion condition. A data packet whose first mark indicates network congestion is a target data packet; if the number of target data packets received in the first period satisfies a first condition, the reducing unit reduces the first rate at which tokens are sent according to that number. In other words, the network congestion condition is judged by detecting the first mark in the data packet, and if the number of marked packets satisfies the first condition, the token-sending rate is reduced to relieve the network congestion.
In one possible implementation, the apparatus further includes:
the first receiving unit is further configured to receive a first request sent by a third device before the sending unit sends the token at the first rate, where the first request is used by the third device to request to send data to the first receiving unit;
a first ordering unit, configured to initialize sequence numbers of streams represented by the first request, and add the streams represented by the first request to a first queue in sequence according to the sequence numbers, where the first queue includes one or more streams, each stream in the first queue includes a respective sequence number, and the sequence numbers are used to order all the streams in the first queue, where a smaller value of a sequence number leads to an earlier order in the first queue;
a first allocation unit, configured to allocate tokens to flows in the first queue according to the ordering of the first queue; wherein, if a token is allocated to each flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of adding the flow represented by the first request to the first queue in sequence according to the sequence number is executed.
In one possible implementation, the apparatus further includes:
the first ordering unit is further configured to, when the completed bytes of the stream represented by the first request exceed a first threshold, add the stream represented by the first request into a second queue, and order in the second queue according to a sequence number of the stream represented by the first request, where a frequency of allocating tokens to the second queue is less than a frequency of allocating tokens to the first queue;
the first allocation unit is further configured to allocate tokens to the flows in the second queue according to the sorting of the second queue; wherein if a token is assigned to each flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of sorting in the second queue according to the sequence number of the flow represented by the first request is performed.
In a possible implementation manner, the reducing unit specifically includes:
a counting unit, configured to count, according to a data packet received in a first period, a ratio of the target data packet received in the first period to all data packets received in the first period, where a rate at which the first device allocates tokens in the first period is the first rate;
the reducing unit is further configured to reduce the first rate according to a ratio of the target data packet received by the first device in the first period to all data packets received in the first period, if the ratio is greater than a second threshold.
In a possible implementation manner, the reducing unit is further configured to reduce the first rate according to a first formula by using the ratio as a parameter, where the first formula includes:
rate_{n+1} = rate_n * (1 - α_{n+1} / (2 * weight))
where α_{n+1} = (1 - g) * α_n + g * f, rate_n is the first rate at which tokens are sent, rate_{n+1} is the reduced first rate, weight is the weight of the flow in the first period, g is an adjustable parameter (0 < g < 1), and f is the proportion of target data packets received by the first device in the first period among all data packets received in the first period.
In one possible implementation, the apparatus further includes:
a raising unit, configured to raise the first rate if the number of target data packets received by the first device in the first period meets a second condition.
In a possible implementation manner, the raising unit is further configured to raise the first rate according to a second formula, where the second formula includes:
rate_{n+1} = rate_n + 1
where rate_n is the first rate at which tokens are sent, rate_{n+1} is the increased first rate, and rate_{n+1} does not exceed the link capacity of the second device.
It should be understood that the third aspect of the present application is consistent with the technical solution of the first aspect of the present application, and similar beneficial effects are obtained in various aspects and corresponding possible implementations, and therefore, detailed description is omitted.
In a fourth aspect, an embodiment of the present application provides a data transmission apparatus, including:
a second receiving unit, configured to receive a first request sent by a third device, where the first request is used for the third device to request to send data to the second receiving unit;
a second sorting unit, configured to initialize sequence numbers of streams represented by the first request, and add the streams represented by the first request to a first queue in sequence according to the sequence numbers, where the first queue includes one or more streams, each stream in the first queue includes a respective sequence number, and the sequence numbers are used to sort all the streams in the first queue, where a smaller value of a sequence number leads to a higher sorting in the first queue;
a second allocating unit, configured to allocate tokens to the flows in the first queue according to the ordering of the first queue; wherein, if a token is allocated to each flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of adding the flow represented by the first request to the first queue in sequence according to the sequence number is executed.
In one possible implementation, the apparatus further includes:
the second ordering unit is further configured to, when the completed bytes of the stream represented by the first request exceed a first threshold, add the stream represented by the first request into a second queue, and perform ordering in the second queue according to a sequence number of the stream represented by the first request, where a frequency of allocating tokens to the second queue is less than a frequency of allocating tokens to the first queue;
the second allocating unit is further configured to allocate tokens to the flows in the second queue according to the ordering of the second queue; wherein if a token is assigned to each flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of sorting in the second queue according to the sequence number of the flow represented by the first request is performed.
It should be understood that the fourth aspect of the present application is consistent with the technical solution of the second aspect of the present application, and the beneficial effects achieved by the aspects and the corresponding possible embodiments are similar and will not be described again.
In a fifth aspect, an embodiment of the present application provides a terminal device, where the terminal device includes a processor, and the processor is configured to support the terminal device to implement a corresponding function in the data transmission method provided in the first aspect. The terminal device may also include a memory, coupled to the processor, that stores program instructions and data necessary for the terminal device. The terminal device may also include a communication interface for the terminal device to communicate with other devices or a communication network. It should be understood that the fifth aspect of the present application is consistent with the technical solution of the first aspect of the present application, and similar beneficial effects are obtained in various aspects and corresponding possible implementations, and therefore, detailed description is omitted.
In a sixth aspect, an embodiment of the present application provides a terminal device, where the terminal device includes a processor, and the processor is configured to support the terminal device to implement a corresponding function in the data transmission method provided in the second aspect. The terminal device may also include a memory, coupled to the processor, that stores program instructions and data necessary for the terminal device. The terminal device may also include a communication interface for the terminal device to communicate with other devices or a communication network. It should be understood that the sixth aspect of the present application is consistent with the technical solution of the second aspect of the present application, and the beneficial effects obtained by the aspects and the corresponding possible embodiments are similar and will not be described again.
In a seventh aspect, an embodiment of the present application provides a computer storage medium for storing computer software instructions for use in a processor in the data transmission device provided in the third aspect or the fourth aspect, where the computer software instructions include a program designed to execute the above aspects.
In an eighth aspect, the present application provides a computer program, where the computer program includes instructions, and when the computer program is executed by a computer, the computer can execute the flow executed by the processor in the data transmission device in the third aspect or the fourth aspect.
In a ninth aspect, the present application provides a chip system, where the chip system includes a processor, configured to support a network device to implement the functions referred to in the first aspect or the second aspect, for example, to generate or process information referred to in the data transmission method. In one possible design, the system-on-chip further includes a memory for storing program instructions and data necessary for the data transmission device. The chip system may be constituted by a chip, or may include a chip and other discrete devices.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
Fig. 1 is a system architecture diagram of a data transmission method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a data transmission method according to an embodiment of the present application;
fig. 3 is a schematic application flow diagram of a data transmission method according to an embodiment of the present application;
fig. 4 is an algorithm diagram of a data transmission method provided in an embodiment of the present application;
fig. 5 is an algorithm diagram of another data transmission method provided in the embodiment of the present application;
fig. 6 is a schematic structural diagram of a data transmission apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another data transmission device provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a data transmission device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings.
The terms "first," "second," "third," "fourth," "fifth," "sixth," "seventh," and "eighth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between 2 or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
First, some terms in the present application are explained so as to be easily understood by those skilled in the art.
(1) token: in the present invention, one token corresponds to one data packet, and the rate at which data packets are sent is controlled by controlling tokens.
(2) ECN: a congestion control mechanism in which switches perform congestion marking based on queue length, and congestion control is performed based on this feedback. If the transport layer supports the ECN function, an ECT (ECN-Capable Transport) indication may be set in the IP packet header. When the RED algorithm of an intermediate router determines that a certain packet should be used to indicate congestion, and the ECT indication of the packet is valid, the packet may be marked as CE. When the receiving-end TCP receives the packet and finds the CE indication valid, it sets an ECN-Echo flag bit in the TCP header of subsequent ACK packets to indicate congestion. When the sending end receives the congestion indication, it makes a corresponding response to the network congestion and sets a CWR flag in the TCP header of subsequent packets; on receiving the CWR indication, the receiving end knows that the sending end has received and processed the ECN-Echo flag, and subsequent ACK packets no longer set the ECN-Echo flag (note that pure ACKs are transmitted unreliably, so the receiving end needs to keep sending ECN-Echo until it receives the CWR indication from the sending end). After receiving the ECN-Echo indication, the TCP sender typically switches the congestion state to CWR, a state similar to the Recovery state. The value of ECN in the embodiments of the present application is the value of the ECN-Echo flag.
(3) FSR signaling: a Flow Start Request (FSR) is a request sent from the sending end to the receiving end before a flow starts, so each flow has only one FSR; the packet can carry flow information.
(4) Flow: in computer programming, a stream is an object of a class, and input and output operations of many files are provided in a mode of member functions of the class. Streaming in a computer is in fact a kind of transformation of information. It is an ordered stream, so with respect to an object, we generally refer to the object as receiving external information Input (Input) as Input stream, and correspondingly outputting (Output) information from the object as Output stream, collectively referred to as Input/Output Streams (I/O Streams). When information or data is exchanged between objects, the objects or data are always converted into a certain form of stream, and then the stream is converted into object data after reaching a target object through stream transmission. A stream can therefore be regarded as a carrier of data, by means of which data exchange and transmission can be effected.
(5) Incast: a "many-to-one" communication mode, in which multiple transmitting ends transmit data to one receiving end, generally occurs in a data center distributed parallel computing scenario.
(6) RTT: Round-Trip Time, or round-trip delay, is an important performance indicator in computer networks. It represents the total delay experienced from the time the sender sends data to the time the sender receives an acknowledgement from the receiver (the receiver sending the acknowledgement immediately after receiving the data). In this embodiment, the round-trip delay refers to the time from when the receiving end device sends a token to when the receiving end device receives the data packet corresponding to that token.
(7) FIFO: first Input First Output, a First in First out queue, is a traditional sequential execution method, in which an instruction that enters First completes and retires First, and then executes a second instruction. When the CPU is not ready to respond to all the instructions in a certain period of time, the instructions are arranged in an FIFO queue, for example, the instruction 0 enters the queue first, then the instruction 1 and the instruction 2 are followed, when the CPU finishes the current instruction, the instruction 0 is taken out from the queue to be executed first, at the moment, the instruction 1 takes over the position of the instruction 0, and similarly, the instruction 2 moves forward by one position.
Fig. 1 shows a system architecture to which an embodiment of the present invention relates. The technical problems in the above-mentioned schemes are specifically analyzed and solved below in combination with the data transmission system architecture provided in the present application and a data transmission method flow provided based on that system architecture.
Referring to fig. 1, fig. 1 is a schematic diagram of an architecture of a data transmission system provided in an embodiment of the present application, where the system may include a first device 101, a second device 102, and a third device 103. Wherein the first device 101 and the third device 103 communicate via the second device 102. There may be one or more first devices 101, second devices 102 and third devices 103 in the system, the first device 101 being primarily used to control the sending of tokens and to control the scheduling of tokens. As shown in fig. 1, in which,
the first device 101 and the third device 103 may be servers or terminal devices, the servers may include, but are not limited to, background servers, component servers, data processing servers, storage servers, or computing servers, etc., and the servers may communicate with a plurality of devices through the internet. The terminal device may be a device located at the outermost periphery of a network in a computer network, such as a communication terminal, a mobile device, a User terminal, a mobile terminal, a wireless communication device, a portable terminal, a User agent, a User Equipment, a service device, or a User Equipment (UE), and is mainly used for data input, output or display of a processing result, and may also be a software client, an application program, and the like installed or run on any one of the above devices. For example, the terminal may be a mobile phone, a cordless phone, a smart watch, a wearable device, a tablet device, a handheld device with wireless communication capabilities, a computing device, an in-vehicle communication module, a smart meter or other processing device connected to a wireless modem, and so forth. The first device 101 and the third device 103 are used to send and receive messages.
The second device 102 may be a switch or a router with a function of forwarding messages, and is configured to forward the messages sent by the first device 101 and the third device 103. As shown in fig. 1, a first device 101 and a third device 103 and a second device 102 are connected, respectively. Each of the first device 101 and the third device 103 may be assigned with a unique Internet Protocol (IP) address, and each of the switching devices may determine a receiver server of the packet according to a destination IP address in the forwarding packet, so as to forward the packet. It should be noted that the type of the second device 102 may vary according to the architecture of the data center network. For example, for a three-tier architecture network, the second device 102 may be an access switch, an aggregation switch, or a core switch; for a Leaf-Spine (Leaf-Spine) architecture network, the second device 102 may be a Leaf switch or a Spine switch.
Taking the third device 103 as the sending end device, the first device 101 as the receiving end device, and the second device 102 as a switch, the data transmission process is exemplarily explained as follows. First, the third device 103, as the sending end device, sends a first request to the switch 102 connected to it; the switch 102 receives the first request and forwards it to the first device 101. The first device 101, as the receiving end device, receives the first request and sends a token to the switch 102 according to the first request; the switch 102 forwards the token to the third device 103. After receiving the token, the third device 103 sends the corresponding data packet to the switch 102. After receiving the data packet, the switch 102 detects the queue length in the switch; if the queue length exceeds a threshold value, it is determined that the switch 102 is currently experiencing network congestion (i.e., the data packets are congested), and the switch 102 marks the CE bit in the data packet as 1 (CE defaults to 0) to represent network congestion at the current switch, then sends the data packet to the first device 101. After the first device 101 receives the data packet, it detects the CE bit in the data packet; if the CE bit is 1, the first device 101 records the data packet and increments its ECN count by 1. The first device 101 counts the value of the ECN count over a time period (for example, one round-trip delay) and reduces the rate at which it sends tokens in the next time period (for example, the next round-trip delay) according to that value; if no data packet in the time period carries a congestion mark, the first device 101 increases the rate at which it sends tokens in the next time period.
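The receiver-side rate update in the walkthrough above can be sketched as follows. This is an illustrative sketch only: the function name and the multiplicative-decrease/additive-increase factors are assumptions, not values specified by the invention.

```python
# Hypothetical sketch of the receiving end's per-period token-rate update:
# if any packet in the last period carried a CE (congestion) mark, the
# token-sending rate for the next period is reduced; otherwise it is increased.
# The 0.5 decrease factor and +1.0 increase step are illustrative assumptions.

def next_token_rate(current_rate, ecn_count, decrease_factor=0.5, increase_step=1.0):
    if ecn_count > 0:                        # congestion observed this period
        return current_rate * decrease_factor
    return current_rate + increase_step      # no congestion: probe for more bandwidth

rate = 10.0                                  # tokens per second (illustrative)
rate = next_token_rate(rate, ecn_count=3)    # congested period: rate drops
rate = next_token_rate(rate, ecn_count=0)    # clear period: rate grows
print(rate)
```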
It is understood that the system architecture in fig. 1 is only an exemplary implementation of the embodiments of the present invention, and the system in the embodiments of the present invention includes, but is not limited to, the above system architecture.
Referring to fig. 2, fig. 2 is a schematic diagram of a data transmission method flow provided in an embodiment of the present application. The data transmission method provided by the embodiment of the present application is described with reference to fig. 2, taking the third device 103 as the sending end device, the first device 101 as the receiving end device, and the second device 102 as the switch as an example. The method may include the following steps S201 to S208.
Step S201: the sending end device sends a first request.
Specifically, the sending end device sends a first request to the switch, where the first request includes a Flow Start Request (FSR) for requesting that the sending end device start a flow to the receiving end device, the flow being used for data transmission. Each flow has only one FSR. The FSR may carry a flow weight, whose value is equal to 1 by default, is given by a particular application program, or may be set in advance. After receiving the first request, the switch forwards it to the receiving end device.
In a possible implementation manner, when the sending end device sends the first request to the switch, it may simultaneously use several pioneer tokens to send data packets; in practice, 8 pioneer tokens are generally used. In this way, the sending end device can also send data packets during the round-trip time otherwise spent waiting for the first token, which saves time and reduces the delay of small flows.
Step S202: the sink device adds the flow represented by the first request to the first queue.
Specifically, after receiving a first request, the receiving end device first initializes the sequence number of the flow represented by the first request, and adds the flow represented by the first request into a first queue in order according to the sequence number. The first queue includes one or more flows, and each flow in the first queue has its own sequence number, which is used to order all the flows in the first queue: the smaller the value of the sequence number, the earlier the position in the first queue. When two or more flows have the same sequence number, the flows may be ordered according to the time of the flow, the earlier flow request being placed earlier in the first queue; the flows may also be ordered according to their weights, a larger flow weight being placed earlier in the first queue. For example, after receiving the first request, the receiving end device initializes the value of the sequence number SN of the flow represented by the first request to 0 and adds that flow to the first queue. The first queue may contain multiple flows, sorted according to the sequence number SN; generally, the smaller the sequence number, the earlier the flow is sorted, so a flow with SN = 0 is generally placed at the head of the queue. If the sequence numbers of two flows are both 0, the flows may be sorted according to their arrival times or according to their weights.
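The ordering rule above (smaller SN first, earlier arrival breaking ties) can be sketched as a sorted flow list. The dictionary field names are assumptions for illustration:

```python
import itertools

# Illustrative sketch of the first queue: flows are ordered by sequence number
# (SN), with earlier arrival breaking ties. Field names are assumptions.
_arrival = itertools.count()

def make_flow(name):
    # a new flow's SN is initialized to 0, per the text
    return {"name": name, "sn": 0, "arrival": next(_arrival)}

def sort_queue(queue):
    # smaller SN first; equal SNs are ordered by earlier arrival time
    queue.sort(key=lambda f: (f["sn"], f["arrival"]))

queue = [make_flow("F3"), make_flow("F1")]
queue[1]["sn"] = 2           # pretend F1 has already received tokens, raising its SN
sort_queue(queue)
print([f["name"] for f in queue])   # F3 (SN 0) is ahead of F1 (SN 2)
```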
Step S203: and the receiving terminal equipment distributes tokens to the flows in the first queue according to the sequencing of the first queue.
Specifically, after adding the flow represented by the first request to the first queue in order according to the sequence number, the receiving end device allocates tokens to the flows in the first queue according to the ordering of the first queue. A token may carry a flow weight, whose value may be equal to 1 by default; alternatively, the flow weight may be marked according to the application priority of the receiving end device, set in advance, or controlled according to a superior scheduling instruction (for example, controlled by a controller). If the receiving end device allocates one token to the flow represented by the first request, the value of the sequence number of that flow is increased, and the step of adding the flow represented by the first request to the first queue in order according to the sequence number is executed again. That is, each time the receiving end device allocates one token, the value of the sequence number of the flow to which the token is allocated increases, and the flows in the first queue are reordered. For example, the receiving end device allocates tokens to the first stream in the first queue according to the ordering of the first queue: if the stream represented by the first request is at the head of the first queue, the receiving end device sends a token to that stream. After the receiving end device sends a token to the stream represented by the first request, the sequence number of the first request is increased; the increased sequence number may be updated by the following formula,
SN_{n+1} = SN_n + PacketSize / frequency

where SN_{n+1} is the updated sequence number, PacketSize is the fixed-size data block that one token can transmit (the block size may be set to 1500 bytes, of which 1460 bytes are actually transmitted, i.e. PacketSize is 1460 bytes), and frequency is the frequency at which the receiving end device sends tokens to the first queue. After the receiving end device updates the sequence number of the first request, the ordering of the first request in the first queue changes according to the updated sequence number; because the SN has increased, the first request will generally be ordered further back. The receiving end device then allocates a token to the first stream in the first queue according to the ordering of the first queue. That is, each time the receiving end device allocates one token to a stream, all streams in the first queue are reordered, and the ordering is continuously updated as tokens are sent.
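The sequence-number update can be sketched directly from the formula; the 1460-byte PacketSize follows the text, while the function name is an illustrative assumption:

```python
PACKET_SIZE = 1460          # bytes actually transmitted per token-sized block (per the text)

def update_sn(sn, frequency):
    # SN_{n+1} = SN_n + PacketSize / frequency: a flow served at a higher token
    # frequency has its SN grow more slowly, so it stays nearer the queue head.
    return sn + PACKET_SIZE / frequency

sn = 0.0
sn = update_sn(sn, frequency=8)     # high-priority queue, allocation frequency 8
print(sn)                           # 1460 / 8 = 182.5
```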
In a possible implementation manner, a second queue also exists in the receiving end device. When the completed bytes of the stream represented by the first request exceed a first threshold, the receiving end device adds the stream represented by the first request into the second queue, where the frequency at which the receiving end device allocates tokens to the second queue is less than the frequency at which it allocates tokens to the first queue. The receiving end device adds the stream represented by the first request into the second queue and sorts it within the second queue according to its sequence number, where the completed bytes of the stream represented by the first request are updated once every time the receiving end device sends one token to that stream. The receiving end device allocates tokens to the flows in the second queue according to the ordering of the second queue; if a token is allocated to the flow represented by the first request, the value of the sequence number of that flow is increased, and the step of sorting in the second queue according to the sequence number is executed again. That is, each time the receiving end device allocates one token, the value of the sequence number of the flow to which the token is allocated increases, and the flows in the second queue are reordered. The sequence number may be incremented and updated by the following formula,
SN_{n+1} = SN_n + PacketSize / frequency

where SN_{n+1} is the updated sequence number, PacketSize is the fixed-size data block that one token can transmit (the block size may be set to 1500 bytes, of which 1460 bytes are actually transmitted, i.e. PacketSize is 1460 bytes), and frequency is the frequency at which the receiving end device sends tokens to the second queue.
For example, each receiving end device maintains an active flow queue that is divided into a virtual high-priority queue (first queue) and a virtual low-priority queue (second queue). The flows in the two priority queues are allocated tokens at two different frequencies: the high-priority queue transmits at a high frequency and the low-priority queue at a low frequency. For example, as shown in FIG. 3, suppose FSR1 (abbreviated F1) is a large stream and FSR3 (abbreviated F3) is a small stream. When R1 receives FSR1 and FSR3, F1 and F3 are placed in the high-priority queue (first queue), where the token allocation frequency is 8. The receiving end device updates the completed bytes finishedbytes of a flow once for each token sent to that flow in the first queue, where finishedbytes_{n+1} = finishedbytes_n + PacketSize, PacketSize being a fixed-size data block transmittable by one token; the size of the data block may be set to 1500 bytes (1460 bytes actually transmitted), i.e. PacketSize here is 1460 bytes. When the completed bytes of F1 exceed the first threshold, which may be preset to 5000 bytes (that is, when the completed bytes of F1 exceed 5000 bytes), F1 demotes to the low-priority queue (second queue) and its transmission frequency is updated; the token allocation frequency in the low priority is 2. That is, assuming the total rate at which the receiving end device allocates tokens is one token per second, if only F3 is in the high-priority queue and F1 has demoted to the low-priority queue, the frequency ratio F3:F1 is 8:2, which means the token rate of F3 is 4/5 (token/sec) and that of F1 is 1/5 (token/sec). The tokens of F3 are transmitted in seconds 0, 1, 2, and 3 of every 5 seconds, which means the waiting time of F3 is 1 s. The token of F1 is sent in second 4, so the waiting time of F1 is 4 seconds. A stream's token is sent only after its waiting time has elapsed.
Illustratively, an algorithm framework diagram of this implementation is shown in FIG. 4. In this implementation, when the completed bytes of the stream represented by the first request exceed the first threshold, the stream represented by the first request is a large stream; after its completed bytes reach the first threshold, the stream is moved from the first queue into the second queue, where the token allocation frequency of the second queue is less than that of the first queue. This avoids small flows waiting too long because a large flow takes too long to process; and because the token allocation frequencies of the first queue and the second queue differ, the throughput of large flows can still be guaranteed while ensuring that small flows are served at a higher frequency than large flows.
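The demotion step described above can be sketched as follows; the 5000-byte threshold and 1460-byte PacketSize come from the example in the text, while the function and field names are illustrative assumptions:

```python
FIRST_THRESHOLD = 5000      # bytes; example first-threshold value from the text
PACKET_SIZE = 1460          # bytes actually transmitted per token-sized block

def on_token_sent(flow, high_queue, low_queue):
    # Each token sent advances the flow's completed-byte count; once the count
    # crosses the threshold, the flow demotes to the low-frequency (second) queue.
    flow["finished_bytes"] += PACKET_SIZE
    if flow in high_queue and flow["finished_bytes"] > FIRST_THRESHOLD:
        high_queue.remove(flow)
        low_queue.append(flow)

f1 = {"name": "F1", "finished_bytes": 0}
high, low = [f1], []
for _ in range(4):                   # 4 tokens -> 5840 bytes, which exceeds 5000
    on_token_sent(f1, high, low)
print([f["name"] for f in low])      # F1 has been demoted to the second queue
```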
In a possible implementation manner, a second queue and a third queue also exist in the receiving end device, the frequency of allocating tokens to the second queue by the receiving end device is less than the frequency of allocating tokens to the first queue, the frequency of allocating tokens to the third queue by the receiving end device is less than the frequency of allocating tokens to the second queue, when the completed bytes of the stream represented by the first request exceed the first threshold and do not exceed the third threshold, the receiving end device adds the stream represented by the first request to the second queue, and sorts the streams according to the sequence numbers of the streams represented by the first request, wherein the completed bytes of the stream represented by the first request are updated once every time the receiving end device sends one token to the stream represented by the first request; the receiving terminal equipment distributes tokens to the flows in the second queue according to the sequencing of the second queue; and if a token is allocated to each flow represented by the first request, increasing the value of the sequence number of the flow represented by the first request, and executing the step of sorting in the second queue according to the sequence number of the flow represented by the first request. 
When the completed bytes of the stream represented by the first request exceed a third threshold value, the receiving end device adds the stream represented by the first request into a third queue, and sorts the stream represented by the first request in the third queue according to the sequence number of the stream represented by the first request, wherein the completed bytes of the stream represented by the first request are updated once every time the receiving end device sends a token to the stream represented by the first request; the receiving terminal equipment distributes tokens to the flows in the third queue according to the sequencing of the third queue; and if a token is allocated to each flow represented by the first request, increasing the value of the sequence number of the flow represented by the first request, and performing the step of sorting in the third queue according to the sequence number of the flow represented by the first request. That is, if the receiving device allocates one token, the value of the sequence number of the flow to which the token is allocated increases, and the flows in the queue are reordered.
For example, each receiving end device maintains an active flow queue, which may include three or more virtual priority queues. For example, three virtual priority queues may be divided into a virtual high-priority queue (first queue), a virtual medium-priority queue (second queue), and a virtual low-priority queue (third queue), where the flows in the three priority queues are allocated tokens at three different frequencies: the high-priority queue transmits at a high frequency and the low-priority queue at a low frequency. For example, when R1 receives FSR1 (abbreviated F1), FSR3 (abbreviated F3), and FSR5 (abbreviated F5), then F1, F3, and F5 are placed in the high-priority queue (first queue), where the token allocation frequency is 8. The receiving end device updates the completed bytes finishedbytes of a flow once for each token sent to that flow in the first queue, where finishedbytes_{n+1} = finishedbytes_n + PacketSize, PacketSize being a fixed-size data block whose size may be set to 1500 bytes (1460 bytes actually transmitted), i.e. PacketSize here is 1460 bytes.
When the completed bytes of F1 exceed the first threshold, which may be preset to 5000 bytes (the third threshold may be preset to 8000 bytes), F1 demotes to the medium-priority queue (second queue) and its transmission frequency is updated; the token allocation frequency in the medium priority is 4. When the completed bytes of F1 exceed the third threshold of 8000 bytes, F1 demotes to the low-priority queue (third queue) and its transmission frequency is updated; the token allocation frequency in the low priority is 2. That is, assuming the total rate at which the receiving end device allocates tokens is one token per second, if only F5 is in the high-priority queue while F3 has demoted to the medium-priority queue and F1 to the low-priority queue, the frequency ratio F5:F3:F1 is 8:4:2. This means the token rate of F5 is 4/7 (token/sec), the token rate of F3 is 2/7 (token/sec), and that of F1 is 1/7 (token/sec). The tokens of F5 are sent in seconds 0, 1, 2, and 3 of every 7 seconds, which means the waiting time of F5 is 3 s. The tokens of F3 are sent in seconds 4 and 5, which means the waiting time of F3 is 5 s. The token of F1 is sent in second 6, so the waiting time of F1 is 6 seconds. A stream's token is sent only after its waiting time has elapsed.
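The per-flow token-rate shares in the 8:4:2 example follow directly from normalizing the queue frequencies. A minimal sketch (the function name is an assumption):

```python
def token_rates(frequencies):
    # Each flow's share of the total token rate is its queue's allocation
    # frequency divided by the sum of all frequencies (e.g. high=8, medium=4,
    # low=2, as in the F5/F3/F1 example above).
    total = sum(frequencies.values())
    return {name: f / total for name, f in frequencies.items()}

rates = token_rates({"F5": 8, "F3": 4, "F1": 2})
print(rates)   # F5 gets 4/7, F3 gets 2/7, F1 gets 1/7 of the total token rate
```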
Step S204: the sending end device sends the data packet.
Specifically, the receiving end device allocates tokens to the flows in the first queue according to the sorting of the first queue, and after receiving the tokens, the sending end device sends a data packet to the receiving end device, where the data packet is forwarded by the switch, that is, the sending end device sends the data packet to the switch, and the switch forwards the data packet to the receiving end device. As shown in fig. 3, after the sender device receives the tokens of F1 and F4 in sequence, the sender device may send the packets of F1 and F4 to the switch by using the FIFO rule.
Step S205: the switch detects the queue length and determines a first marker in the packet based on the queue length.
Specifically, the sending end device sends a data packet to the switch; after the switch receives the data packet, the switch detects the queue length in the switch. If the queue length in the switch exceeds a preset threshold, the switch modifies a first mark in the data packets in the queue so that the first mark represents network congestion. The preset threshold may be a preset fixed value, or a variable value related to the round-trip delay of data transmission and the link bandwidth of the switch. For example, after the switch receives the data packet, it detects the queue length in the switch; if the queue is about to overflow (i.e., the queue length exceeds the preset threshold), the CE bit in the data packet is modified to 1 to represent network congestion at the current switch, where the CE bit defaults to 0. If the queue length of the switch does not exceed the preset threshold, the CE bit remains 0 (or is modified to a value other than 1) to represent that the network at the current switch is not congested.
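The switch-side marking step can be sketched as follows; the threshold value and all names are illustrative assumptions (the text allows the threshold to be fixed or derived from RTT and link bandwidth):

```python
QUEUE_THRESHOLD = 100      # packets; an assumed preset threshold for illustration

def mark_on_enqueue(packet, queue):
    # ECN-style marking: if the switch queue already exceeds the threshold,
    # set the packet's CE bit to 1 (it defaults to 0) instead of dropping it.
    if len(queue) > QUEUE_THRESHOLD:
        packet["ce"] = 1
    queue.append(packet)

queue = [{"ce": 0} for _ in range(101)]   # queue length already over threshold
pkt = {"ce": 0}
mark_on_enqueue(pkt, queue)
print(pkt["ce"])   # 1: the packet is marked as congested rather than dropped
```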
In a possible implementation manner, a plurality of switches may be included between the receiving end device and the sending end device, and a data packet sent by the sending end device is forwarded to the receiving end device through the plurality of switches. Taking two switches as an example: the sending end device sends the data packet to a first switch; after the first switch receives the data packet, it detects the queue length in the first switch. If the queue length in the first switch exceeds a preset threshold, the first switch modifies the first mark in the data packets in the queue so that the first mark represents network congestion, i.e., it modifies the CE bit in the data packet to 1 (the CE bit defaults to 0); if the queue length does not exceed the preset threshold, the CE bit is not modified. The first switch then sends the data packet with CE bit 1 to a second switch; after the second switch receives the data packet, it detects the queue length in the second switch. If the queue length in the second switch does not exceed the preset threshold, the first mark of the data packet is not modified, and the CE bit in the data packet remains 1 to represent the current network congestion; if the queue length in the second switch exceeds the preset threshold, the CE bit in the data packet is likewise set to 1.
Step S206: the switch sends the packet.
Specifically, after determining a first flag in a data packet according to the queue length, the switch sends the data packet to the receiving end device.
Step S207: and if the number of the target data packets received by the receiving end equipment in the first time period meets a first condition, reducing the first rate.
Specifically, after receiving a data packet, the receiving end device first detects the first flag in the data packet; if the first flag represents network congestion, the data packet is a target data packet. If the number of target data packets received by the receiving end device in a first period meets a first condition, the first rate at which the first device sends tokens is reduced. The first condition may be that the number of target data packets received by the receiving end device in the first period is greater than a second threshold, or that the proportion of target data packets among all data packets received by the receiving end device in the first period is greater than the second threshold. If the number of target data packets meets the first condition, the first rate is reduced, and in the next period the receiving end device sends tokens at the reduced first rate. The first period may be a preset fixed duration, or a round-trip delay (the time from when the receiving end device sends a token to when it receives the data packet corresponding to that token). For example, suppose the first condition is that the number of target data packets received in the first period is greater than the second threshold, the first period is 100 seconds, and the receiving end device receives 50 data packets within those 100 seconds, of which 20 have a first flag representing network congestion; the number of target data packets is then 20. With the second threshold set to 0, since 20 is greater than 0, the rate at which the receiving end device sends tokens is reduced.
That is, when the second threshold is 0, as long as the receiving end device receives any data packet carrying the network congestion mark in the first period, the rate at which the receiving end device sends tokens is reduced according to the number of target data packets.
In a possible implementation manner, after receiving a data packet, the receiving end device first detects the first flag in the data packet; if the first flag represents network congestion, the data packet is a target data packet, and the value of the ECN in the first period is counted. The value of the ECN is the proportion of target data packets received by the receiving end in the first period among all data packets received by the receiving end in that period; if the value of the ECN is greater than a second threshold, the rate at which the receiving end device sends tokens is reduced according to the value of the ECN. ECN is an explicit feedback mark; using ECN as the congestion indicator avoids both the delayed timeout response caused by loss-based congestion control and the low network utilization caused by delay-based congestion control. For example, if one round-trip delay is 100 seconds and the receiving end device receives 50 data packets within those 100 seconds, of which 20 have a first flag representing network congestion, the value of the ECN is 2/5 (the proportion of target data packets among all data packets received by the receiving end device in the first period).
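The "value of the ECN" described above is simply the fraction of marked packets observed in the period. A minimal sketch, where the function name and the list-of-CE-bits input format are assumptions introduced for illustration:

```python
def ecn_fraction(ce_bits) -> float:
    """Proportion of packets in the period whose CE bit (first flag)
    represents congestion, e.g. 20 marked out of 50 gives 2/5."""
    ce_bits = list(ce_bits)
    if not ce_bits:
        return 0.0  # no packets received in the period
    marked = sum(1 for ce in ce_bits if ce == 1)
    return marked / len(ce_bits)
```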
In a possible implementation manner, taking as a parameter the proportion of target data packets received by the receiving end in the first period among all data packets received by the receiving end in that period, the first rate is reduced according to a first formula, where the first formula may include:
rate_{n+1} = rate_n × (1 − α_{n+1} / (2 × weight))
wherein α_{n+1} = (1 − g) × α_n + g × f; rate_n is the first rate at which the first device sends tokens; rate_{n+1} is the updated first rate; weight is the weight of the flow in the first period; g is a parameter that can be adjusted as needed (0 < g < 1); and f is the value of the explicit feedback mark ECN, which responds to congestion from the receiver. α is a back-off step length adjusted according to the priority (flow weight) of different flows; it can ensure proportional allocation of bandwidth and realize different quality requirements.
In a possible implementation manner, the first device obtains, from the data packets received in the first period, the number of target data packets received in that period; if this number satisfies a second condition, the first rate is increased. The second condition may be that the number of target data packets received by the receiving end device in the first period is less than or equal to the second threshold, or that the proportion of target data packets among all data packets received in the first period is less than or equal to the second threshold. That is, when the number of target data packets indicates that the network is not congested, the rate at which the first device sends tokens can be increased, improving data transmission efficiency. In the next period, the receiving end device sends tokens at the increased first rate. For example, suppose the round-trip delay is 100 seconds and the receiving end device receives 50 data packets within those 100 seconds, of which 0 have a first flag representing network congestion; the number and the proportion of congested data packets are then both 0. With the second threshold preset to 0, that is, if the receiving end device receives no data packet with a congestion mark in the first period, the first rate is increased according to a second formula, where the second formula may include:
rate_{n+1} = rate_n + 1
wherein rate_n is the first rate at which the first device sends tokens, rate_{n+1} is the updated first rate, and rate_{n+1} does not exceed the link capacity of the switch. This implementation adds 1 to the first rate, and the increased first rate never exceeds the link capacity of the switch, thereby avoiding actively causing network congestion. Illustratively, an algorithmic framework of this implementation is shown in FIG. 5.
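The decrease and increase rules above can be combined into one per-period update. This is a hedged sketch: the α update and the additive increase follow the formulas in the text, while the exact form of the multiplicative decrease is a reconstruction (the patent's first formula appears here only as an image reference), assumed to scale the back-off by the flow weight so that higher-priority flows back off less.

```python
def update_alpha(alpha: float, f: float, g: float) -> float:
    # alpha_{n+1} = (1 - g) * alpha_n + g * f : moving average of the
    # ECN fraction f, with gain g (0 < g < 1), as in the text.
    return (1 - g) * alpha + g * f

def update_rate(rate: float, alpha: float, f: float, weight: float,
                threshold: float, link_capacity: float) -> float:
    """One period of the receiver's token-rate loop (illustrative).

    If the ECN fraction f exceeds the threshold (first condition), the
    rate is backed off multiplicatively -- reconstructed first formula.
    Otherwise (second condition) the rate is increased by 1, capped at
    the switch link capacity -- second formula.
    """
    if f > threshold:
        return rate * (1 - alpha / (2 * weight))
    return min(rate + 1, link_capacity)
```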
Step S208: and after receiving all the data packets, the receiving end equipment sends a confirmation instruction.
Specifically, after receiving all data packets of the flow represented by the first request, the receiving end device sends a confirmation instruction to the switch, and the switch forwards the confirmation instruction to the sending end device to indicate that processing of the first request is completed. Each token corresponds to one data packet: each time a token is allocated to the flow represented by the first request, one data packet of that flow is received. When the receiving end device has received all data packets of the flow represented by the first request, it sends the confirmation instruction, and the sending end device receives the confirmation instruction.
In the embodiment of the present application, the sending end device first sends a first request to the switch connected to it, and the switch receives the first request and forwards it to the receiving end device. After receiving the first request, the receiving end device adds the flow represented by the first request into a first queue in order of its sequence number and sends tokens to the switch according to the ordering of the first queue; each time a token is allocated to the flow represented by the first request, the sequence number of that flow is increased, so the ordering of the first queue may change. When the completed bytes of the flow represented by the first request exceed a first threshold, the flow is added into a second queue to reduce the frequency of allocating tokens to it, which reduces the queuing delay of small flows. The switch forwards each token to the sending end device, and the sending end device sends the corresponding data packet to the switch after receiving the token. After receiving the data packet, the switch detects its queue length; if the queue length exceeds a threshold, the switch judges that its network is currently congested (i.e., data packets are backing up), marks the CE bit in the data packet as 1 (the CE bit defaults to 0) to represent the current network congestion of the switch, and sends the data packet to the receiving end device. After receiving the data packet, the receiving end device detects the CE bit; if the CE bit is 1, the receiving end device records the data packet. The receiving end device counts the number of data packets whose CE bit is marked as 1 within a time period (for example, one round-trip delay) and reduces its token sending rate in the next period according to that number; if no data packet in the period carries a congestion mark, the receiving end device increases its token sending rate in the next period. According to the embodiment of the present application, different token allocation frequencies are used for different queues, which can minimize the flow completion time of small flows while guaranteeing the throughput of large flows; network congestion is reduced by detecting congestion with ECN marks and dynamically adjusting the token sending rate; and the back-off step length is adjusted according to the priorities of different flows, guaranteeing that bandwidth is allocated to large flows in proportion.
The method of the embodiment of the present application is explained in detail above; a related data transmission device according to the embodiment of the present application is provided below. The data transmission device may be an electronic device that has a partial computing function, can be connected to an intelligent terminal or various terminal devices, and exists in the form of a portable accessory. Referring to fig. 6, fig. 6 is a schematic diagram of a data transmission apparatus according to an embodiment of the present application. The data transmission apparatus 60 includes a sending unit 601, a first receiving unit 602, and a reducing unit 603, wherein,
a sending unit 601, configured to send a token according to a first rate, where the token is used for a receiving end of the token to send a data packet to the sending unit;
a first receiving unit 602, configured to receive the data packet; the data packet is forwarded through second equipment, and the data packet comprises a first mark, and the first mark represents the network congestion condition of the second equipment;
a reducing unit 603, configured to reduce the first rate according to the number of target packets if the number of target packets received in the first period meets a first condition, where a first flag of the target packet indicates network congestion.
In one possible implementation, the apparatus further includes:
the first receiving unit 602 is further configured to, before the sending unit 601 sends the token at the first rate, receive, by the first receiving unit, a first request sent by a third device, where the first request is used for the third device to request to send data to the first receiving unit;
a first ordering unit 604, configured to initialize sequence numbers of streams represented by the first request, and add the streams represented by the first request to a first queue in sequence according to the sequence numbers, where the first queue includes one or more streams, each stream in the first queue includes a respective sequence number, and the sequence numbers are used to order all the streams in the first queue, where a smaller value of a sequence number leads to an earlier ordering in the first queue;
a first allocating unit 605, configured to allocate tokens to the flows in the first queue according to the ordering of the first queue; wherein, each time a token is allocated to the flow represented by the first request, the value of the sequence number of that flow is increased, and the step of adding the flow represented by the first request into the first queue in order according to the sequence number is executed again.
In one possible implementation, the apparatus further includes:
the first ordering unit 604 is further configured to, when the completed bytes of the stream represented by the first request exceed a first threshold, add the stream represented by the first request into a second queue, and perform ordering in the second queue according to a sequence number of the stream represented by the first request, where a frequency of allocating tokens to the second queue is less than a frequency of allocating tokens to the first queue;
the first allocating unit 605 is further configured to allocate tokens to the flows in the second queue according to the ordering of the second queue; wherein, each time a token is allocated to the flow represented by the first request, the value of the sequence number of that flow is increased, and the step of sorting in the second queue according to the sequence number of the flow represented by the first request is executed again.
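The first-queue/second-queue mechanism handled by the sorting and allocating units can be sketched with two priority queues keyed by sequence number. This is a simplified illustration under stated assumptions: the names, the demotion threshold, and serving the second queue only when the first is empty (rather than at a genuinely lower token frequency, as the text describes) are choices made for brevity.

```python
import heapq

class TokenScheduler:
    """Two-queue token allocation sketch: a flow enters the first queue
    with sequence number 0; each granted token increases its sequence
    number (a smaller number means it is served earlier), and once its
    completed bytes exceed FIRST_THRESHOLD it is demoted to the second
    queue so that small flows keep a low queuing delay."""

    FIRST_THRESHOLD = 100_000  # bytes; illustrative value

    def __init__(self):
        self.first_queue = []   # heap of (sequence_number, flow_id)
        self.second_queue = []  # heap for flows past the threshold
        self.completed = {}     # flow_id -> completed bytes

    def add_flow(self, flow_id):
        self.completed[flow_id] = 0
        heapq.heappush(self.first_queue, (0, flow_id))

    def allocate_token(self, packet_size):
        """Grant one token to the head flow, then re-queue the flow
        with an increased sequence number."""
        source = self.first_queue or self.second_queue
        if not source:
            return None
        seq, flow_id = heapq.heappop(source)
        self.completed[flow_id] += packet_size
        target = (self.second_queue
                  if self.completed[flow_id] > self.FIRST_THRESHOLD
                  else self.first_queue)
        heapq.heappush(target, (seq + 1, flow_id))
        return flow_id
```

Keying the heaps on the sequence number reproduces the re-ordering described above: after a flow receives a token, its incremented sequence number pushes it behind flows that have received fewer tokens.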
In a possible implementation manner, the reducing unit specifically includes:
a counting unit 606, configured to count, according to a data packet received in a first period, a ratio of the target data packet received in the first period to all data packets received in the first period, where a rate at which the first device allocates tokens in the first period is the first rate;
the reducing unit 603 is further configured to reduce the first rate according to the ratio if the ratio of the target data packet received by the first device in the first period to all data packets received in the first period is greater than a second threshold.
In a possible implementation manner, the reducing unit 603 is further configured to reduce the first rate according to a first formula by using the ratio as a parameter, where the first formula includes:
rate_{n+1} = rate_n × (1 − α_{n+1} / (2 × weight))
wherein α_{n+1} = (1 − g) × α_n + g × f; rate_n is the first rate for sending tokens; rate_{n+1} is the reduced first rate; weight is the weight of the flow in the first period; g is an adjustable parameter (0 < g < 1); and f is the proportion of target data packets received by the first device in the first period among all data packets received in that period.
In one possible implementation, the apparatus further includes:
the increasing unit 607 is configured to increase the first rate if the number of target packets received by the first device in the first period satisfies a second condition.
In a possible implementation manner, the boosting unit 607 is further configured to boost the first rate according to a second formula, where the second formula includes:
rate_{n+1} = rate_n + 1
wherein rate_n is the first rate for sending tokens, rate_{n+1} is the increased first rate, and rate_{n+1} does not exceed the link capacity of the second device.
It should be noted that, for the functions of each functional unit in the data transmission device 60 described in the embodiment of the present application, reference may be made to the related description of step S202, step S203, and step S207 in the method embodiment described in fig. 2, and details are not described here again.
As shown in fig. 7, fig. 7 is a schematic structural diagram of another data transmission apparatus according to an embodiment of the present application. The data transmission apparatus 70 includes a second receiving unit 701, a second sorting unit 702, and a second allocating unit 703, wherein,
a second receiving unit 701, configured to receive a first request sent by a third device, where the first request is used for the third device to request to send data to the second receiving unit;
a second sorting unit 702, configured to initialize the sequence number of the flow represented by the first request, and add the flow represented by the first request into a first queue in order according to the sequence number, where the first queue includes one or more flows, each flow in the first queue has its own sequence number, and the sequence numbers are used to order all the flows in the first queue, with a smaller sequence number value meaning an earlier position in the first queue;
a second allocating unit 703, configured to allocate tokens to the flows in the first queue according to the ordering of the first queue; wherein, each time a token is allocated to the flow represented by the first request, the value of the sequence number of that flow is increased, and the step of adding the flow represented by the first request into the first queue in order according to the sequence number is executed again.
In one possible implementation, the apparatus further includes:
the second sorting unit 702 is further configured to, when the completed bytes of the stream represented by the first request exceed a first threshold, add the stream represented by the first request into a second queue, and sort in the second queue according to a sequence number of the stream represented by the first request, where a frequency of allocating tokens to the second queue is less than a frequency of allocating tokens to the first queue;
the second allocating unit 703 is further configured to allocate tokens to the flows in the second queue according to the ordering of the second queue; wherein, each time a token is allocated to the flow represented by the first request, the value of the sequence number of that flow is increased, and the step of sorting in the second queue according to the sequence number of the flow represented by the first request is executed again.
It should be noted that, for the functions of each functional unit in the data transmission device 70 described in the embodiment of the present application, reference may be made to the related description of step S203 in the embodiment of the method described in fig. 2, and details are not repeated here.
Fig. 8 is a schematic diagram of a possible hardware structure of the electronic device according to the foregoing embodiments, provided for an embodiment of the present application. As shown in fig. 8, the electronic device 800 may include: one or more processors 801, one or more memories 802, and one or more communication interfaces 803. These components may be connected by a bus 804 or otherwise, as illustrated in FIG. 8 by a bus connection. Wherein:
the communication interface 803 may be used for the electronic device 800 to communicate with other communication devices, such as other electronic devices. In particular, the communication interface 803 may be a wired interface.
The memory 802 may be coupled to the processor 801 via a bus 804 or an input/output port, and the memory 802 may be integrated with the processor 801. The memory 802 is used to store various software programs and/or sets of instructions or data. Specifically, the Memory 802 may be a Read-Only Memory (ROM) or other types of static storage devices that can store static information and instructions, a Random Access Memory (RAM) or other types of dynamic storage devices that can store information and instructions, an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical Disc storage, optical Disc storage (including Compact Disc, laser Disc, optical Disc, digital versatile Disc, blu-ray Disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to such. Memory 802 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid state storage devices. The memory 802 may store an operating system (hereinafter, referred to as a system), such as an embedded operating system like uCOS, VxWorks, RTLinux, or the like. The memory 802 may also store a network communication program that may be used to communicate with one or more additional devices, one or more user devices, one or more electronic devices. The memory may be self-contained and coupled to the processor via a bus. The memory may also be integral to the processor.
The memory 802 is used for storing application program codes for executing the above schemes, and is controlled by the processor 801 to execute. The processor 801 is used to execute application program code stored in the memory 802.
The processor 801 may be a central processing unit, a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor may also be a combination implementing computing functions, for example including one or more microprocessors, or a combination of a digital signal processor and a microprocessor.
In embodiments of the present application, the processor 801 may be configured to read and execute computer readable instructions. Specifically, the processor 801 may be configured to call a program stored in the memory 802, for example, an implementation program of the data transmission method provided in one or more embodiments of the present application on the electronic device 800 side, and execute the instructions included in the program.
It is understood that the electronic device 800 may be the electronic device 101 in the system of the data transmission method shown in fig. 1, and may be implemented as a Basic Service Set (BSS), an Extended Service Set (ESS), a mobile phone or a computer terminal, etc. The electronic device 800 shown in fig. 8 is only one implementation manner of the embodiment of the present application, and in practical applications, the electronic device 800 may further include more or less components, which is not limited herein. For specific implementation of the electronic device 800, reference may be made to the foregoing description in the embodiment of the method shown in fig. 2, and details are not repeated here.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, and may specifically be a processor in the computer device) to execute all or part of the steps of the above-mentioned method of the embodiments of the present application. The storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a magnetic disk, an optical disk, a Read-only memory (ROM) or a Random Access Memory (RAM).
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (22)

1. A method of data transmission, comprising:
the method comprises the steps that a first device sends a token according to a first rate, and the token is used for a receiving end of the token to send a data packet to the first device;
the first device receiving the data packet; the data packet is forwarded through second equipment, and the data packet comprises a first mark, and the first mark represents the network congestion condition of the second equipment;
and if the number of the target data packets received by the first equipment in the first period meets a first condition, reducing the first rate according to the number of the target data packets, wherein the first mark of the target data packet represents network congestion.
2. The method of claim 1, wherein before the first device sends the token at the first rate, further comprising:
the first device receives a first request sent by a third device, wherein the first request is used for the third device to request to send data to the first device;
the first device initializes a sequence number of a flow represented by the first request, and sequentially adds the flow represented by the first request into a first queue according to the sequence number, wherein the first queue comprises one or more flows, each flow in the first queue comprises a respective sequence number, and the sequence numbers are used for sequencing all the flows in the first queue, and the sequencing in the first queue is earlier when the value of the sequence number is smaller;
the first device allocates tokens to the flows in the first queue according to the ordering of the first queue; wherein, each time a token is allocated to the flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of adding the flow represented by the first request into the first queue in order according to the sequence number is executed.
3. The method of claim 2, further comprising:
when the completed bytes of the stream represented by the first request exceed a first threshold, the first device adds the stream represented by the first request into a second queue, and sorts the stream represented by the first request in the second queue according to the sequence number of the stream represented by the first request, wherein the frequency of distributing tokens to the second queue by the first device is less than the frequency of distributing tokens to the first queue;
the first device allocates tokens to the flows in the second queue according to the ordering of the second queue; wherein, each time a token is allocated to the flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of sorting in the second queue according to the sequence number of the flow represented by the first request is executed.
4. The method of claim 1, wherein the reducing the first rate according to the number of target packets if the number of target packets received by the first device in the first period satisfies a first condition comprises:
the first device calculates the proportion of the target data packet received by the first device in the first period to all data packets received by the first device in the first period according to the data packets received by the first device in the first period, wherein the rate of the first device for distributing tokens in the first period is the first rate;
if the proportion of the target data packet received by the first device in the first period to all the data packets received by the first device in the first period is greater than a second threshold, the first device reduces the first rate according to the proportion.
5. The method of claim 4, wherein the first device decreases the first rate according to the ratio, comprising:
the first device decreases the first rate according to a first formula with the ratio as a parameter, the first formula including:
rate_{n+1} = rate_n × (1 − α_{n+1} / (2 × weight))
wherein α_{n+1} = (1 − g) × α_n + g × f, rate_n is the first rate at which the first device sends tokens, rate_{n+1} is the reduced first rate, weight is the weight of the flow in the first period, g is an adjustable parameter (0 < g < 1), and f is the proportion of the target data packets received by the first device in the first period among all data packets received in the first period.
6. The method of claim 1, further comprising:
and if the number of the target data packets received by the first device in the first period meets a second condition, increasing the first rate.
7. The method of claim 6, wherein said increasing said first rate comprises:
the first device increasing the first rate according to a second formula, the second formula comprising:
rate_{n+1} = rate_n + 1

wherein rate_n is the first rate at which the first device sends tokens, rate_{n+1} is the first rate after the increase, and rate_{n+1} does not exceed a link capacity of the second device.
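Claims 6-7 pair the decrease with a simple additive increase: when the marked-packet count satisfies the second condition, the rate grows by one unit per period, bounded by the second device's link capacity. A minimal sketch; clamping with `min` is an assumption about how "not exceeding the link capacity" is enforced.

```python
def increase_rate(rate_n, link_capacity):
    """Claim 7's additive increase, rate_{n+1} = rate_n + 1, clamped so
    the raised rate never exceeds the link capacity of the second
    (forwarding) device."""
    return min(rate_n + 1, link_capacity)
```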
8. A method of data transmission, comprising:
a first device receives a first request sent by a third device, wherein the first request is used by the third device to request to send data to the first device;
the first device initializes a sequence number of a flow represented by the first request, and adds the flow represented by the first request to a first queue in order according to the sequence number, wherein the first queue comprises one or more flows, each flow in the first queue has a respective sequence number, the sequence numbers are used to order all the flows in the first queue, and a smaller sequence number value corresponds to an earlier position in the first queue;
the first device allocates tokens to the flows in the first queue according to the ordering of the first queue; wherein each time a token is allocated to the flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of adding the flow represented by the first request to the first queue in order according to the sequence number is performed again.
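One plausible reading of the queue in claim 8 is a priority queue keyed on sequence numbers: the head flow gets the next token, its sequence number is bumped, and it is re-inserted in order, which yields a round-robin-like allocation among flows. The class below is an illustrative Python sketch under that reading; the names and the bump step are assumptions, not language from the patent.

```python
import heapq

class TokenQueue:
    """Flows kept ordered by sequence number (smaller number = earlier
    position). Each token grant bumps the head flow's sequence number,
    pushing it back in the order."""

    def __init__(self):
        self._heap = []  # entries are (sequence_number, flow_id)

    def add_flow(self, flow_id, seq=0):
        """Initialize the flow's sequence number and insert in order."""
        heapq.heappush(self._heap, (seq, flow_id))

    def grant_token(self, step=1):
        """Allocate one token to the head flow, increase its sequence
        number, and re-insert it according to the new number."""
        seq, flow_id = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (seq + step, flow_id))
        return flow_id
```

With two flows added at sequence number 0, successive grants alternate between them, approximating per-flow fairness.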
9. The method of claim 8, further comprising:
when the completed bytes of the flow represented by the first request exceed a first threshold, the first device adds the flow represented by the first request to a second queue and orders it in the second queue according to the sequence number of the flow represented by the first request, wherein the frequency at which the first device allocates tokens to the second queue is lower than the frequency at which the first device allocates tokens to the first queue;
the first device allocates tokens to the flows in the second queue according to the ordering of the second queue; wherein each time a token is allocated to the flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of ordering the flow in the second queue according to the sequence number of the flow represented by the first request is performed again.
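Claims 8 and 9 together describe a two-level scheduler: new flows sit in a frequently served first queue, and flows whose completed bytes pass the first threshold migrate to a second queue that receives tokens less often, which favors short flows. The sketch below is one hypothetical Python rendering; the threshold, the serve ratio, and the bytes-per-token figure are illustrative, not values from the patent.

```python
import heapq

class TwoLevelScheduler:
    """Two queues of (sequence_number, flow_id) entries. The first queue
    is served every round; the second queue is served only every
    `ratio`-th round. A flow migrates to the second queue once its
    completed bytes exceed `byte_threshold`."""

    def __init__(self, byte_threshold=1000, ratio=4):
        self.q1, self.q2 = [], []
        self.bytes_done = {}
        self.byte_threshold = byte_threshold
        self.ratio = ratio
        self.round = 0

    def add_flow(self, flow_id):
        self.bytes_done[flow_id] = 0
        heapq.heappush(self.q1, (0, flow_id))  # initialized sequence number

    def grant_token(self, bytes_per_token=100):
        self.round += 1
        # Serve the second queue less frequently, or when q1 is empty.
        serve_q2 = self.q2 and (self.round % self.ratio == 0 or not self.q1)
        queue = self.q2 if serve_q2 else self.q1
        if not queue:
            return None
        seq, fid = heapq.heappop(queue)
        self.bytes_done[fid] += bytes_per_token
        # Migrate once the byte threshold is crossed, keeping the flow's
        # sequence-number ordering in the destination queue.
        dest = self.q2 if self.bytes_done[fid] > self.byte_threshold else queue
        heapq.heappush(dest, (seq + 1, fid))
        return fid
```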
10. A data transmission apparatus, comprising:
a sending unit, configured to send a token at a first rate, wherein the token is used by a receiving end of the token to send a data packet to the sending unit;
a first receiving unit, configured to receive the data packet; the data packet is forwarded through second equipment, and the data packet comprises a first mark, and the first mark represents the network congestion condition of the second equipment;
a reducing unit, configured to reduce the first rate according to the number of target data packets if the number of target data packets received in a first period satisfies a first condition, wherein the first mark of a target data packet indicates network congestion.
11. The apparatus of claim 10, further comprising:
the first receiving unit is further configured to receive, before the sending unit sends the token at the first rate, a first request sent by a third device, wherein the first request is used by the third device to request to send data to the first receiving unit;
a first ordering unit, configured to initialize a sequence number of the flow represented by the first request and add the flow represented by the first request to a first queue in order according to the sequence number, wherein the first queue comprises one or more flows, each flow in the first queue has a respective sequence number, the sequence numbers are used to order all the flows in the first queue, and a smaller sequence number value corresponds to an earlier position in the first queue;
a first allocation unit, configured to allocate tokens to the flows in the first queue according to the ordering of the first queue; wherein each time a token is allocated to the flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of adding the flow represented by the first request to the first queue in order according to the sequence number is performed again.
12. The apparatus of claim 11, further comprising:
the first ordering unit is further configured to, when the completed bytes of the flow represented by the first request exceed a first threshold, add the flow represented by the first request to a second queue and order it in the second queue according to the sequence number of the flow represented by the first request, wherein the frequency of allocating tokens to the second queue is lower than the frequency of allocating tokens to the first queue;
the first allocation unit is further configured to allocate tokens to the flows in the second queue according to the ordering of the second queue; wherein each time a token is allocated to the flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of ordering the flow in the second queue according to its sequence number is performed again.
13. The apparatus according to claim 10, wherein the reducing unit specifically includes:
a counting unit, configured to count, according to a data packet received in a first period, a ratio of the target data packet received in the first period to all data packets received in the first period, where a rate at which the first device allocates tokens in the first period is the first rate;
the reducing unit is further configured to reduce the first rate according to the proportion if the proportion of target data packets received by the first device in the first period to all data packets received in the first period is greater than a second threshold.
14. The apparatus of claim 13, wherein the reducing unit is further configured to reduce the first rate according to a first formula using the ratio as a parameter, the first formula comprising:
rate_{n+1} = rate_n × (1 − α_{n+1} × weight / 2)

wherein α_{n+1} = (1 − g)·α_n + g·f, rate_n is the first rate at which tokens are sent, rate_{n+1} is the reduced first rate, weight is the weight of a stream in the first period, g is an adjustable parameter (0 < g < 1), and f is the proportion of the target data packets received by the first device in the first period to all data packets received in the first period.
15. The apparatus of claim 10, further comprising:
a raising unit, configured to raise the first rate if the number of target data packets received by the first device in the first period meets a second condition.
16. The apparatus of claim 15, wherein the boosting unit is further configured to boost the first rate according to a second formula, the second formula comprising:
rate_{n+1} = rate_n + 1

wherein rate_n is the first rate at which tokens are sent, rate_{n+1} is the first rate after the increase, and rate_{n+1} does not exceed a link capacity of the second device.
17. A data transmission apparatus, comprising:
a second receiving unit, configured to receive a first request sent by a third device, where the first request is used for the third device to request to send data to the second receiving unit;
a second ordering unit, configured to initialize a sequence number of the flow represented by the first request and add the flow represented by the first request to a first queue in order according to the sequence number, wherein the first queue comprises one or more flows, each flow in the first queue has a respective sequence number, the sequence numbers are used to order all the flows in the first queue, and a smaller sequence number value corresponds to an earlier position in the first queue;
a second allocating unit, configured to allocate tokens to the flows in the first queue according to the ordering of the first queue; wherein each time a token is allocated to the flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of adding the flow represented by the first request to the first queue in order according to the sequence number is performed again.
18. The apparatus of claim 17, further comprising:
the second ordering unit is further configured to, when the completed bytes of the flow represented by the first request exceed a first threshold, add the flow represented by the first request to a second queue and order it in the second queue according to the sequence number of the flow represented by the first request, wherein the frequency of allocating tokens to the second queue is lower than the frequency of allocating tokens to the first queue;
the second allocating unit is further configured to allocate tokens to the flows in the second queue according to the ordering of the second queue; wherein each time a token is allocated to the flow represented by the first request, the value of the sequence number of the flow represented by the first request is increased, and the step of ordering the flow in the second queue according to its sequence number is performed again.
19. A terminal device, comprising a processor, a memory and a communication interface, wherein the memory is configured to store data transmission program code and the processor is configured to invoke the data transmission program code to perform the method of any one of claims 1-9.
20. A chip system, comprising at least one processor, a memory, and an interface circuit, wherein the memory, the interface circuit, and the at least one processor are interconnected by a line, and the memory stores instructions; when the instructions are executed by the at least one processor, the method of any one of claims 1-9 is implemented.
21. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1-9.
22. A computer program, characterized in that the computer program comprises instructions which, when executed by a computer, cause the computer to carry out the method according to any one of claims 1-9.
CN201911043193.8A 2019-10-28 2019-10-28 Data transmission method and related equipment Active CN112737970B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911043193.8A CN112737970B (en) 2019-10-28 2019-10-28 Data transmission method and related equipment

Publications (2)

Publication Number Publication Date
CN112737970A true CN112737970A (en) 2021-04-30
CN112737970B CN112737970B (en) 2024-06-14

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230120869A1 (en) * 2020-03-05 2023-04-20 Nippon Telegraph And Telephone Corporation Network management systems, edge devices, network management devices, and programs
CN116614445A (en) * 2023-07-20 2023-08-18 苏州仰思坪半导体有限公司 Data transmission method and related device thereof

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6424624B1 (en) * 1997-10-16 2002-07-23 Cisco Technology, Inc. Method and system for implementing congestion detection and flow control in high speed digital network
CN101043457A (en) * 2007-03-21 2007-09-26 华为技术有限公司 Packet wideband monitoring method and its apparatus, packet discarding probability tagging device
CN101478486A (en) * 2009-01-22 2009-07-08 华为技术有限公司 Method, equipment and system for switch network data scheduling
CN101599905A (en) * 2009-06-30 2009-12-09 中兴通讯股份有限公司 A kind of method, Apparatus and system of realizing that traffic shaping token adds
CN101729386A (en) * 2008-11-03 2010-06-09 华为技术有限公司 Flow control method and device based on token scheduling
CN102422671A (en) * 2009-05-12 2012-04-18 高通股份有限公司 Method and apparatus for managing congestion in a wireless system
CN104917692A (en) * 2015-06-26 2015-09-16 杭州华三通信技术有限公司 Method and device for distributing tokens
CN105791155A (en) * 2014-12-24 2016-07-20 深圳市中兴微电子技术有限公司 Congestion flow management method and apparatus
CN106843170A (en) * 2016-11-30 2017-06-13 浙江中控软件技术有限公司 Method for scheduling task based on token
US20190116122A1 (en) * 2018-12-05 2019-04-18 Intel Corporation Techniques to reduce network congestion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant