CN113162869B - Transmission flow control method, device and storage medium - Google Patents


Info

Publication number
CN113162869B
Authority
CN
China
Prior art keywords
bandwidth
edge
transmission
computing node
edge computing
Prior art date
Legal status
Active
Application number
CN202110547626.4A
Other languages
Chinese (zh)
Other versions
CN113162869A (en)
Inventor
邓旻昊
徐家骏
Current Assignee
Beijing Anxin Zhitong Technology Co ltd
Original Assignee
Beijing Anxin Zhitong Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Anxin Zhitong Technology Co., Ltd.
Priority to CN202110547626.4A
Publication of CN113162869A
Application granted
Publication of CN113162869B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/215: Flow control; Congestion control using token-bucket

Landscapes

  • Engineering & Computer Science
  • Computer Networks & Wireless Communication
  • Signal Processing
  • Data Exchanges In Wide-Area Networks

Abstract

The application discloses a transmission flow control method, a device, and a storage medium. The method controls a first edge computing node that transmits data to a second edge computing node within a predetermined time interval, where the data travels through a plurality of edge transmission nodes placed on different transmission paths. The method includes: determining a guidance bandwidth from the transmission bandwidth of data transmitted from the first edge computing node to the second edge computing node, where the guidance bandwidth indicates the total bandwidth to be allocated across the plurality of edge transmission nodes; determining the channel bandwidth allocated to each edge transmission node from the guidance bandwidth and the nodes' data indexes; and determining each node's token count from its channel bandwidth, where the token counts control the transmission traffic of the edge transmission nodes.

Description

Transmission flow control method, device and storage medium
Technical Field
The present application relates to the field of data transmission technologies, and in particular, to a method and an apparatus for controlling transmission traffic, and a storage medium.
Background
In recent years, the rapid growth of live webcast video, VR/AR, sports games, video conferencing, big data, and 4K high-definition video has pushed millisecond-level real-time audio and video communication to the forefront. Users expect the best possible online experience: experience and loyalty are no longer measured in minutes or seconds but in milliseconds, so the demands on actual transmission performance keep rising. On an IP network, software can choose between only two transport protocols at the lowest controllable layer, TCP and UDP. Because of the real-time requirement, UDP is the only viable choice in the scenarios above. However, how to build an application-layer protocol and algorithms on top of UDP that guarantee low latency, high stability, and high transmission quality is a problem many vendors are still researching.
Network bandwidth is limited: sending too many packets at once causes network congestion and harms data transmission. Real-time audio and video systems try to deliver each second's stream to the peer within that same second, and when the network is poor they reduce the next second's sending volume by instructing the output source to emit a lower-quality stream. The sending rate of the data-transmitting end therefore must be controlled so that traffic is emitted at a reasonable speed.
In view of the above problems in the prior art, no effective solution has yet been proposed for how to build an application-layer protocol over UDP that guarantees low latency, high stability, and high quality of data transmission, or for how to calculate the transmission traffic of data transmitted under such a protocol.
Disclosure of Invention
Embodiments of the present disclosure provide a transmission flow control method, a device, and a storage medium, to at least solve the prior-art problems of how to build an application-layer protocol over UDP that guarantees low latency, high stability, and high quality of data transmission, and how to calculate the transmission traffic of data transmitted under that protocol.
According to an aspect of the embodiments of the present disclosure, there is provided a method for controlling the transmission flow of a first edge computing node that transmits data to a second edge computing node within a predetermined time interval, where the data travels through a plurality of edge transmission nodes placed on different transmission paths. The method includes: determining a guidance bandwidth from the transmission bandwidth of data transmitted from the first edge computing node to the second edge computing node, where the guidance bandwidth indicates the total bandwidth to be allocated across the plurality of edge transmission nodes; determining the channel bandwidth allocated to each edge transmission node from the guidance bandwidth and the nodes' data indexes; and determining each node's token count from its channel bandwidth, where the token counts control the transmission traffic of the edge transmission nodes.
According to another aspect of the embodiments of the present disclosure, there is also provided a storage medium including a stored program, wherein the method of any one of the above is performed by a processor when the program is executed.
According to another aspect of the embodiments of the present disclosure, there is also provided a transmission flow control device for controlling a first edge computing node that transmits data to a second edge computing node within a predetermined time interval, where the data travels through a plurality of edge transmission nodes placed on different transmission paths. The device includes: a first determining module, configured to determine a guidance bandwidth from the transmission bandwidth of data transmitted by the first edge computing node to the second edge computing node, where the guidance bandwidth indicates the total bandwidth to be allocated across the plurality of edge transmission nodes; a second determining module, configured to determine the channel bandwidth allocated to each edge transmission node from the guidance bandwidth and the nodes' data indexes; and a third determining module, configured to determine each node's token count from its channel bandwidth, where the token counts control the transmission traffic of the edge transmission nodes.
According to another aspect of the embodiments of the present disclosure, there is also provided a transmission flow control device for controlling a first edge computing node that transmits data to a second edge computing node within a predetermined time interval, where the data travels through a plurality of edge transmission nodes placed on different transmission paths. The device includes a processor and a memory coupled to the processor, the memory providing the processor with instructions for the following processing steps: determining a guidance bandwidth from the transmission bandwidth of data transmitted from the first edge computing node to the second edge computing node, where the guidance bandwidth indicates the total bandwidth to be allocated across the plurality of edge transmission nodes; determining the channel bandwidth allocated to each edge transmission node from the guidance bandwidth and the nodes' data indexes; and determining each node's token count from its channel bandwidth, where the token counts control the transmission traffic of the edge transmission nodes.
Therefore, according to this embodiment, multi-endpoint data transmission is realized through multiple paths, ensuring low latency, high stability, and high quality of data transmission. The transmission bandwidth from the first edge computing node to the second edge computing node determines a guidance bandwidth for the plurality of edge transmission nodes; the first edge computing node then allocates a channel bandwidth to each edge transmission node according to its data indexes, and finally assigns tokens to the edge transmission nodes according to the allocated channel bandwidths. In this way each edge transmission node receives an appropriate number of tokens, achieving the technical effect of controlling the transmission traffic of the edge transmission nodes through the token counts. This solves the prior-art problems of how to build an application-layer protocol over UDP that guarantees low latency, high stability, and high quality of data transmission, and how to calculate the transmission traffic under that protocol.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the disclosure and together with the description serve to explain the disclosure and not to limit the disclosure. In the drawings:
fig. 1 is a hardware configuration block diagram of a computing device for implementing the method according to embodiment 1 of the present disclosure;
fig. 2 is a schematic diagram of a system for multipath data transmission according to embodiment 1 of the present disclosure;
fig. 3 is a schematic flow chart of a control method of transmission traffic according to the first aspect of embodiment 1 of the present disclosure;
fig. 4 is a schematic diagram of a control device for transmission flow according to embodiment 2 of the present disclosure; and
fig. 5 is a schematic diagram of a control device for transmission flow according to embodiment 3 of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings. It is to be understood that the described embodiments are merely some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art from the disclosed embodiments without creative effort shall fall within the protection scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in other sequences than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the nouns and terms appearing in the description of the embodiments of the present disclosure are explained as follows:
ECU (edge computing node): a service platform built at the network edge near users, providing storage, computing, and network resources and sinking key service applications to the edge of the access network, so as to reduce the bandwidth and latency losses caused by network transmission and multistage forwarding.
ERU (edge transmission node): a transmission-oriented relay service (Relay), one of the sub-services, which processes and relays transmission data and establishes the connection nodes and channels for end-to-end data transmission.
Uplink (ascending) of the ECU: the traffic emitted by the ECU, or the path direction of that emitted traffic.
Downlink (descending) of the ECU: the traffic received by the ECU, or the path direction of that received traffic.
Channel coding: unlike source coding (e.g. audio MP3/Opus, video H.264/H.265), which mainly aims at compressing the source data, channel coding aims at countering the instability of the transmission channel.
RTT (Round-Trip Time): the round-trip (loopback) time.
NASMT: NeuVision Asymmetric Simultaneous Multipath Transmission, described in a related patent application, which is incorporated herein by reference.
Example 1
According to the present embodiment, an embodiment of a transmission flow control method is provided. It should be noted that the steps shown in the flowchart of the figure may be executed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps may be executed in an order different from that shown or described herein.
The method embodiments provided by the present embodiment may be executed in a mobile terminal, a computer terminal, a server, or a similar computing device. Fig. 1 shows a hardware configuration block diagram of a computing device for implementing the transmission flow control method. As shown in fig. 1, the computing device may include one or more processors (which may include, but are not limited to, processing devices such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory for storing data, and a transmission device for communication functions. The computing device may additionally include: a display, an input/output (I/O) interface, a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the electronic device. For example, the computing device may include more or fewer components than shown in fig. 1, or have a different configuration.
It should be noted that the one or more processors and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuitry may be a single, stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computing device. As referred to in the disclosed embodiments, the data processing circuitry acts as a kind of processor control (e.g., controlling the selection of a variable-resistance termination path connected to the interface).
The memory may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the control method of transmission traffic in the embodiments of the present disclosure, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, that is, implements the control method of transmission traffic of the application program. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory may further include memory located remotely from the processor, which may be connected to the computing device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used for receiving or transmitting data via a network. Specific examples of such networks may include wireless networks provided by communication providers of the computing devices. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computing device.
It should be noted here that, in some alternative embodiments, the computing device shown in fig. 1 may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both. Fig. 1 is only one specific example, intended to illustrate the types of components that may be present in the computing device described above.
Fig. 2 is a schematic diagram of a system for multipath data transmission according to the present embodiment. Referring to fig. 2, the system includes: a plurality of edge computing nodes, ECU A and ECU B (the edge computing nodes are not limited to two; two are shown only as an example), and a plurality of edge transmission nodes, ERU R1 to Rn, where the edge computing nodes correspond to terminal nodes. It should be noted that the hardware structure above is applicable to both the edge computing nodes and the edge transmission nodes in the system.
In addition, as shown in fig. 2, when N edge computing nodes need to transmit data (such as video) to each other, they first negotiate a group of ERUs (edge transmission nodes) and then use these ERUs to communicate simultaneously. Under the protocol, traffic is automatically distributed across multiple ERUs: it does not all follow a single path at a time, nor is the data to be transmitted copied into N identical streams with each path carrying the same data. Fig. 2 illustrates the case N = 2, but the transmission technique/algorithm set forth in this application does not limit the number of terminals; that is, it supports a multi-person conference in which multiple participants transmit sound and video to each other. The only requirement is that the N participating parties negotiate a consistent list of ERUs to use.
Under the above operating environment, according to the first aspect of the present embodiment, a method for controlling transmission traffic is provided, and the method is implemented by the edge computing node and the edge transmission node shown in fig. 2. Fig. 3 shows a flow diagram of the method, which, with reference to fig. 3, comprises:
s302: determining a guidance bandwidth according to a transmission bandwidth of data transmitted from a first edge computing node to a second edge computing node, wherein the guidance bandwidth is used for indicating a total bandwidth of bandwidths allocated to a plurality of edge transmission nodes;
s304: determining channel bandwidths distributed by a plurality of edge transmission nodes according to the guide bandwidths and the data indexes of the plurality of edge transmission nodes; and
s306: determining the number of tokens of a plurality of edge transmission nodes according to the channel bandwidth, wherein the number of tokens is used for controlling the transmission flow of the plurality of edge transmission nodes.
As described in the background, on an IP network software can choose between only two transport protocols at the lowest controllable layer, TCP and UDP, and because of the real-time requirement UDP is the only viable choice in the scenarios above. However, how to build an application-layer protocol and algorithms on top of UDP that guarantee low latency, high stability, and high transmission quality is a problem many vendors are still researching. Network bandwidth is limited: sending too many packets at once causes network congestion and harms data transmission. Real-time audio and video systems try to deliver each second's generated audio and video stream to the peer within that same second, and when the network is poor they reduce the next second's sending volume by instructing the output source to emit a lower-quality stream. The sending rate of the data-transmitting end therefore must be controlled so that traffic is emitted at a reasonable speed.
In view of this, referring to fig. 2 and fig. 3, the transmission flow control method provided in this embodiment of the application controls the transmission traffic of a plurality of edge transmission nodes through which, for example, a first edge computing node A transmits data to a second edge computing node B (ERU R1 and ERU R2; the nodes are not limited to the two shown as examples). The method provided by the present application is not limited to controlling the transmission flow of data transmitted from the first edge computing node A to the second edge computing node B; it may also control the transmission flow of data transmission between other edge computing nodes. The transmission flow is controlled at intervals, that is, once every preset time period, where the preset time period may be, but is not limited to, 300 ms.
In a case where the first edge computing node A transmits data to the second edge computing node B through the plurality of edge transmission nodes within a predetermined time and the traffic of those nodes must be controlled, the first edge computing node A first determines, according to the transmission bandwidth of the data from the first edge computing node to the second edge computing node, a guidance bandwidth indicating the total bandwidth to be allocated across the plurality of edge transmission nodes (S302).
In particular, bandwidth is allocated to the plurality of edge transmission nodes through the guidance bandwidth. The first edge computing node A may first determine the transmission bandwidth to the second edge computing node B within the predetermined time period, and then derive from that transmission bandwidth the guidance bandwidth used to allocate bandwidth to the plurality of edge transmission nodes. The transmission bandwidth may also be adjusted before being used as the guidance bandwidth for allocating bandwidth to the plurality of edge transmission nodes.
Further, the first edge computing node A may determine the channel bandwidth allocated to each of the plurality of edge transmission nodes according to the guidance bandwidth and the data indexes of the edge transmission nodes (S304).
Specifically, the plurality of edge transmission nodes may be regarded as virtual transmission channels, and the data indexes of each edge transmission node differ; for example, each node has its own transmission delay and packet loss rate. The first edge computing node A may then distribute the guidance bandwidth to each edge transmission node according to that node's data indexes, where the sum of the bandwidths allocated to the plurality of edge transmission nodes does not exceed the guidance bandwidth. Allocating bandwidth according to each node's data indexes in this way makes it convenient for different edge transmission nodes to later carry the kinds of data for which they are suited.
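The allocation step above can be sketched as a weighting over each node's data indexes. The weighting formula below (favoring low delay and low packet loss) and the metric names are illustrative assumptions; the patent does not fix a concrete formula.

```python
def allocate_channel_bandwidth(guidance_bw, metrics):
    """Split a guidance bandwidth across edge transmission nodes (ERUs).

    guidance_bw: total bandwidth budget (e.g. bits per second)
    metrics: {eru_id: {"delay_ms": float, "loss": float}} per-node data indexes
    Returns {eru_id: channel_bandwidth}; the sum never exceeds guidance_bw.
    """
    # Heuristic weight: penalize high delay and high packet loss.
    weights = {
        eru: (1.0 - m["loss"]) / max(m["delay_ms"], 1.0)
        for eru, m in metrics.items()
    }
    total = sum(weights.values())
    # Floor each share so rounding can never push the sum above the budget.
    return {eru: int(guidance_bw * w / total) for eru, w in weights.items()}
```

Flooring each share keeps the total at or below the guidance bandwidth, matching the constraint that the allocated bandwidths must not exceed it.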
Further, according to the channel bandwidths, the number of tokens of each of the plurality of edge transmission nodes is determined, where the number of tokens controls the transmission traffic of the edge transmission nodes (S306).
In particular, the application controls traffic through a token-bucket algorithm, which here may be a negative-feedback sliding-window token bucket. The first edge computing node A may compute the number of tokens for each edge transmission node from the channel bandwidth allocated to that node, and then control each node's traffic according to its token count.
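A minimal sketch of such a limiter follows. The patent names a negative-feedback sliding-window token bucket without spelling out its internals, so the feedback rule here (scaling each interval's refill by the average packet loss over a sliding window of recent observations) is an assumption.

```python
import collections

class SlidingWindowTokenBucket:
    """Token-bucket sketch with a negative-feedback refill (assumed design)."""

    def __init__(self, channel_bw_bps, interval_s=0.3, window=5):
        self.rate = channel_bw_bps / 8.0          # bytes per second
        self.interval = interval_s
        self.capacity = self.rate * interval_s    # one interval's worth of bytes
        self.tokens = self.capacity
        self.loss_window = collections.deque(maxlen=window)

    def refill(self, observed_loss):
        """Called once per control interval with the latest loss ratio."""
        self.loss_window.append(observed_loss)
        avg_loss = sum(self.loss_window) / len(self.loss_window)
        # Negative feedback: the more loss seen recently, the fewer tokens added.
        added = self.rate * self.interval * (1.0 - avg_loss)
        self.tokens = min(self.capacity, self.tokens + added)

    def try_send(self, nbytes):
        """Consume tokens for a packet; False means the send must wait."""
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False
```

`try_send` is called per packet, while `refill` runs once per control interval (e.g. every 300 ms) with the latest loss measurement.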
Therefore, multi-endpoint data transmission is realized through multiple paths, guaranteeing low latency, high stability, and high quality of data transmission. The transmission bandwidth from the first edge computing node A to the second edge computing node B determines a guidance bandwidth for the plurality of edge transmission nodes; the first edge computing node A then allocates a channel bandwidth to each edge transmission node according to its data indexes, and finally assigns tokens to the edge transmission nodes according to the allocated channel bandwidths. Each edge transmission node thereby receives an appropriate number of tokens, achieving the technical effect of controlling its transmission traffic through the token count. This further solves the prior-art problems of how to build an application-layer protocol over UDP that guarantees low latency, high stability, and high quality of data transmission, and how to calculate the transmission traffic under that protocol.
In addition, data among the edge computing nodes in the application can be shared, so that each edge computing node can obtain the data of the other edge computing nodes.
In addition, fountain codes can be used for the channel coding in data transmission.
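As a toy illustration of the fountain-code idea mentioned above, the sketch below XOR-combines random subsets of source blocks (LT-code style) and decodes by peeling degree-one packets. It is illustrative only: a real implementation would draw degrees from a robust-soliton distribution and operate on byte buffers rather than small integers.

```python
import random

def fountain_encode(blocks, n_packets, seed=0):
    """Emit n_packets, each the XOR of a random subset of source blocks."""
    rng = random.Random(seed)
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        degree = rng.randint(1, k)  # real codes draw from a soliton distribution
        idxs = frozenset(rng.sample(range(k), degree))
        payload = 0
        for i in idxs:
            payload ^= blocks[i]
        packets.append((idxs, payload))
    return packets

def fountain_decode(packets, k):
    """Peel degree-one packets until all k source blocks are known (or stall)."""
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for idxs, payload in packets:
            unknown = set(idxs) - known.keys()
            if len(unknown) == 1:
                i = unknown.pop()
                value = payload
                for j in idxs:
                    if j != i:
                        value ^= known[j]
                known[i] = value
                progress = True
    return [known.get(i) for i in range(k)]
```

The receiver needs only *enough* packets, not specific ones, which is what makes fountain codes attractive on unstable multipath channels.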
Optionally, before the operation of determining the guidance bandwidth according to the transmission bandwidth of data transmitted from the first edge computing node to the second edge computing node, the method further includes: determining the total uplink bandwidth of the first edge computing node relative to the plurality of edge transmission nodes according to information about the data packets received by the plurality of edge transmission nodes from the first edge computing node within a predetermined time period; determining the total downlink bandwidth of the second edge computing node relative to the plurality of edge transmission nodes according to information about the data packets received by the second edge computing node from the plurality of edge transmission nodes within the predetermined time period; and determining the transmission bandwidth of the data transmitted from the first edge computing node to the second edge computing node according to the total uplink bandwidth and the total downlink bandwidth.
When the real-time transmission bandwidth of data transmitted between edge computing nodes needs to be calculated (for example, when a first edge computing node A transmits data to a second edge computing node B), referring to fig. 2, the first edge computing node A may first determine its total uplink bandwidth relative to the plurality of edge transmission nodes according to information about the data packets that the plurality of edge transmission nodes received from it within a predetermined time period (S302).
Specifically, referring to fig. 2, the first edge computing node A transmits data to the second edge computing node B through the plurality of edge transmission nodes, so each data packet sent by the first edge computing node A passes through the edge transmission nodes before reaching the second edge computing node B. The edge transmission nodes feed information about the received data packets back to the first edge computing node A, which then computes its total uplink bandwidth from that feedback. In addition, data among the edge computing nodes is shared, so any edge computing node can compute the bandwidth of other nodes as long as it has the data information. In this way, the first edge computing node A can compute the total uplink bandwidth of the multi-path (multiple edge transmission node) data transmission process.
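Given such feedback, the uplink total reduces to a sum of per-ERU byte counts over the measurement window. The report layout below is an assumed simplification of the "relevant information" the patent refers to.

```python
def total_uplink_bandwidth(reports, window_s):
    """Total uplink bandwidth (bits/s) of the sender across all ERUs.

    reports: {eru_id: bytes_received_in_window}, as fed back by each edge
    transmission node; window_s: measurement window length in seconds.
    """
    total_bytes = sum(reports.values())
    return total_bytes * 8 / window_s
```

The same calculation, applied at the receiver to the packets arriving from the ERUs, yields the total downlink bandwidth described below.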
The first edge computing node A may channel-code the application-layer data to be transmitted (audio, video, or custom data) according to the NASMT procedure to generate a plurality of data packets. The packet allocation ratio for each ERU is then determined according to the procedure specified by NASMT, after which the ERU used by each packet is determined.
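One common way to turn per-ERU allocation ratios into a per-packet choice is weighted round-robin. The sketch below assumes integer weights and is not taken from the NASMT specification, which this application only references.

```python
def assign_packets_to_erus(n_packets, weights):
    """Assign coded packets to ERUs by weighted round-robin.

    weights: {eru_id: integer allocation weight} (e.g. 7 and 3 for a 70/30 split)
    Returns a list where element i is the ERU carrying packet i.
    """
    total = sum(weights.values())
    assignment = []
    credit = {eru: 0 for eru in weights}
    for _ in range(n_packets):
        # Each ERU accumulates credit at its weight; the ERU with the most
        # credit carries the next packet and pays the total back.
        for eru, w in weights.items():
            credit[eru] += w
        chosen = max(credit, key=credit.get)
        credit[chosen] -= total
        assignment.append(chosen)
    return assignment
```

Over any window the per-ERU packet counts track the configured ratios to within one packet, while interleaving the paths rather than sending long bursts down one ERU.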
Further, the second edge computing node B may determine a total downlink bandwidth of the second edge computing node relative to the plurality of edge transmission nodes according to information about the data packets it received from the plurality of edge transmission nodes within the predetermined time period (S304).
Specifically, referring to fig. 2, the plurality of edge transmission nodes forward the data packets received from the first edge computing node A to the second edge computing node B. The second edge computing node B then computes its total downlink bandwidth from information about the received data packets. In this way, the second edge computing node B determines its total downlink bandwidth under the multi-path transmission protocol from the information about the data packets received from the plurality of edge transmission nodes.
Further, the first edge computing node A may determine, according to the total uplink bandwidth and the total downlink bandwidth, a transmission bandwidth for the first edge computing node to transmit data to the second edge computing node (S306).
Specifically, after the second edge computing node B determines its total downlink bandwidth, it may feed the result back to the first edge computing node A. The first edge computing node A then determines, from its own computed total uplink bandwidth and the received total downlink bandwidth of the second edge computing node B, the transmission bandwidth for transmitting data to the second edge computing node B (for example, the smaller of the two values).
Optionally, the operation of determining the guide bandwidth according to the transmission bandwidth of the data transmitted by the first edge computing node to the second edge computing node includes: determining the transmission delay of the data transmitted from the first edge computing node to the second edge computing node; and determining the guide bandwidth according to the transmission delay and the transmission bandwidth.
Specifically, the transmission bandwidth is affected by the transmission delay, so the transmission bandwidth is adjusted according to the transmission delay from the first edge computing node A to the second edge computing node B; the adjusted bandwidth then facilitates flow control in the subsequent data transmission process.
The transmission bandwidth is adjusted by the transmission delay RTT to determine the guide bandwidth. For example: when RTT < 200, guide bandwidth = transmission bandwidth; when RTT = 500, guide bandwidth = transmission bandwidth × 90%; when RTT = 1000, guide bandwidth = transmission bandwidth × 80%; when RTT = 2000, guide bandwidth = transmission bandwidth × 60%. This is merely an example and does not exclude other adjustment schemes.
Therefore, using the guide bandwidth adjusted by the transmission delay RTT provides guidance for multi-path data transmission and achieves the technical effect of accurately controlling the transmission traffic of each edge transmission node in multi-path transmission.
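The RTT-based derating above can be sketched as a step function. The breakpoints mirror the example values in the text (with RTT assumed to be in milliseconds); the behaviour between and beyond the listed sample points is an illustrative assumption:

```python
def guide_bandwidth(transmission_bw_bps, rtt_ms):
    """Derate the measured transmission bandwidth by RTT.

    Breakpoints follow the example in the text (RTT < 200: 100%, 500: 90%,
    1000: 80%, 2000: 60%); the step behaviour between and beyond these
    sample points, and the floor for very large RTTs, are assumptions.
    """
    factor = 0.6  # assumed floor for RTTs beyond the last breakpoint
    for threshold, f in [(200, 1.0), (500, 0.9), (1000, 0.8), (2000, 0.6)]:
        if rtt_ms <= threshold:
            factor = f
            break
    return transmission_bw_bps * factor
```

For instance, a 1 Mbps transmission bandwidth with an RTT of 500 ms would yield a guide bandwidth of 0.9 Mbps under this sketch.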
Optionally, the operation of determining the transmission delay of the data transmitted from the first edge computing node to the second edge computing node includes: determining a plurality of first time delays from the first edge computing node to the plurality of edge transmission nodes according to first probe packets received by the plurality of edge transmission nodes from the first edge computing node and first time information recorded by the first edge computing node from the returned time information of the first probe packets; determining a plurality of second time delays from the second edge computing node to the plurality of edge transmission nodes according to second probe packets received by the plurality of edge transmission nodes from the second edge computing node and second time information recorded by the second edge computing node from the returned time information of the second probe packets; and determining, according to the plurality of first time delays and the plurality of second time delays, the transmission delay of the data transmitted from the first edge computing node to the second edge computing node.
Specifically, referring to fig. 2, the first edge computing node A may send first probe packets to the plurality of edge transmission nodes, and the plurality of edge transmission nodes each return time information about the received first probe packets to the first edge computing node A; this time information may include, for example, the timestamps of all nodes along the transmission path, i.e., the first time information. The first edge computing node A records the first time information from the returned probe-packet time information and then determines the plurality of first time delays for sending data to the plurality of edge transmission nodes. By this scheme, the first time delays of multi-path data transmission between the first edge computing node A and the edge transmission nodes can be determined.
As shown in fig. 2, the first probe packet may be a data packet transmitted from the first edge computing node A to another edge computing node, for example the second edge computing node B. After receiving the first probe packet, the edge transmission node forwards it to the edge computing node it is destined for, such as the second edge computing node B.
For example, referring to fig. 2, ECU A sends a first probe packet to ERU R1 and ERU R2; after receiving it, ERU R1 and ERU R2 return the time information of the first probe packet to ECU A. The first edge computing node A can then calculate, from the recorded first time information, the first time delays of data transmission between ECU A and the plurality of ERUs.
In addition, when the first edge computing node A and the second edge computing node B communicate in real time, the time information of the first probe packet may be carried back to the first edge computing node A inside the data packets sent by the second edge computing node B (when multiple edge computing nodes transmit data, such packets may reach the first edge computing node A from any direction), so the edge transmission nodes do not need to transmit the time information of the first probe packet separately. Furthermore, this is not limited to data transmission between the first edge computing node A and the second edge computing node B; it may also apply to data transmission among a plurality of edge computing nodes.
Further, the second edge computing node B may determine a plurality of second time delays from the second edge computing node to the plurality of edge transmission nodes according to the second probe packets received by the plurality of edge transmission nodes from the second edge computing node and the second time information recorded by the second edge computing node from the returned time information of the second probe packets (S304).
Specifically, referring to fig. 2, the second edge computing node B may send second probe packets to the plurality of edge transmission nodes, and the plurality of edge transmission nodes each return the time information of the second probe packets to the second edge computing node B. The second probe packet carries the timestamps of all nodes along the transmission path, i.e., the second time information. The second edge computing node B may then determine the plurality of second time delays from itself to the plurality of edge transmission nodes according to the second time information carried in the returned second probe packets. By this scheme, the second time delays of multi-path data transmission between the second edge computing node B and the edge transmission nodes can be determined.
For example, referring to fig. 2, ECU B sends second probe packets to ERU R1 and ERU R2; after receiving them, ERU R1 and ERU R2 return the time information of the second probe packets to ECU B. The second edge computing node B can then compute, from the recorded second time information, the second time delays of data transmission between ECU B and the plurality of ERUs.
Further, the first edge computing node A may determine the transmission delay of the data transmitted from the first edge computing node to the second edge computing node according to the plurality of first time delays and the plurality of second time delays (S306).
Specifically, the first edge computing node A may determine the transmission delay of data transmission between itself and the second edge computing node B from the determined plurality of first time delays and second time delays. For example, referring to fig. 2, the first time delays include those from the first edge computing node A to ERU R1 and ERU R2, and the second time delays include those from the second edge computing node B to ERU R1 and ERU R2. The transmission delay of data transmitted between the first edge computing node A and the second edge computing node B is then determined from the plurality of first time delays and the corresponding plurality of second time delays. The first edge computing node A is not limited to transmitting data to the second edge computing node B through ERU R1 and ERU R2; it may also transmit data through other paths, which is not limited herein.
Optionally, the data indicators include the transmitted data type, the channel packet loss rate, and the channel delay, and the operation of determining the channel bandwidths allocated to the plurality of edge transmission nodes according to the guide bandwidth and the data indicators of the edge transmission nodes includes: determining the channel packet loss rate corresponding to each of the plurality of edge transmission nodes; determining the channel delay corresponding to each of the plurality of edge transmission nodes; and determining the channel bandwidths according to the transmitted data types, the channel packet loss rates, and the channel delays corresponding to the respective edge transmission nodes.
Specifically, the first edge computing node A may first determine the packet loss rate and channel delay of each edge transmission node. During data transmission, the data may be divided into several sending queues according to the content being sent (for example: audio, video, general data, and signaling), and each queue is sent through the corresponding edge transmission node. The first edge computing node A may then allocate bandwidth to each edge transmission node according to its data indicators. For example, edge transmission nodes carrying audio and video are allocated relatively large channel bandwidths; likewise, edge transmission nodes with small channel delays are allocated large channel bandwidths while those with large channel delays are allocated small ones.
Therefore, by allocating different channel bandwidths to edge transmission nodes with different data indicators in this way, the transmission traffic of every edge transmission node can be guaranteed.
In addition, as described above, the channel delay may be determined as follows: for example, if the first time delay from the first edge computing node A to ERU R1 is RTT1(R1) and the second time delay from the second edge computing node B to ERU R1 is RTT2(R1), the channel delay of ERU R1 is RTT1(R1) + RTT2(R1). The channel delays of the other edge transmission nodes are obtained analogously to ERU R1 and are not described again here.
In addition, the channel packet loss rate is determined as follows: the data packets transmitted by the first edge computing node A to the plurality of edge transmission nodes are sent in pairs; that is, a packet destined for ERU Rx waits for the next packet destined for ERU Rx, and the two are then transmitted together. Each transmitted pair is numbered, the two packets of a pair sharing one number, e.g. t_seq; the next pair to be transmitted is then numbered t_seq + 1. The channel packet loss rate of each edge transmission node is calculated as: Loss(i) = 1 − (number of received pairs) / (max t_seq − min t_seq + 1).
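The pair-based loss calculation can be sketched as follows; the function name and the representation of received pairs as a list of t_seq numbers are illustrative assumptions:

```python
def channel_loss_rate(received_pair_seqs):
    """Loss(i) = 1 - (received pairs) / (max t_seq - min t_seq + 1).

    `received_pair_seqs` holds the t_seq numbers of pairs for which both
    packets arrived within the calculation window. Returning 1.0 for an
    empty window is an assumption for the degenerate case.
    """
    if not received_pair_seqs:
        return 1.0
    pairs = len(set(received_pair_seqs))
    expected = max(received_pair_seqs) - min(received_pair_seqs) + 1
    return 1.0 - pairs / expected
```

For example, receiving pairs numbered 0, 2, 3, 4 out of the range 0..4 gives a 20% channel loss rate.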
In addition, the guide bandwidth is distributed to each edge transmission node (virtual channel) according to the differing timeliness, packet-loss tolerance, and delay requirements of the data it transmits, with the sum of the allocations not exceeding the guide bandwidth. This allocation refers simultaneously to the RTT, packet loss rate, and network bandwidth attributes of each actual edge transmission node, and to the actually used traffic. The edge transmission nodes are prioritized and partitioned, i.e., the transmitted data is classified. Although the different classes physically use bandwidth on all paths, logically the bandwidth of different classes is not shared. This provides different guarantees for data with different requirements.
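A minimal sketch of such an allocation might look like the following. The concrete priority values and derating weights are illustrative assumptions: the text states only the qualitative rules (audio/video favoured, high delay and loss penalized, allocations summing to at most the guide bandwidth):

```python
def allocate_channel_bandwidth(guide_bw, channels):
    """Split the guide bandwidth across edge transmission nodes.

    `channels` maps node id -> {"kind": ..., "rtt_ms": ..., "loss": ...}.
    The per-kind priorities and the delay/loss derating below are assumed
    for illustration; only the qualitative behaviour comes from the text.
    """
    kind_priority = {"audio": 3.0, "video": 3.0, "data": 1.5, "signaling": 1.0}
    weights = {}
    for node, m in channels.items():
        w = kind_priority.get(m["kind"], 1.0)
        # penalize lossy and high-delay channels
        w *= (1.0 - m["loss"]) / (1.0 + m["rtt_ms"] / 1000.0)
        weights[node] = w
    total = sum(weights.values())
    return {node: guide_bw * w / total for node, w in weights.items()}
```

Under this sketch the allocations always sum to exactly the guide bandwidth; reserving headroom instead would also satisfy the "does not exceed" constraint.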
Optionally, the operation of determining the number of tokens for the plurality of edge transmission nodes according to the channel bandwidth includes determining the number of tokens by the formula N = Bdi / 8 / 1000, where N is the number of tokens to be released per millisecond, Bdi is the channel bandwidth allocated to the edge transmission node in bps (bits per second), dividing by 8 converts bits to bytes, and dividing by 1000 converts the per-second amount to a per-millisecond amount.
Specifically, taking one edge transmission node as an example: suppose the channel bandwidth allocated to it this time is Bdi, in bps (bits per second). Dividing the channel bandwidth by 8 gives the number of bytes allocated to the edge transmission node per second, and dividing again by 1000 gives the number of bytes allocated per millisecond. In this way, the number of tokens allocated to each edge transmission node is determined, achieving the technical effect of controlling the transmission traffic of the plurality of edge transmission nodes.
Furthermore, the flow control algorithm may cycle once per predetermined period (e.g., 300 ms). On re-cycling, the unused tokens placed in the last two cycles are retained and tokens placed earlier are discarded; that is, once placed, a token is valid for at most 3 cycles. The edge transmission node consumes the earliest tokens first, one per byte sent. When the tokens are exhausted, token pre-consumption may be requested once per second, the pre-consumed amount being 20% of the current cycle's channel bandwidth (not limited to 20%; it can be set according to the actual situation), and these are added to the channel as temporary tokens. When both the tokens and the allowed pre-consumption are used up, the edge transmission node cannot continue sending data (though it may still send NACK response packets) until new tokens are placed. In actual transmission a UDP packet is sent as a whole, so if there are not enough tokens, the whole UDP packet is transmitted only once enough tokens are available.
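The token lifecycle described above (per-millisecond placement at Bdi/8/1000 bytes, 3-cycle validity, oldest-first consumption, and 20% pre-consumption) can be sketched as a small token-bucket class. Class and method names are illustrative, not from the patent, and tokens are placed a full cycle at a time rather than every millisecond for brevity:

```python
from collections import deque

class ChannelTokenBucket:
    """Token-bucket sketch for one edge transmission node (virtual channel)."""
    MAX_AGE = 3  # a token batch is valid for at most 3 cycles

    def __init__(self, cycle_ms=300):
        self.cycle_ms = cycle_ms
        self.batches = deque()   # [age_in_cycles, remaining_byte_tokens]
        self.temp = 0.0          # borrowed (pre-consumed) tokens

    def new_cycle(self, channel_bw_bps):
        # Age batches, dropping those that have lived 3 cycles already.
        self.batches = deque([age + 1, t] for age, t in self.batches
                             if age + 1 <= self.MAX_AGE and t > 0)
        per_ms = channel_bw_bps / 8 / 1000          # N = Bdi / 8 / 1000
        self.batches.append([1, per_ms * self.cycle_ms])
        self.cycle_bytes = channel_bw_bps / 8 * (self.cycle_ms / 1000)

    def pre_consume(self):
        # Once per second the channel may borrow 20% of this cycle's bytes.
        self.temp += 0.2 * self.cycle_bytes

    def try_send(self, nbytes):
        avail = sum(t for _, t in self.batches) + self.temp
        if nbytes > avail:
            return False          # a UDP packet is sent whole or not at all
        while nbytes > 0 and self.batches:
            take = min(nbytes, self.batches[0][1])   # oldest tokens first
            self.batches[0][1] -= take
            nbytes -= take
            if self.batches[0][1] <= 0:
                self.batches.popleft()
        self.temp -= nbytes       # any remainder comes from borrowed tokens
        return True
```

At 8000 bps and a 300 ms cycle this places 1 byte-token per millisecond (300 per cycle), matching the N = Bdi/8/1000 formula.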
Optionally, the method further includes: when the data transmitted last time exceeded the number of bytes corresponding to the previously calculated guide bandwidth, adjusting the guide bandwidth by the formula B = B − x0 × 1.1, where B is the guide bandwidth and x0 is the number of bytes over-sent last time; and/or, when the data transmitted this time exceeds the number of bytes corresponding to the guide bandwidth, the next guide bandwidth is: B_next = B_next − x1 × 1.1 + y × 0.8, where B_next is the next guide bandwidth, x1 is the number of bytes over-sent this time, and y (y < 0) is the negative value reached by the guide bandwidth after the reduction.
Specifically, the flow control algorithm used in the present application may cycle once every 300 ms (not limited to 300 ms; the algorithm cycle time may be set according to actual conditions).
If the total amount actually transmitted last time exceeded the guide bandwidth, negative feedback reduces this cycle's guide bandwidth. For example, if x0 bytes were over-sent last time, the current guide bandwidth is reduced by x0 × 1.1 bytes; that is, the current guide bandwidth becomes B = B − x0 × 1.1.
If the guide bandwidth is negative after the reduction, its value is recorded as y (y < 0); if x1 bytes are over-sent this time, the next guide bandwidth is B_next = B_next − x1 × 1.1 + y × 0.8 (note that y < 0).
In this application the guide bandwidth is subject to over-transmission negative feedback, which may affect up to the two following guide bandwidths (typically only one). When no data is over-sent, the guide bandwidth needs no negative feedback from the previous cycle. Transmission traffic is thus controlled in this manner, further ensuring the quality of data transmission.
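The negative-feedback rule can be sketched as follows; treating the current guide bandwidth as zero when the reduction drives it negative is an interpretive assumption, since the text only says the negative remainder y is carried into the next cycle:

```python
def adjust_guide_bandwidth(guide_bw, oversent_bytes):
    """B = B - x0 * 1.1, with the negative remainder carried forward.

    Returns (adjusted_bandwidth, carry): carry is y < 0 when the reduction
    overshoots, 0 otherwise. Clamping the current value to 0 in that case
    is an assumption.
    """
    b = guide_bw - oversent_bytes * 1.1
    if b < 0:
        return 0.0, b      # y < 0, dampens the next cycle as well
    return b, 0.0

def next_guide_bandwidth(b_next, oversent_bytes, carry):
    """B_next = B_next - x1 * 1.1 + y * 0.8 (carry = y <= 0)."""
    return b_next - oversent_bytes * 1.1 + carry * 0.8
```

For example, over-sending 100 bytes against a 1000-byte budget reduces the current guide bandwidth to 890 bytes with no carry.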
Therefore, with this transmission flow control method, the flow control indicators can be updated quickly while allowing limited bursts. Through the design of temporary pre-consumed tokens, the limit can be exceeded to a certain extent when actually needed, reducing communication delay, while the negative feedback mechanism ensures that excessive over-sending cannot occur. Because tokens expire after 3 cycles, token issuance stays essentially synchronized in real time with the current network state.
Further, referring to fig. 1, according to a second aspect of the present embodiment, there is provided a storage medium. The storage medium includes a stored program, wherein any of the methods above is performed by a processor when the program runs.
Therefore, according to this embodiment, multi-end data transmission is realized through multiple paths, ensuring low delay, high stability, and high quality of data transmission. The transmission bandwidth of data transmitted from the first edge computing node A to the second edge computing node B is determined as the guide bandwidth of the plurality of edge transmission nodes; the first edge computing node A then allocates channel bandwidth to the plurality of edge transmission nodes according to each node's data indicators, and finally allocates tokens to the plurality of edge transmission nodes according to the allocated channel bandwidths. An appropriate number of tokens is thus allocated to each edge transmission node, achieving the technical effect of controlling the transmission traffic of the edge transmission nodes through the token count. This solves the prior-art problems of how to build an application-layer protocol over UDP that ensures low delay, high stability, and high quality of data transmission, and how to calculate the transmission traffic of data transmitted under that protocol.
1. The algorithm re-cycles every 300 milliseconds, corresponding to a 300 ms sliding window.
2. The transmission queue is divided into a plurality of virtual transmission channels (queues) according to the transmitted content, for example: audio, video, general data, signaling, etc.
3. The total guide bandwidth. The calculation starts from the computed effective bandwidth (from which the influence of the packet loss rate has already been deducted) and then applies a virtual deduction according to the corresponding RTT (the exact deduction rule is not disclosed here, but the idea is that the higher the RTT, the larger the deducted percentage). This yields the guide bandwidth B.
4. If the total amount actually transmitted last time exceeded the guide bandwidth, negative feedback reduces this cycle's guide bandwidth: if x0 bytes were over-sent last time, the current guide bandwidth is reduced by x0 × 1.1 bytes. If the guide bandwidth is negative after the reduction, record it as y (y < 0); if x1 bytes are over-sent this time, the next guide bandwidth is B_final = B − x1 × 1.1 + y × 0.8 (note that y < 0).
5. Allocate the guide bandwidth to each virtual channel according to the channels' differing requirements on timeliness, packet-loss tolerance, and delay, with the sum not exceeding the guide bandwidth. This allocation refers simultaneously to the RTT, packet loss rate, and network bandwidth attributes of each actual link, and to the actually used traffic. If the guide bandwidth is negative, some bandwidth is still allocated, for example to the signaling channel.
6. For a certain channel A, let the channel bandwidth allocated this time be Bd(A)i. This means that in this cycle, Bd(A)i / 8 / 1000 tokens are placed for this channel every millisecond (bandwidth is in bps, i.e., bits per second, so dividing by 8 yields bytes and dividing by 1000 yields the per-millisecond amount). On re-cycling, the unused tokens placed in the last two cycles are retained and earlier tokens are discarded; that is, once placed, a token is valid for at most 3 cycles. The channel consumes the earliest tokens first, one per byte sent. When the tokens are exhausted, token pre-consumption may be requested once per second, the pre-consumed amount being 20% of the current cycle's Bd(A)i, added to the channel as temporary tokens. When both the tokens and the allowed pre-consumption are used up, the channel cannot continue sending data (though it may still send NACK response packets) until new tokens are placed. In actual transmission a UDP packet is sent as a whole, so if there are not enough tokens, the whole UDP packet is transmitted only once enough tokens are available.
Therefore, the flow control method provided by this application has the following beneficial effects: 1. A guide bandwidth adjusted by the packet loss rate and the RTT value is used, taking the traffic distribution weights into account. The bandwidth so obtained has guiding significance for multi-path transmission (hence the name guide bandwidth). Without traffic weighting, the obtained bandwidth value would clearly deviate far from the actual value (for example, a channel may have high bandwidth but large delay and therefore carry little traffic; adding channel bandwidths directly without weighting would give a problematic result). 2. Prioritized and partitioned virtual channels, i.e., the transmitted data is classified. Although the different classes physically use bandwidth on all paths, logically the bandwidth of different classes is not shared, providing different guarantees for data with different requirements. 3. Fast indicator updates with limited bursts. Through the design of temporary pre-consumed tokens, the limit can be exceeded to a certain extent when actually needed, reducing communication delay, while the negative feedback mechanism ensures that excessive over-sending cannot occur. Because tokens expire after 3 cycles, token issuance stays essentially synchronized in real time with the current network state.
Determining the transmission bandwidth: 1. A piece of application-layer data to be transmitted (audio, video, or custom data) is channel-coded according to the steps of NASMT (a transmission technology) to generate a plurality of data packets. The allocation ratio for each ERU is then determined according to the procedure specified by NASMT, after which the ERU used by each packet is determined.
2. The data packets sent to an ERU are sent in pairs; that is, the first packet destined for ERU Rx waits until the next packet destined for ERU Rx, and the two are transmitted together. Packets sent together share the same t_seq sequence number and the same transmission timestamp; the next pair uses t_seq + 1. Similarly, packets sent by an ERU to ECU B are also transmitted in pairs, each pair using the same timestamp, with the ERU's id attached to the timestamp to distinguish which ERU sent it.
3. ECU B and the ERUs collect the timestamp of each received data packet and the network transmission size of each packet (including the UDP header and IP header lengths, but excluding the headers of layers below the network layer). The transmission bandwidth is calculated every 300 ms, or whenever enough data has accumulated.
4. ECU B calculates its own downlink bandwidth (the bandwidth at which it receives data from the ERUs):
4.1. Pair the packets by t_seq and ERU id. If the pairing success rate is too low, the data is considered unreliable and the value of the last bandwidth estimate is used;
4.2. For each ERU id, obtain the maximum t_seq and minimum t_seq of this calculation period;
4.3. For each pair of packets, calculate the transmitted Size(Pi) (the two packet sizes added together) and the receiving time interval Time(Pi) (the difference between the receive timestamps);
4.4. Using the average receiving interval, discard pairs whose interval is abnormally long. For each ERU id, if the maximum or minimum t_seq belongs to discarded data, decrease the corresponding maximum t_seq by 1 or increase the minimum t_seq by 1;
4.5. The preliminary bandwidth is estimated as Band_Raw = Σi Size(Pi) / Σi Time(Pi);
4.6. The total downlink packet loss ratio of the ERUs is P(Loss) = 1 − Σi pairs(i) / Σi (max t_seq(i) − min t_seq(i) + 1); note that this is not the average of the Loss(i);
4.7. The effective downlink transmission bandwidth is Band_Eff = Band_Raw × (1 − P(Loss));
4.8. A fast filtering (smoothing) step is applied; the simplest form is, for example: when Band_Eff > the last Band_Down, Band_Down(B) = (4 × last Band_Down + Band_Eff) / 5; otherwise Band_Down(B) = (2 × last Band_Down + Band_Eff) / 3.
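Steps 4.3–4.8 can be sketched as follows, assuming sizes are in bytes (hence the factor of 8 to obtain bits per second) and that the joint loss ratio P(Loss) is computed separately and passed in:

```python
def downlink_bandwidth(pairs, p_loss, last_band_down):
    """Downlink estimate for one calculation period (a sketch).

    `pairs` is a list of (size_bytes, interval_seconds) for each surviving
    packet pair; `p_loss` is the joint downlink loss ratio from step 4.6.
    The bytes-to-bits conversion is an assumption about the units.
    """
    band_raw = 8 * sum(s for s, _ in pairs) / sum(t for _, t in pairs)
    band_eff = band_raw * (1 - p_loss)            # step 4.7
    if band_eff > last_band_down:                 # step 4.8: smoothing
        return (4 * last_band_down + band_eff) / 5
    return (2 * last_band_down + band_eff) / 3
```

The asymmetric smoothing weights make the estimate rise slowly (1/5 of the new sample) but fall faster (1/3), a conservative choice when bandwidth degrades.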
5. Each ERU calculates, for the uplink of ECU A, Σi Size(Pi) (recorded as Sum_Size) and Σi Time(Pi) (recorded as Sum_Time), together with the corresponding packet loss rate, and feeds them back.
6. ECU A receives the feedback of all ERUs within a period of time and calculates its uplink bandwidth: compute Band_Raw = Σ Sum_Size / Σ Sum_Time over the ERUs; the effective uplink bandwidth is Band_Eff = Band_Raw × (1 − P(Loss)); one filtering pass then yields the final Band_Up(A).
7. Band(A->B) = min(Band_Up(A), Band_Down(B)).
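Steps 6–7 can be sketched in the same way; aggregating the per-ERU feedback as a ratio of sums, and reusing the same smoothing filter as the downlink, are assumptions consistent with the downlink computation:

```python
def transmission_bandwidth(eru_reports, p_loss_up, last_band_up, band_down_b):
    """Aggregate ERU feedback into Band_Up(A), then Band(A->B) = min(up, down).

    `eru_reports` holds one {"sum_size": bytes, "sum_time": seconds} dict per
    ERU; field names are illustrative. `band_down_b` is Band_Down(B) as fed
    back by ECU B.
    """
    band_raw = (8 * sum(r["sum_size"] for r in eru_reports)
                / sum(r["sum_time"] for r in eru_reports))
    band_eff = band_raw * (1 - p_loss_up)
    if band_eff > last_band_up:                  # one smoothing pass
        band_up = (4 * last_band_up + band_eff) / 5
    else:
        band_up = (2 * last_band_up + band_eff) / 3
    return min(band_up, band_down_b)             # Band(A->B)
```

Taking the minimum of the sender's uplink and the receiver's downlink makes the bottleneck side of the multi-path route govern the transmission bandwidth.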
The determination process of the transmission delay comprises the following steps:
1. NASMT (a transmission technology) requires the ECU to attach its local timestamp, denoted t1, to all data it sends.
2. At fixed intervals, when an ERU forwards data to ECU B, it attaches the last timestamp received from ECU A together with the time difference (denoted d1) between receiving that timestamp and forwarding the data. Note that the data forwarded in this step is completely independent of the data whose timestamp was received in step 1.
3. When ECU A receives t1 and d1 returned via ERU R1, it records the receive timestamp t2. The RTT between ECU A and ERU R1 can then be calculated as RTT(A−R1) = t2 − t1 − d1.
4. Similarly, ECU A can calculate the RTT between itself and all ERUs. These values are synchronized to the other ECUs, and ECU A likewise receives the RTTs calculated by other ECUs.
5. ECU B also calculates the RTT between itself and all ERUs as in step 4. The RTT from ECU A to ECU B through ERU Rx may then be defined as RTTx = RTT(A->x) + RTT(x->B).
6. Suppose that when A currently sends data to B, the traffic ratios allocated to ERU R1, R2, …, Rn are W1, W2, …, Wn, and the three largest Wi are denoted Wx, Wy, Wz. The RTT of signaling is defined as RTT1(A->B) = (Wx × RTTx + Wy × RTTy + Wz × RTTz) / (Wx + Wy + Wz); the RTT of data transmission is defined analogously as the traffic-weighted average over the ERUs carrying data. Since RTT is generally used for signaling processing, RTT in the general sense refers to RTT1.
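The RTT computations above can be sketched as follows; the data-transmission RTT over all ERUs is an assumed traffic-weighted average, since its exact formula is not given:

```python
def path_rtt(t1_ms, d1_ms, t2_ms):
    """Step 3: RTT(A-R1) = t2 - t1 - d1."""
    return t2_ms - t1_ms - d1_ms

def signaling_rtt(weights, rtts):
    """Step 6: weighted over the three largest traffic weights,
    RTT1(A->B) = (Wx*RTTx + Wy*RTTy + Wz*RTTz) / (Wx + Wy + Wz)."""
    top3 = sorted(zip(weights, rtts), reverse=True)[:3]
    return sum(w * r for w, r in top3) / sum(w for w, _ in top3)

def data_rtt(weights, rtts):
    """Assumed form of the data-transmission RTT: the traffic-weighted
    average over ALL ERUs carrying data (not stated explicitly in the text)."""
    return sum(w * r for w, r in zip(weights, rtts)) / sum(weights)
```

Subtracting d1 removes the ERU's internal holding time, so clocks at the ERU need not be synchronized with the ECU for the RTT to be valid.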
In addition, the traffic distribution algorithm of the edge transmission nodes is as follows:
and comprehensively evaluating the priority rating of relay from 0 to 100 according to the grades of the ERU by the user and the ERU by other people obtained in the flow control information updating. 100 is the best and 0 is the worst. For ordinary data traffic, in forwarding according to a group of two, one ERU is randomly selected as a forwarding purpose by taking the score at the time as a weight. That is, for ERUi, let its score be Pi, then the probability of picking ERU is:
Figure GDA0003973840890000172
For audio traffic and video key frames, when forwarding in groups of two, one of the 3 highest-scored ERUs is randomly selected as the forwarding destination, with the score as the weight. After the target ERU is selected and the packet sent, it is evaluated whether the packet size meets the current supplementary-send condition; this condition may vary with the currently available bandwidth. If the condition is met, an ERU to which the current ECU has forwarded no traffic within the last second is sought; if one exists, the packet is also forwarded to that ERU. This ensures that every ERU is correctly evaluated on relevant data. NACK packets are forwarded to all ERUs. Each ERU buffers, per user and according to its own memory usage, roughly the latest 200 K of packets that may need NACK retransmission. If an ERU finds it holds the packet specified by a NACK, it sends the packet directly to the NACK requester; otherwise, it forwards the NACK further.
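The weighted-random relay choice can be sketched with the standard library; restricting audio traffic and video key frames to the 3 highest-scored ERUs maps to the `top_k` parameter (the function name and signature are illustrative):

```python
import random

def pick_eru(scores, top_k=None, rng=random):
    """Weighted-random relay selection: P(ERUi) = Pi / sum(Pj).

    `scores` maps ERU id -> priority score (0-100). Pass top_k=3 for audio
    traffic and video key frames, which the text restricts to the three
    highest-scored ERUs; leave top_k=None for ordinary data traffic.
    """
    ids = list(scores)
    if top_k is not None:
        ids = sorted(ids, key=lambda i: scores[i], reverse=True)[:top_k]
    return rng.choices(ids, weights=[scores[i] for i in ids], k=1)[0]
```

`random.choices` already normalizes the weights, so no explicit division by Σj Pj is needed.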
In addition, the channel coding algorithm used for data transmission in the present application is as follows:
First, suitable parameters are calculated, mainly the encoded payload size and the repair count. Encoded payload size: at most 1024 bytes, and the original frame is split into at least two parts. Repair count: determined from the average packet loss rate of the link with a certain upward float and rounded up (for example, at a 20% packet loss rate with 100 source payloads after encoding, the repair count is 20, plus 2–3 more after the upward float). The upper-layer transmission frame is encoded with a fountain code and buffered in a queue. When a NACK arrives, the corresponding frame and its coding result are taken from the queue, and either selected parts of the coding result are sent according to the NACK information, or new repair packets are generated and sent.
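The parameter choice can be sketched as follows; the 10% upward float is an assumption, since the exact float factor is not disclosed in the text:

```python
import math

def coding_parameters(frame_bytes, avg_loss, headroom=0.1):
    """Choose fountain-code parameters for one frame (a sketch).

    Payload size: at most 1024 bytes, with the frame split into at least
    two parts. Repair count: source count x loss rate, floated upward by
    `headroom` (assumed value) and rounded up.
    """
    payload = min(1024, math.ceil(frame_bytes / 2))
    n_source = math.ceil(frame_bytes / payload)
    n_repair = math.ceil(n_source * avg_loss * (1 + headroom))
    return payload, n_source, n_repair
```

With a 100 KiB frame and 20% loss this yields 100 source payloads of 1024 bytes and 22–23 repair payloads, matching the example in the text.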
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention or portions thereof contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
Fig. 4 shows a transmission flow control apparatus 400 according to the present embodiment, the apparatus 400 corresponding to the method according to the first aspect of embodiment 1. Referring to fig. 4, the apparatus 400 includes: a first determining module 410, configured to determine a guidance bandwidth according to a transmission bandwidth of data transmitted by a first edge computing node to a second edge computing node, where the guidance bandwidth indicates the total bandwidth of the bandwidths allocated to a plurality of edge transmission nodes; a second determining module 420, configured to determine, according to the guidance bandwidth and the data metrics of the plurality of edge transmission nodes, the channel bandwidths allocated to the plurality of edge transmission nodes; and a third determining module 430, configured to determine the number of tokens of the plurality of edge transmission nodes according to the channel bandwidths, where the number of tokens is used to control the transmission traffic of the plurality of edge transmission nodes.
Optionally, before the operation of determining the guidance bandwidth according to the transmission bandwidth of the data transmitted from the first edge computing node to the second edge computing node, the apparatus 400 further includes: a fourth determining module, configured to determine, according to information about the data packets received by the plurality of edge transmission nodes from the first edge computing node within a predetermined time period, the total uplink bandwidth of the first edge computing node relative to the plurality of edge transmission nodes; a fifth determining module, configured to determine, according to information about the data packets received by the second edge computing node from the plurality of edge transmission nodes within the predetermined time period, the total downlink bandwidth of the second edge computing node relative to the plurality of edge transmission nodes; and a sixth determining module, configured to determine, according to the total uplink bandwidth and the total downlink bandwidth, the transmission bandwidth for the first edge computing node to transmit data to the second edge computing node.
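One plausible reading of the fourth to sixth determining modules is sketched below; the packet-record layout and the choice of min() to combine the total uplink and downlink bandwidths are assumptions, since the text does not state the combination rule:

```python
def total_bandwidth(packet_records, window_s):
    """Sum the bytes of the packets recorded in the window and convert
    to bits per second."""
    return sum(size for _, size in packet_records) * 8 / window_s

def transmission_bandwidth(uplink_records, downlink_records, window_s):
    # The end-to-end transmission bandwidth cannot exceed either the sender's
    # total uplink or the receiver's total downlink; combining them with
    # min() is an assumption, not stated in the text.
    up = total_bandwidth(uplink_records, window_s)
    down = total_bandwidth(downlink_records, window_s)
    return min(up, down)

# (timestamp, size_bytes) records over a 1-second window
up_records = [(0.1, 125_000), (0.5, 125_000)]  # 2 Mbit/s total uplink
down_records = [(0.2, 125_000)]                # 1 Mbit/s total downlink
bw = transmission_bandwidth(up_records, down_records, window_s=1.0)
```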
Optionally, the first determining module 410 includes: a first determining submodule, configured to determine the transmission delay of the data transmitted from the first edge computing node to the second edge computing node; and a second determining submodule, configured to determine the guidance bandwidth according to the transmission delay and the transmission bandwidth.
Optionally, the first determining submodule includes: a first determining unit, configured to determine a plurality of first delays from the first edge computing node to the plurality of edge transmission nodes according to first time information recorded by the first edge computing node after the plurality of edge transmission nodes receive first probe packets from the first edge computing node and return time information related to the first probe packets to the first edge computing node; a second determining unit, configured to determine a plurality of second delays from the second edge computing node to the plurality of edge transmission nodes according to second time information recorded by the second edge computing node after the plurality of edge transmission nodes receive second probe packets from the second edge computing node and return time information related to the second probe packets to the second edge computing node; and a third determining unit, configured to determine, according to the plurality of first delays and the plurality of second delays, the transmission delay for the first edge computing node to transmit data to the second edge computing node.
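The delay determination can be sketched as follows; halving the probe round trip (a symmetric-path assumption) and taking the minimum per-path sum as the end-to-end transmission delay are both assumptions, since the text does not state how the first and second delays are combined:

```python
def one_way_delays(send_times, return_times):
    """Estimate the per-node one-way delay as half the probe round trip
    (assumes symmetric paths)."""
    return [(r - s) / 2 for s, r in zip(send_times, return_times)]

def transmission_delay(first_delays, second_delays):
    # Per edge transmission node i, the end-to-end delay is the first-hop
    # delay plus the second-hop delay; taking the best path with min() is
    # an assumption for illustration.
    return min(d1 + d2 for d1, d2 in zip(first_delays, second_delays))

# Probe round trips for three edge transmission nodes, in seconds.
d1 = one_way_delays([0.0, 0.0, 0.0], [0.040, 0.060, 0.100])  # 20/30/50 ms
d2 = one_way_delays([0.0, 0.0, 0.0], [0.030, 0.020, 0.080])  # 15/10/40 ms
delay = transmission_delay(d1, d2)  # smallest combined path delay, ~35 ms
```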
Optionally, the data metrics include a transmission data type, a channel packet loss rate, and a channel time delay, and the second determining module 420 is configured to: determine the channel packet loss rates corresponding to the plurality of edge transmission nodes respectively; determine the channel time delays corresponding to the plurality of edge transmission nodes respectively; and determine the channel bandwidths according to the transmission data types, the channel packet loss rates, and the channel time delays corresponding to the plurality of edge transmission nodes respectively.
Optionally, the third determining module 430 is configured to determine the number of tokens according to the following formula: N = Bdi/8/1000, where N is the number of tokens to be released per millisecond, Bdi is the channel bandwidth allocated to the edge transmission node in bits per second, 8 is the number of bits per byte, and 1000 is the conversion factor between seconds and milliseconds.
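The token formula can be written directly; reading Bdi as bits per second and one token as one byte released per millisecond follows naturally from the stated conversions but remains an interpretation:

```python
def tokens_per_ms(channel_bandwidth_bps):
    """N = Bdi / 8 / 1000: divide by 8 to convert bits to bytes, and by
    1000 to turn a per-second rate into a per-millisecond token release."""
    return channel_bandwidth_bps / 8 / 1000

# An 8 Mbit/s channel releases 1000 byte-tokens every millisecond.
n_tokens = tokens_per_ms(8_000_000)
```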
Optionally, the apparatus 400 is further configured to: when the data transmitted last time exceeds the number of bytes corresponding to the guidance bandwidth calculated last time, adjust the guidance bandwidth by the following formula: B = B - x0*1.1, where B is the guidance bandwidth and x0 is the number of bytes over-transmitted last time; and/or, when the data transmitted this time exceeds the number of bytes corresponding to the guidance bandwidth, set the next guidance bandwidth as: B_next = B_next - x1*1.1 + y*0.8, where B_next is the next guidance bandwidth, x1 is the number of bytes over-transmitted this time, and y is the value of the guidance bandwidth when it becomes negative after the reduction.
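The two adjustment formulas can be sketched as follows; the sample byte counts and the compensation value y are illustrative assumptions:

```python
def adjust_guidance_bandwidth(b, x0):
    """B = B - x0 * 1.1: penalize last round's over-transmission by 110%."""
    return b - x0 * 1.1

def next_guidance_bandwidth(b_next, x1, y):
    """B_next = B_next - x1 * 1.1 + y * 0.8, where y (per the claims) is the
    value of the guidance bandwidth when it goes negative after reduction."""
    return b_next - x1 * 1.1 + y * 0.8

b = adjust_guidance_bandwidth(10_000, 1_000)       # ~8900 bytes
bn = next_guidance_bandwidth(10_000, 1_000, 500)   # ~9300 bytes
```

Over-transmission is thus paid back with a 10% penalty, while the y*0.8 term softens the correction when the reduction alone would drive the budget negative.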
Therefore, with this embodiment, multi-port data transmission is realized over multiple paths, ensuring low delay, high stability, and high quality of data transmission. The guidance bandwidth of the plurality of edge transmission nodes is determined from the transmission bandwidth of the data transmitted from the first edge computing node to the second edge computing node. The first edge computing node then allocates channel bandwidth to the plurality of edge transmission nodes according to the data metrics of each edge transmission node, and finally allocates tokens to the plurality of edge transmission nodes according to the allocated channel bandwidths. In this way, an appropriate number of tokens is allocated to each edge transmission node, achieving the technical effect of controlling the transmission traffic of the edge transmission nodes through the number of tokens. This solves the prior-art technical problems of how to construct an application-layer protocol over UDP that ensures low delay, high stability, and high quality of data transmission, and how to calculate the transmission traffic under such a protocol.
Example 3
Fig. 5 shows a transmission flow control apparatus 500 according to the present embodiment, the apparatus 500 corresponding to the method according to the first aspect of embodiment 1. Referring to fig. 5, the apparatus 500 includes: a processor 510; and a memory 520 coupled to the processor 510 and configured to provide the processor 510 with instructions for processing the following steps: determining a guidance bandwidth according to a transmission bandwidth of data transmitted from a first edge computing node to a second edge computing node, where the guidance bandwidth indicates the total bandwidth of the bandwidths allocated to a plurality of edge transmission nodes; determining, according to the guidance bandwidth and the data metrics of the plurality of edge transmission nodes, the channel bandwidths allocated to the plurality of edge transmission nodes; and determining the number of tokens of the plurality of edge transmission nodes according to the channel bandwidths, where the number of tokens is used to control the transmission traffic of the plurality of edge transmission nodes.
Optionally, before the operation of determining the guidance bandwidth according to the transmission bandwidth of the data transmitted from the first edge computing node to the second edge computing node, the memory 520 is further configured to provide the processor 510 with instructions for processing the following steps: determining the total uplink bandwidth of the first edge computing node relative to the plurality of edge transmission nodes according to information about the data packets received by the plurality of edge transmission nodes from the first edge computing node within a predetermined time period; determining the total downlink bandwidth of the second edge computing node relative to the plurality of edge transmission nodes according to information about the data packets received by the second edge computing node from the plurality of edge transmission nodes within the predetermined time period; and determining the transmission bandwidth for the first edge computing node to transmit data to the second edge computing node according to the total uplink bandwidth and the total downlink bandwidth.
Optionally, the operation of determining the guidance bandwidth according to the transmission bandwidth of the data transmitted from the first edge computing node to the second edge computing node includes: determining the transmission delay of the data transmitted from the first edge computing node to the second edge computing node; and determining the guidance bandwidth according to the transmission delay and the transmission bandwidth.
Optionally, the operation of determining the transmission delay of the data transmitted from the first edge computing node to the second edge computing node includes: determining a plurality of first delays from the first edge computing node to the plurality of edge transmission nodes according to first time information recorded by the first edge computing node after the plurality of edge transmission nodes receive first probe packets from the first edge computing node and return time information related to the first probe packets to the first edge computing node; determining a plurality of second delays from the second edge computing node to the plurality of edge transmission nodes according to second time information recorded by the second edge computing node after the plurality of edge transmission nodes receive second probe packets from the second edge computing node and return time information related to the second probe packets to the second edge computing node; and determining, according to the plurality of first delays and the plurality of second delays, the transmission delay for the first edge computing node to transmit data to the second edge computing node.
Optionally, the data metrics include a transmission data type, a channel packet loss rate, and a channel time delay, and the operation of determining, according to the guidance bandwidth and the data metrics of the plurality of edge transmission nodes, the channel bandwidths allocated to the plurality of edge transmission nodes includes: determining the channel packet loss rates corresponding to the plurality of edge transmission nodes respectively; determining the channel time delays corresponding to the plurality of edge transmission nodes respectively; and determining the channel bandwidths according to the transmission data types, the channel packet loss rates, and the channel time delays corresponding to the plurality of edge transmission nodes respectively.
Optionally, the operation of determining the number of tokens of the plurality of edge transmission nodes according to the channel bandwidth includes: determining the number of tokens according to the following formula: N = Bdi/8/1000, where N is the number of tokens to be released per millisecond, Bdi is the channel bandwidth allocated to the edge transmission node in bits per second, 8 is the number of bits per byte, and 1000 is the conversion factor between seconds and milliseconds.
Optionally, the memory 520 is further configured to provide the processor 510 with instructions for processing the following steps: when the data transmitted last time exceeds the number of bytes corresponding to the guidance bandwidth calculated last time, adjusting the guidance bandwidth by the following formula: B = B - x0*1.1, where B is the guidance bandwidth and x0 is the number of bytes over-transmitted last time; and/or, when the data transmitted this time exceeds the number of bytes corresponding to the guidance bandwidth, setting the next guidance bandwidth as: B_next = B_next - x1*1.1 + y*0.8, where B_next is the next guidance bandwidth, x1 is the number of bytes over-transmitted this time, and y is the value of the guidance bandwidth when it becomes negative after the reduction.
Therefore, with this embodiment, multi-port data transmission is realized over multiple paths, ensuring low delay, high stability, and high quality of data transmission. The guidance bandwidth of the plurality of edge transmission nodes is determined from the transmission bandwidth of the data transmitted from the first edge computing node to the second edge computing node. The first edge computing node then allocates channel bandwidth to the plurality of edge transmission nodes according to the data metrics of each edge transmission node, and finally allocates tokens to the plurality of edge transmission nodes according to the allocated channel bandwidths. In this way, an appropriate number of tokens is allocated to each edge transmission node, achieving the technical effect of controlling the transmission traffic of the edge transmission nodes through the number of tokens. This further solves the prior-art technical problems of how to construct an application-layer protocol over UDP that ensures low delay, high stability, and high quality of data transmission, and how to calculate the transmission traffic under such a protocol.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk, and various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and amendments can be made without departing from the principle of the present invention, and these modifications and amendments should also be considered as the protection scope of the present invention.

Claims (9)

1. A method for controlling a transmission flow of data transmitted from a first edge computing node to a second edge computing node within a predetermined time interval, wherein the first edge computing node transmits the data to the second edge computing node via a plurality of edge transmission nodes respectively disposed on different transmission paths, the method comprising:
determining a guidance bandwidth according to a transmission bandwidth of data transmitted from the first edge computing node to the second edge computing node, wherein the guidance bandwidth is used for indicating a total bandwidth of bandwidths allocated to the plurality of edge transmission nodes;
determining the channel bandwidths allocated to the plurality of edge transmission nodes according to the guidance bandwidth and the data metrics of the plurality of edge transmission nodes; and
determining the number of tokens of the plurality of edge transmission nodes according to the channel bandwidth, wherein the number of tokens is used for controlling the transmission flow of the plurality of edge transmission nodes, and
the method further comprises: when the data transmitted last time exceeds the number of bytes corresponding to the guidance bandwidth calculated last time, adjusting the guidance bandwidth by the following formula: B = B - x0*1.1, wherein B is the guidance bandwidth and x0 is the number of bytes over-transmitted last time; and/or, when the data transmitted this time exceeds the number of bytes corresponding to the guidance bandwidth, setting the next guidance bandwidth as: B_next = B_next - x1*1.1 + y*0.8, wherein B_next is the next guidance bandwidth, x1 is the number of bytes over-transmitted this time, and y is the value of the guidance bandwidth when it becomes negative after the reduction.
2. The method of claim 1, wherein prior to determining the directive bandwidth based on the transmission bandwidth for the first edge computing node to transmit data to the second edge computing node, further comprising:
determining a total uplink bandwidth of the first edge computing node relative to the plurality of edge transmission nodes according to relevant information of data packets received by the plurality of edge transmission nodes from the first edge computing node within a predetermined time period;
determining a total downlink bandwidth of the second edge computing node relative to the plurality of edge transmission nodes according to the relevant information of the data packets received by the second edge computing node from the plurality of edge transmission nodes within the preset time period; and
determining the transmission bandwidth for the first edge computing node to transmit data to the second edge computing node according to the total uplink bandwidth and the total downlink bandwidth.
3. The method of claim 1, wherein determining, from a transmission bandwidth of the first edge computing node to transmit data to the second edge computing node, a directive bandwidth comprises:
determining the transmission delay for the first edge computing node to transmit data to the second edge computing node; and
determining the guidance bandwidth according to the transmission delay and the transmission bandwidth.
4. The method of claim 2, wherein determining a transmission delay for the first edge computing node to transmit data to the second edge computing node comprises:
determining a plurality of first delays from the first edge computing node to the plurality of edge transmission nodes according to first time information recorded by the first edge computing node after the plurality of edge transmission nodes receive a first probe packet from the first edge computing node and return time information related to the first probe packet to the first edge computing node;
determining a plurality of second delays from the second edge computing node to the plurality of edge transmission nodes according to second time information recorded by the second edge computing node after the plurality of edge transmission nodes receive a second probe packet from the second edge computing node and return time information related to the second probe packet to the second edge computing node; and
determining the transmission delay of the data transmitted from the first edge computing node to the second edge computing node according to the plurality of first delays and the plurality of second delays.
5. The method of claim 1, wherein the data metrics comprise a transmission data type, a channel packet loss rate and a channel time delay, and determining the channel bandwidths allocated to the plurality of edge transmission nodes according to the guidance bandwidth and the data metrics of the plurality of edge transmission nodes comprises:
determining the channel packet loss rates corresponding to the plurality of edge transmission nodes respectively;
determining the channel time delay corresponding to each of the plurality of edge transmission nodes; and
determining the channel bandwidth according to the transmission data types, the channel packet loss rates and the channel time delays respectively corresponding to the plurality of edge transmission nodes.
6. The method of claim 1, wherein determining the number of tokens for the plurality of edge transmission nodes based on the channel bandwidth comprises:
determining the number of tokens according to the formula:
N=Bdi/8/1000
wherein N is the number of tokens to be released per millisecond, Bdi is the channel bandwidth allocated to the edge transmission node in bits per second, 8 is the number of bits per byte, and 1000 is the conversion factor between seconds and milliseconds.
7. A storage medium comprising a stored program, wherein the method of any one of claims 1 to 6 is performed by a processor when the program is run.
8. A transmission flow control apparatus for controlling the flow of data transmitted from a first edge computing node to a second edge computing node within a predetermined time interval, wherein the first edge computing node transmits the data to the second edge computing node via a plurality of edge transmission nodes respectively disposed on different transmission paths, the apparatus comprising:
a first determining module, configured to determine, according to a transmission bandwidth of data transmitted by the first edge computing node to the second edge computing node, a guidance bandwidth, where the guidance bandwidth is used to indicate a total bandwidth of bandwidths allocated to the plurality of edge transmission nodes;
a second determining module, configured to determine, according to the guidance bandwidth and the data indicators of the multiple edge transmission nodes, channel bandwidths allocated to the multiple edge transmission nodes; and
a third determining module, configured to determine the number of tokens of the plurality of edge transmission nodes according to the channel bandwidth, where the number of tokens is used to control the transmission traffic of the plurality of edge transmission nodes, and
the device still includes: when the data transmitted last time exceeds the byte number corresponding to the instruction bandwidth calculated last time, the instruction bandwidth is adjusted through the following formula: b-x 0 *1.1, wherein B is the instruction bandwidth and x0 is the last super-sent byte; and/or under the condition that the data transmitted at this time exceeds the byte number corresponding to the guidance bandwidth, the guidance bandwidth at the next time is as follows: b _ next-x 1 *1.1+ y 0.8, B _nextis the next instruction bandwidth, x1 is the super transmission byte of this time, and y is the value when the instruction bandwidth is negative after being reduced.
9. A transmission flow control apparatus for controlling the flow of data transmitted from a first edge computing node to a second edge computing node within a predetermined time interval, wherein the first edge computing node transmits the data to the second edge computing node via a plurality of edge transmission nodes respectively disposed on different transmission paths, the apparatus comprising:
a processor; and
a memory coupled to the processor for providing instructions to the processor for processing the following processing steps:
determining a guidance bandwidth according to a transmission bandwidth of data transmitted from the first edge computing node to the second edge computing node, wherein the guidance bandwidth is used for indicating a total bandwidth of bandwidths allocated to the plurality of edge transmission nodes;
determining the channel bandwidths allocated to the plurality of edge transmission nodes according to the guidance bandwidth and the data metrics of the plurality of edge transmission nodes; and
determining the number of tokens of the plurality of edge transmission nodes according to the channel bandwidth, wherein the number of tokens is used for controlling the transmission flow of the plurality of edge transmission nodes, and
the device still includes: when the data transmitted last time exceeds the byte number corresponding to the guiding bandwidth calculated last time, the guiding bandwidth is adjusted by the following formula: b-x 0 *1.1, wherein B is the guide bandwidth and x0 isLast super-sent byte; and/or under the condition that the data transmitted at this time exceeds the byte number corresponding to the guidance bandwidth, the guidance bandwidth at the next time is as follows: b _ next-x 1 *1.1+ y 0.8, B _nextis the next instruction bandwidth, x1 is the super transmission byte of this time, and y is the value when the instruction bandwidth is negative after being reduced.
CN202110547626.4A 2021-05-19 2021-05-19 Transmission flow control method, device and storage medium Active CN113162869B (en)


Publications (2)

Publication Number Publication Date
CN113162869A CN113162869A (en) 2021-07-23
CN113162869B true CN113162869B (en) 2023-03-28


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1878134A (en) * 2006-07-10 2006-12-13 武汉理工大学 Time-delay constrained multipath routing method for Ad hoc network
JP2008278207A (en) * 2007-04-27 2008-11-13 Nec Corp Available bandwidth estimation system, stream data distribution system, method, and program
CN103780515A (en) * 2014-02-12 2014-05-07 华为技术有限公司 Method and controller for announcing bandwidth of cluster system
CN105450536A (en) * 2015-11-12 2016-03-30 北京交通大学 Data distribution method and data distribution device
CN106982238A (en) * 2016-01-18 2017-07-25 华为技术有限公司 A kind of method, policy control center and main frame for distributing network path resource
WO2021057447A1 (en) * 2019-09-27 2021-04-01 华为技术有限公司 Method for determining required bandwidth for data stream transmission, and devices and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9379956B2 (en) * 2014-06-30 2016-06-28 Nicira, Inc. Identifying a network topology between two endpoints



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant