CN108092888B - Transmission method, gateway and transmission system based on Overlay network

Transmission method, gateway and transmission system based on Overlay network

Info

Publication number
CN108092888B
Authority
CN
China
Prior art keywords
gateway
path
buffer queue
rate
sending
Prior art date
Legal status
Active
Application number
CN201711062264.XA
Other languages
Chinese (zh)
Other versions
CN108092888A (en)
Inventor
黄伊
夏寅贲
曹玮
于文静
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201711062264.XA
Publication of CN108092888A
Application granted
Publication of CN108092888B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/12 Shortest path evaluation
    • H04L45/121 Shortest path evaluation by minimising delays
    • H04L45/24 Multipath
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/25 Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

An embodiment of the application discloses a transmission method, a gateway, and a transmission system based on an Overlay network, which are used to adjust the sending rate of the gateway in real time and improve the utilization of network resources. The method in the embodiment of the application includes the following steps: when a first data packet of a target data stream enters a first gateway, the first gateway calculates at least two Overlay network alternative paths according to first measurement data; the first gateway selects the alternative path with the minimum total path delay from the Overlay network alternative paths as a first path of the target data stream; the first gateway sends the data messages of the target data stream to a second gateway through the first path; the first gateway acquires second measurement data on the first path; and the first gateway adjusts the sending rate of the data messages according to the second measurement data.

Description

Transmission method, gateway and transmission system based on Overlay network
Technical Field
The present application relates to the field of communications, and in particular, to a transmission method, a gateway, and a transmission system based on an Overlay network.
Background
Conventional wide area network (WAN) transmission is mainly based on an Underlay bearer network (Underlay), and performs route forwarding according to the forwarding hop count of a path, through intra-domain routing protocols (for example, Open Shortest Path First (OSPF)) and inter-domain routing protocols (for example, the Border Gateway Protocol (BGP)). However, the number of forwarding hops of a route does not equal the transmission performance of path forwarding. A hop-count-based routing strategy cannot guarantee good transmission performance, due to reasons such as congestion or failure of network equipment, uneven network distribution, and dynamic traffic changes. To address this problem, software defined wide area network (SD-WAN) transmission adopts an Overlay Network (Overlay Network) mode, in which a logical network is constructed on top of the physical network for transmission. In this way, the data transmission path is no longer determined by the physical topology of the routers, but by variables, such as network transmission performance, that directly measure the transmission effect, thereby effectively improving the quality of service (QoS) of data streams and the effective utilization of the network.
In an existing scheme, the Sure Route technology is adopted, and the Overlay effect is achieved by application-layer forwarding among network nodes. For a single web-browsing or file-download operation by a user from a server data source, the content is forwarded by the data source through the application layers of several nodes before reaching the user. Unlike direct transmission over the Underlay network, the nodes through which the transmission passes form an optimized path selected on the basis of a set of measurement data. The scheme requires the data source server to place test files on the source server, and each forwarding node in the network can download this content directly over the Underlay network, thereby obtaining the transmission performance from the data source to that node. Meanwhile, every forwarding node measures the transmission performance to its adjacent forwarding nodes, and the access point closest to the user can calculate which sequence of forwarding nodes yields the highest transmission performance. In this way, the transmission path from the data source to the user avoids congested and unavailable nodes and achieves an acceleration effect, as shown in fig. 1.
In the existing scheme, although network congestion is taken into account and avoided as much as possible, a test file needs to be placed on the data source server, the size of the test file affects the transmission performance of the transmission path, the sending rate of the gateway cannot be adjusted in real time according to the actual condition of the network, and the utilization of network resources is low.
Disclosure of Invention
The embodiment of the application provides a transmission method, a gateway and a transmission system based on an Overlay network, which are used for adjusting the sending rate of the gateway in real time and improving the utilization rate of network resources.
A first aspect of the present application provides a transmission method based on an Overlay network, including: when a first data packet of a target data stream enters a first gateway, the first gateway calculates at least two Overlay network alternative paths according to first measurement data, the Overlay network alternative paths are used for forwarding the target data stream from the first gateway to a second gateway, and the first measurement data comprise information used for indicating transmission delay between any network nodes; the first gateway selects an alternative path with the minimum total path delay from the alternative paths of the Overlay network as a first path of the target data stream; the first gateway sends the data message of the target data stream to the second gateway through the first path; the first gateway acquires second measurement data on the first path; and the first gateway adjusts the sending rate of the data message according to the second measurement data. In the embodiment of the present application, the target data stream is transmitted through the Overlay path with the minimum path delay, that is, the first path. On the first path, when the network is not continuously congested, the target data stream can be sent as soon as possible to increase the throughput; when the first path has continuous congestion, the first gateway can adjust the sending rate of the target data stream on the first path to a value as large as possible in time on the premise of not causing congestion, so that the transmission performance is improved. The utilization rate of network resources is improved.
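As an illustration of the path-selection step in this aspect, the following Python sketch sums the measured per-hop delays of each candidate Overlay path and picks the one with the smallest total. The data structures, node names, and function names are assumptions made for illustration only; they are not part of the claimed method.

```python
from typing import Dict, List, Tuple

# Assumed representation: a path is a list of node names, and the first
# measurement data is a per-hop delay table in milliseconds.
DelayTable = Dict[Tuple[str, str], float]

def total_path_delay(path: List[str], delays: DelayTable) -> float:
    """Sum the transmission delays of consecutive hops along one Overlay path."""
    return sum(delays[(a, b)] for a, b in zip(path, path[1:]))

def choose_first_path(candidates: List[List[str]], delays: DelayTable) -> List[str]:
    """Return the Overlay candidate path with the minimum total path delay."""
    return min(candidates, key=lambda p: total_path_delay(p, delays))

# Tiny usage example with made-up node names and delays:
delays = {("GW1", "FN1"): 10.0, ("FN1", "GW2"): 5.0,
          ("GW1", "FN2"): 8.0, ("FN2", "GW2"): 9.0}
paths = [["GW1", "FN1", "GW2"], ["GW1", "FN2", "GW2"]]
assert choose_first_path(paths, delays) == ["GW1", "FN1", "GW2"]
```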
In one possible design, in a first implementation manner of the first aspect of the embodiment of the present application, the Overlay network further includes a controller, and before the first gateway calculates the at least two Overlay network alternative paths according to the first measurement data, the method further includes: the first gateway periodically acquiring the first measurement data sent by the controller, where the first measurement data includes information indicating the transmission delay between any network nodes. This implementation adds the process of acquiring the first measurement data, making the steps of the embodiment of the present application more complete.
In a possible design, in a second implementation manner of the first aspect of the embodiment of the present application, before the first gateway sends the data packet of the target data flow to the second gateway through the first path, the method further includes: the first gateway establishing, for the first path, a first output buffer queue and a first routing information block corresponding to the input buffer queue; and the first gateway determining a first initial generation rate according to the initial sending rate of the first path, and generating a first token for the first path according to the first initial generation rate, where the first token is used to indicate that a data packet arriving at the input buffer queue enters the first output buffer queue. This implementation adds the process of establishing an output buffer queue and a routing information block for the first path, providing an additional implementation manner of the embodiment of the present application.
In a possible design, in a third implementation manner of the first aspect of the embodiment of the present application, the sending, by the first gateway, the data packet of the target data flow to the second gateway through the first path includes: the first gateway obtains a corresponding number of data packets from the input buffer queue according to the number of the first tokens; and the first gateway encapsulates and forwards the acquired data packets with the corresponding quantity through the first output buffer queue. In the embodiment of the application, the specific process of sending the data message is refined, and the implementation modes of the embodiment of the application are increased.
In a possible design, in a fourth implementation manner of the first aspect of the embodiment of the present application, after the first gateway obtains the second measurement data on the first path, the method further includes: the first gateway calculating the current sending rate BW and the current path Delay according to the second measurement data, and updating the first routing information block, where the first routing information block includes a highest sending rate maxBW, a lowest path delay minDelay, a congestion judgment time Delta, a congestion judgment coefficient a, and a rate adjustment coefficient b, where a > 1 and b < 1. This implementation explains the process of updating the routing information block, increasing the realizability and operability of the embodiment of the present application.
In a possible design, in a fifth implementation manner of the first aspect of the embodiment of the present application, the calculating, by the first gateway, a current sending rate BW and a path Delay according to the second measurement data, and updating the first routing information block includes: if the current sending rate BW is greater than the highest sending rate maxBW, the first gateway updates the highest sending rate; if the current path Delay is smaller than the lowest path Delay minDelay, the first gateway updates the lowest path Delay; if the current path Delay is greater than the product of the minimum path Delay minDelay and the congestion judgment coefficient a, the first gateway determines that the first path is congested and calculates a new sending rate newBW, where the new sending rate newBW is the product of the maximum sending rate maxBW and the rate adjustment coefficient b. In the embodiment of the present application, a specific manner of updating the first routing block is described, so that the embodiment of the present application is more complete in step.
In a possible design, in a sixth implementation manner of the first aspect of the embodiment of the present application, the method further includes: if the current path Delay is greater than the product of the minimum path Delay minDelay and the congestion judgment coefficient a, the first gateway generates the first token according to a new generation rate, and transmits the target data stream according to the number of the first tokens, wherein the new generation rate is determined according to the new transmission rate; if the current path Delay is less than or equal to the product of the minimum path Delay minDelay and the congestion judgment coefficient a, the first gateway recovers the initial generation rate of the first path and sends the target data stream according to the number of the first tokens. According to the embodiment of the application, the process of adjusting the rate of generating the first token according to the congestion condition is added, and the implementation mode of the embodiment of the application is added.
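The rate-adjustment rule of the fourth to sixth implementations can be pictured with the following minimal Python sketch, which only reuses the quantities named in the text (BW, Delay, maxBW, minDelay, a, b and newBW = b × maxBW); the function name, parameter order, and return convention are illustrative assumptions.

```python
def adjust_sending_rate(current_bw: float, current_delay: float,
                        max_bw: float, min_delay: float,
                        a: float, b: float, initial_rate: float):
    """Return (updated maxBW, updated minDelay, sending rate to use next)."""
    if current_bw > max_bw:            # record a new highest sending rate
        max_bw = current_bw
    if current_delay < min_delay:      # record a new lowest path delay
        min_delay = current_delay
    if current_delay > a * min_delay:  # persistent extra delay: treat as congestion
        return max_bw, min_delay, b * max_bw   # back off: newBW = b * maxBW
    return max_bw, min_delay, initial_rate     # otherwise keep / restore the initial rate

# Example: delay 10 ms against minDelay 5 ms with a = 1.2, b = 0.8, maxBW 5 MBps.
assert adjust_sending_rate(5.0, 10.0, 5.0, 5.0, 1.2, 0.8, 5.0) == (5.0, 5.0, 4.0)
```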
In a possible design, in a seventh implementation manner of the first aspect of the embodiment of the present application, after the first gateway adjusts the sending rate of the data packet according to the second measurement data, the method further includes: the first gateway judges whether the number of the data packets of the input buffer queue is greater than a preset threshold Th or not, and the duration exceeds the congestion judgment time Delta; if the number of the data packets of the input buffer queue is greater than or equal to a preset threshold Th and the duration exceeds the congestion judgment time Delta, the first gateway selects an alternative path with the minimum total path delay from the remaining alternative paths of the Overlay network as a second path of the target data stream; and the first gateway sends the data message of the target data stream to the second gateway through the first path and the second path. According to the embodiment of the application, when the first path does not meet the transmission requirement, the second path is added to send the data message of the target data stream, and the implementation mode of the embodiment of the application is increased.
In a possible design, in an eighth implementation manner of the first aspect of the embodiment of the present application, before the gateway sends the data packet of the target data flow to the second gateway through the first path and the second path, the method further includes: the first gateway establishes a second output buffer queue and a second routing information block corresponding to the input buffer queue for the second path; and the first gateway determines a second initial generation rate according to the initial sending rate of the second path, and generates a second token for the second path according to the second initial generation rate, wherein the second token is used for indicating the data message reaching the input buffer queue to enter the second output buffer queue, and the initial sending rate of the second path is the difference value of the sending rate of the sending end and the initial sending rate of the first path. In the embodiment of the application, a process of establishing a second output buffer path and a second routing information block for a second path is added, so that the embodiment of the application is more complete in steps.
In a possible design, in a ninth implementation manner of the first aspect of the embodiment of the present application, after the first gateway determines a second initial generation rate according to an initial sending rate of the second path and generates a second token for the second path according to the second initial generation rate, the method further includes: if the number of the data packets of the input buffer queue is smaller than the Th, the first gateway stops generating the second token for the second path, so that the target data stream is transmitted from the first path; if the number of the data packets in the input buffer queue is still greater than or equal to Th, the first gateway continues to select a new path from the remaining Overlay network alternative paths to carry the target data stream until the Overlay network alternative paths are exhausted. In the embodiment of the present application, it is determined again whether the number of data packets input into the buffer queue is greater than the preset threshold, if so, a new path transmission target data stream is added, and if not, generation of the second token is stopped, so that the implementation manner of the embodiment of the present application is increased.
In a possible design, in a tenth implementation manner of the first aspect of the embodiment of the present application, after the first gateway continues to select a new path from the remaining Overlay network alternative paths to carry the target data stream, the method further includes: the first gateway judging whether the number of data packets in the input buffer queue has reached the maximum value that the input buffer queue can hold; if the number of data packets in the input buffer queue has reached the maximum value, the first gateway performs packet loss processing using different active queue management (AQM) mechanisms, or the first gateway sends a speed reduction instruction to the sending end using an explicit congestion notification (ECN) mechanism, where the speed reduction instruction is used to instruct the sending end to reduce its sending rate. This implementation again judges whether the number of data packets in the input buffer queue has reached the maximum value and, if so, performs packet loss processing or reduces the sending rate of the sending end, improving the realizability and operability of the embodiment.
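A hedged sketch of the offload and back-pressure behaviour described in the seventh to tenth implementations: when the input buffer queue stays at or above the threshold Th for longer than Delta, the gateway activates the next candidate path; once the candidate paths are exhausted and the queue is full, it falls back to AQM-style dropping or ECN back-pressure. All identifiers below are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OffloadState:
    """Illustrative per-flow state for the offload decision (names are assumptions)."""
    threshold_th: int                 # preset threshold Th on the input buffer queue
    delta_s: float                    # congestion judgment time Delta, in seconds
    over_since: Optional[float] = None

def offload_decision(state: OffloadState, queue_len: int, queue_cap: int,
                     remaining_paths: int, now: float) -> str:
    """Return the action the first gateway takes at time `now` (a label only)."""
    if queue_len >= state.threshold_th:
        if state.over_since is None:
            state.over_since = now                   # start timing the overload
        elif now - state.over_since > state.delta_s:
            state.over_since = now
            if remaining_paths > 0:
                return "activate-next-candidate-path"    # add a second/third path
            if queue_len >= queue_cap:
                return "aqm-drop-or-ecn-backpressure"    # queue full, paths exhausted
    else:
        state.over_since = None                      # fell below Th: stop offloading
    return "keep-current-paths"
```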
A second aspect of the present application provides a gateway, which is a first gateway, including: a calculating unit, configured to calculate at least two Overlay network alternative paths according to first measurement data when a first data packet of the target data stream enters the first gateway, where the Overlay network alternative paths are used to forward a data packet of the target data stream from the first gateway to the second gateway, and the first measurement data includes information used to indicate transmission delay between any network nodes; a selecting unit, configured to select, as a first path of the target data stream, an alternative path with a minimum total path delay from the alternative paths of the Overlay network; a sending unit, configured to send the data packet of the target data flow to the second gateway through the first path; an obtaining unit, configured to obtain second measurement data on the first path; and the processing unit is used for adjusting the sending rate of the data message according to the second measurement data. In the embodiment of the present application, the target data stream is transmitted through the Overlay path with the minimum path delay, that is, the first path. On the first path, when the network is not continuously congested, the target data stream can be sent as soon as possible to increase the throughput; when the first path has continuous congestion, the first gateway can adjust the sending rate of the target data stream on the first path to a value as large as possible in time on the premise of not causing congestion, so that the transmission performance is improved. The utilization rate of network resources is improved.
In one possible design, in a first implementation manner of the second aspect of the embodiment of the present application: the obtaining unit is further configured to periodically obtain the first measurement data sent by the controller, where the first measurement data includes information indicating the transmission delay between any network nodes. This implementation adds the process of acquiring the first measurement data, making the steps of the embodiment of the present application more complete.
In a possible design, in a second implementation manner of the second aspect of the embodiment of the present application, the gateway further includes: an establishing unit, configured to establish, for the first path, a first output buffer queue and a first routing information block corresponding to the input buffer queue; the processing unit is further configured to determine a first initial generation rate according to the initial sending rate of the first path, and generate a first token for the first path according to the first initial generation rate, where the first token is used to indicate that a packet arriving at the input buffer queue enters the first output buffer queue. This implementation adds the process of establishing an output buffer queue and a routing information block for the first path, providing an additional implementation manner of the embodiment of the present application.
In a possible design, in a third implementation manner of the second aspect of the embodiment of the present application, the sending unit is specifically configured to: acquiring a corresponding number of data packets from the input buffer queue according to the number of the first tokens; and encapsulating and forwarding the acquired data packets with the corresponding quantity through the first output buffer queue. In the embodiment of the application, the specific process of sending the data message is refined, and the implementation modes of the embodiment of the application are increased.
In a possible design, in a fourth implementation manner of the second aspect of the embodiment of the present application, the processing unit is specifically configured to: calculate the current sending rate BW and the current path Delay according to the second measurement data, and update the first routing information block, where the first routing information block includes a highest sending rate maxBW, a lowest path delay minDelay, a congestion judgment time Delta, a congestion judgment coefficient a, and a rate adjustment coefficient b, where a > 1 and b < 1. This implementation explains the process of updating the routing information block, increasing the realizability and operability of the embodiment of the present application.
In a possible design, in a fifth implementation manner of the second aspect of the embodiment of the present application, the processing unit is further configured to: if the current sending rate BW is larger than the highest sending rate maxBW, updating the highest sending rate; if the current path Delay is smaller than the lowest path Delay minDelay, updating the lowest path Delay; if the current path Delay is greater than the product of the minimum path Delay minDelay and the congestion judgment coefficient a, determining that the first path is congested, and calculating a new sending rate newBW, where the new sending rate newBW is the product of the maximum sending rate maxBW and the rate adjustment coefficient b. In the embodiment of the present application, a specific manner of updating the first routing block is described, so that the embodiment of the present application is more complete in step.
In a possible design, in a sixth implementation manner of the second aspect of the embodiment of the present application, the processing unit is further configured to: if the current path Delay is greater than the product of the minimum path Delay minDelay and the congestion judgment coefficient a, generating the first token according to a new generation rate, and transmitting the target data stream according to the number of the first tokens, wherein the new generation rate is determined according to the new transmission rate; and if the current path Delay is less than or equal to the product of the minimum path Delay minDelay and the congestion judgment coefficient a, recovering the initial generation rate of the first path, and sending the target data stream according to the number of the first tokens. According to the embodiment of the application, the process of adjusting the rate of generating the first token according to the congestion condition is added, and the implementation mode of the embodiment of the application is added.
In a possible design, in a seventh implementation manner of the second aspect of the embodiment of the present application, the processing unit is further configured to: judging whether the number of the data packets of the input buffer queue is greater than a preset threshold Th or not, and the duration exceeds the congestion judgment time Delta; if the number of the data packets of the input buffer queue is greater than or equal to a preset threshold Th and the duration exceeds the congestion judgment time Delta, selecting an alternative path with the minimum total path delay from the rest alternative paths of the Overlay network as a second path of the target data stream; and sending the data message of the target data flow to the second gateway through the first path and the second path. According to the embodiment of the application, when the first path does not meet the transmission requirement, the second path is added to send the data message of the target data stream, and the implementation mode of the embodiment of the application is increased.
In one possible design, in an eighth implementation manner of the second aspect of the embodiment of the present application, the method includes: the establishing unit is further configured to establish a second output buffer queue and a second routing information block corresponding to the input buffer queue for the second path; the processing unit is further configured to determine a second initial generation rate according to the initial sending rate of the second path, and generate a second token for the second path according to the second initial generation rate, where the second token is used to indicate that the data packet arriving at the input buffer queue enters the second output buffer queue, and the initial sending rate of the second path is a difference value between the sending rate of the sending end and the initial sending rate of the first path. In the embodiment of the application, a process of establishing a second output buffer path and a second routing information block for a second path is added, so that the embodiment of the application is more complete in steps.
In a possible design, in a ninth implementation manner of the second aspect of the embodiment of the present application, the processing unit is further configured to: and if the number of the data packets of the input buffer queue is smaller than the Th, stopping generating the second token for the second path, so that the target data stream is transmitted from the first path. If the number of the data packets of the input buffer queue is still greater than or equal to Th, continuing to select a new path from the remaining Overlay network alternative paths to carry the target data stream until the Overlay network alternative paths are exhausted. In the embodiment of the present application, it is determined again whether the number of data packets input into the buffer queue is greater than the preset threshold, if so, a new path transmission target data stream is added, and if not, generation of the second token is stopped, so that the implementation manner of the embodiment of the present application is increased.
In a possible design, in a tenth implementation manner of the second aspect of the embodiment of the present application, the processing unit is further configured to: judge whether the number of data packets in the input buffer queue has reached the maximum value that the input buffer queue can hold; and if the number of data packets in the input buffer queue has reached the maximum value, perform packet loss processing using different active queue management (AQM) mechanisms, or send a speed reduction instruction to the sending end using an explicit congestion notification (ECN) mechanism, where the speed reduction instruction is used to instruct the sending end to reduce its sending rate. This implementation again judges whether the number of data packets in the input buffer queue has reached the maximum value and, if so, performs packet loss processing or reduces the sending rate of the sending end, improving the realizability and operability of the embodiment.
A third aspect of the present application provides a gateway, comprising: the device comprises a memory, a transceiver and at least one processor, wherein instructions are stored in the memory, the transceiver and the at least one processor are interconnected through lines, and the at least one processor calls the instructions to execute the method of any one of the first aspect.
A fourth aspect of the present application provides a transmission system based on an Overlay network, which includes: a gateway, a forwarding node and a controller; the gateway is configured to perform the method of any of the above first aspects; the gateway is also used for measuring the transmission delay between adjacent network nodes; the forwarding node is used for realizing the forwarding of the data packet and the measurement of the transmission delay between adjacent network nodes; the controller is used for collecting and distributing measurement data of any network node.
A fifth aspect of the present application provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the method of the above-described aspects.
A sixth aspect of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the above aspects.
Drawings
FIG. 1 is a schematic diagram of a prior art process for forwarding a data flow;
FIG. 2 is a schematic diagram of a network architecture applied in the embodiment of the present application;
fig. 3 is a schematic diagram of a measurement and forwarding process of a gateway in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a gateway in an embodiment of the present application;
fig. 5 is a schematic diagram of an embodiment of a transmission method based on an Overlay network in the embodiment of the present application;
fig. 6 is a schematic diagram of forwarding of a data flow in a gateway in an embodiment of the present application;
FIG. 7 is a schematic diagram of an application scenario according to an embodiment of the present application;
fig. 8 is a schematic diagram of another embodiment of a transmission method based on an Overlay network in the embodiment of the present application;
FIG. 9 is a schematic diagram of another application scenario according to an embodiment of the present application;
fig. 10 is a schematic diagram of another embodiment of the gateway in the embodiment of the present application;
fig. 11 is a schematic diagram of another embodiment of the gateway in the embodiment of the present application.
Detailed Description
The embodiment of the application provides a transmission method, a gateway and a transmission system based on an Overlay network, which are used for adjusting the sending rate of the gateway in real time and improving the utilization rate of network resources.
In order to enable those skilled in the art to better understand the solution of the present application, the embodiments of the present application are described below with reference to the accompanying drawings.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the present application may be applied to a network architecture as shown in fig. 2, where the network architecture includes a gateway node (GW) located at the edge of the network, forwarding nodes located inside the network, and a Controller. The main functions of the gateway node (gateway) are to implement data traffic forwarding, transmission performance measurement between nodes, data traffic aggregation, rate control, load balancing decisions, and the like; the main functions of a forwarding node are to implement data packet forwarding, transmission performance measurement between nodes, and the like; the main functions of the controller are to implement network topology management, distribution of performance measurement data, and the like. The direct connections between the gateway and the forwarding nodes and between the forwarding nodes each represent an Overlay path, and each Overlay path needs a network tunnel to carry data traffic, for example, the generic routing encapsulation protocol (GRE), a Virtual Private Network (VPN), and so on. An end-to-end forwarding path (i.e., an Overlay path) is formed by a plurality of nodes and the logical connections between them.
For the above network architecture, the operation of the whole network architecture is mainly divided into two processes: and measuring and forwarding, as shown in fig. 3. For the measurement process, all nodes bearing data forwarding functions, mainly including gateway nodes and forwarding nodes, perform transmission performance measurement between any two adjacent nodes (tunnel connection is established) periodically (a settable time period), the measured data is information of transmission delay, the measured data is reported to a global controller from a measurement initiating node, and the global controller collects the measurement information and then distributes the collected information to each node, so that each node can obtain the transmission performance information between each node in the network in real time. For the forwarding process, each data stream is converged to an access gateway (Ingress gateway, Ingress GW) closest to the user after being generated, and the forwarding behavior of this data stream is controlled by the gateway, including determining an end-to-end forwarding path according to the calculated path delay, controlling a real-time sending rate (current sending rate), selecting an alternative path for offloading, and reducing the overall flow rate. When one or more Overlay paths of a data stream are determined, data messages on the data stream are sent to a gateway adjacent to a receiving end through one to a plurality of forwarding nodes according to the determined paths and are finally received by the receiving end. In the data stream sending process, the gateway can adjust the sending rate of the data stream forwarded from the gateway according to the real-time network condition, and selects a plurality of Overlay paths to forward the data message of the same data stream when detecting that the flow rate entering the gateway is higher than the forwarding rate. The measurement process and the forwarding process are performed simultaneously and relatively independently, the measurement process is directed to an Overlay path between each node, and the forwarding process is directed to each data flow.
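The measurement process described above can be pictured as a periodic task on every forwarding-capable node: measure the delay to each tunnel neighbour, report it to the controller, and let the controller redistribute the aggregated table. The Python snippet below is an in-process toy simulation of that loop; the class and function names are assumptions, and a real deployment would use the network tunnels and controller protocol described here rather than direct calls.

```python
from typing import Dict, Tuple

class Controller:
    """Toy global controller that aggregates and redistributes delay reports."""
    def __init__(self) -> None:
        self.delays: Dict[Tuple[str, str], float] = {}

    def report(self, src: str, dst: str, delay_ms: float) -> None:
        self.delays[(src, dst)] = delay_ms          # collect one measurement

    def distribute(self) -> Dict[Tuple[str, str], float]:
        return dict(self.delays)                    # full view handed to every node

def measurement_round(node: str, neighbours: Dict[str, float],
                      controller: Controller) -> None:
    """One periodic round: measure each adjacent tunnel and report the result."""
    for peer, delay_ms in neighbours.items():       # delay_ms stands in for a probe
        controller.report(node, peer, delay_ms)

# Example: two nodes report, then each node would pull the aggregated table.
ctrl = Controller()
measurement_round("GW1", {"FN1": 10.0, "FN5": 10.0}, ctrl)
measurement_round("FN1", {"FN4": 5.0}, ctrl)
assert ctrl.distribute()[("FN1", "FN4")] == 5.0
```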
Fig. 4 is a physical structure diagram of a gateway provided in an embodiment of the present application, where the gateway specifically includes: the main control board 410 comprises a central processing unit 411, an Overlay routing table entry storage 412 and an Overlay network performance measurement storage 413. Interface board 420 comprises central processor 421, network processor 422, physical interface card 423 and routing table entry storage 424; interface board 430 includes a central processor 431, a network processor 432, a physical interface card 433, and a routing table entry memory 434. The operation of each hardware will be described below.
The central processor 411 is specifically configured to: after receiving an Overlay routing establishment request sent by the central processing unit 421, reading information of the Overlay network performance measurement storage 413 and calculating a plurality of Overlay paths, where each Overlay path includes a plurality of Overlay routing table entries, and storing the Overlay routing table entries in the Overlay routing table entry storage 412; calculating the Overlay routing change of a certain data stream on the interface board 420, reading the changed Overlay routing table entry from the Overlay routing table entry memory 412, and then sending the changed Overlay routing table entry to the Overlay forwarding table entry memory 424 of the interface board 420; after receiving the forwarding request sent by the central processing unit 431, the IP packet is delivered to the central processing unit 421 for processing. The Overlay routing table entry storage 412 is specifically configured to: all Overlay routing entries for a data flow are stored, including the preferred path and the alternate path. The Overlay network performance measurement memory 413 is specifically configured to: measurement information between network nodes is stored.
The central processing unit 421 is specifically configured to: for an IP packet to be forwarded over an Overlay, search the forwarding routing table to obtain the data flow route corresponding to the IP packet, add an Overlay forwarding encapsulation to the packet in a network tunnel (e.g., GRE) or source routing (Segment Routing) manner, and send the packet to the network processor 422; for an IP packet without Overlay forwarding, search the routing table and, after internal adaptation processing, send the packet to the network processor 422 for processing. The Overlay forwarding entry storage 424 is specifically configured to: store the forwarding exit table entry of the Overlay path and the hop-by-hop forwarding node addresses of the Overlay route. The network processor 422 is specifically configured to: according to the output interface information, send the IP message out from the physical interface card 423 after the link layer encapsulation is completed. The physical interface card 423 is specifically configured to: send the IP message received from the network processor 422.
Central processor 431 is specifically configured to: for the received message, the forwarding table entry storage 434 is queried, if the local machine is found to be the end point on the Overlay path of the message, the outer-layer Overlay routing information carried by the message is stripped, and the inner-layer IP data message is handed over to the central processing unit 411 for processing; if the quintuple corresponding to the data packet is not found in the Overlay forwarding entry storage 434, the quintuple information is included in the Overlay path establishment request and sent to the central processing unit 411 for processing. The network processor 432 is specifically configured to: when the Overlay data stream message is received and found to be a local message, the Overlay data stream message is reported to the central processing unit 431 for processing. The physical interface card 433 is specifically configured to: the Overlay data stream message is received on the network, and after being subjected to the relevant verification of the outer layer protocol, the Overlay data stream message is submitted to the network processor 432 for processing.
It should be noted that, in this embodiment of the application, a gateway near a sending end is a first gateway, a gateway near a receiving end is a second gateway, a preferred path selected from all Overlay network alternative paths is a first path, and other Overlay network alternative paths are a second path, a third path, and the like in sequence according to a selection order, which is not described herein again specifically. In the embodiment of the application, the data stream reaches the receiving end from the transmitting end through the first gateway, the at least one forwarding node and the second gateway. The embodiment of the present application takes a transmission process of a target data stream as an example for explanation.
For convenience of understanding, an embodiment of a transmission method based on an Overlay network provided in the present application is described below, please refer to fig. 5, where the embodiment may be specifically executed by a gateway, and the gateway may be the gateway described in fig. 2, 3, or 4, and the method in the embodiment may specifically include:
501. first measurement data sent by the controller is periodically acquired.
The first gateway periodically acquires first measurement data sent by the controller. The controller collects the measurement information reported by each node and then distributes the collected information to each node, so that each node can obtain the transmission performance information between each node in the network in real time, and the transmission performance comprises transmission delay. Specifically, a network connection is established between a first gateway and a controller, the controller encapsulates collected first measurement data in a data packet and sends the first measurement data to the first gateway according to a preset time interval, the first measurement data includes information of transmission delay between nodes, and the first gateway decapsulates the received data packet to obtain the information of transmission delay between any network nodes included in the first measurement data.
It should be noted that, there is no specific order between step 501 and step 502, and step 501 may be before step 502, may be after step 502, or may be executed simultaneously with step 502, which is not limited herein.
502. A data stream is obtained from a sender.
And the first gateway acquires the data stream from the sending end, and the first gateway is the gateway node closest to the sending end. The processing of the data stream in the first gateway can be referred to as shown in fig. 6. The first gateway has an input buffer queue, at least one output buffer queue, and routing information blocks corresponding to each output buffer queue one to one, the output buffer queues are sequentially named as a first output buffer queue, a second output buffer queue, a third output buffer queue, and the like according to an established order, and the routing information blocks corresponding to the output buffer queues are sequentially named as a first routing information block, a second routing information block, a third routing information block, and the like according to a corresponding order, which is not described herein again specifically.
It should be noted that data packets in data flows enter the input buffer queue after reaching the first gateway, each data flow corresponds to an input buffer queue with a maximum queue length, and the input buffer queue is configured with a preset threshold Th. In the embodiment of the present application, a target data flow is taken as an example for explanation. When the number of data packets buffered in the input buffer queue exceeds the threshold Th, the first gateway assigns more forwarding paths to the target data flow for concurrent transmission, so as to avoid long-term packet loss and congestion; if the queuing congestion of the input buffer queue still cannot be relieved at this time, the first gateway limits the traffic entering the first gateway through conventional packet dropping (for example, tail dropping or random packet dropping) or a congestion backpressure mechanism, for example, explicit congestion notification (ECN) or other mechanisms.
Each incoming data stream has at least one output buffer queue; the number of output buffer queues depends on the number of forwarding paths over which the data stream is output, and each Overlay path corresponds to one output buffer queue. Each output buffer queue corresponds to a Token count that changes in real time: the first gateway generates a corresponding number of tokens according to the number of data packets arriving at the input buffer queue per second, and each output buffer queue consumes one Token when sending one data packet. The number of tokens generated by the first gateway for each output buffer queue therefore represents the number of data packets that can be extracted from the input buffer queue and sent through that output buffer queue. When the Token value corresponding to an output buffer queue is not zero, the output buffer queue takes the number of data packets indicated by the Token from the input buffer queue and sends them onto the corresponding Overlay path.
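A compact sketch of the Token mechanism just described, assuming one token corresponds to one data packet: the gateway credits each output buffer queue with tokens as packets arrive in the input buffer queue, and each output queue drains as many packets as it holds tokens. The deque-based queues and names are illustrative.

```python
from collections import deque

class OutputQueue:
    """One output buffer queue bound to one Overlay path (illustrative)."""
    def __init__(self, path_id: str) -> None:
        self.path_id = path_id
        self.tokens = 0            # current Token value for this queue
        self.buffer: deque = deque()

    def grant(self, n: int) -> None:
        self.tokens += n           # gateway generates n tokens for this queue

    def drain(self, input_queue: deque) -> list:
        """Move up to `tokens` packets from the input queue onto this path."""
        sent = []
        while self.tokens > 0 and input_queue:
            packet = input_queue.popleft()
            self.tokens -= 1       # sending one packet consumes one token
            self.buffer.append(packet)
            sent.append(packet)
        return sent

# Example: three packets arrive, the first path's queue is granted three tokens.
input_q: deque = deque(["p1", "p2", "p3"])
first_path_q = OutputQueue("overlay-path-1")
first_path_q.grant(len(input_q))
assert first_path_q.drain(input_q) == ["p1", "p2", "p3"]
```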
It can be understood that each routing information block corresponds to a unique output buffer queue of a data stream, and thus also uniquely corresponds to an Overlay path through which the data stream passes. The routing information block includes: the Overlay path identification, the data stream identification, the highest sending rate (max bandwidth, maxBW) of the corresponding path, the lowest path delay (minDelay), the congestion judgment duration Delta, the congestion judgment coefficient a, and the rate adjustment coefficient b (Delta > 2 × minDelay, a > 1, b < 1). maxBW is the highest sending rate of the data stream counted by the first gateway within a fixed time window on the Overlay path, and minDelay is the minimum transceiving delay of the Overlay path calculated by the first gateway from the received measurement data within a preset time duration; the preset time duration may be 1 second or 10 seconds, which is not specifically limited herein.
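The routing information block maps naturally onto a small record. The sketch below mirrors the fields named in the text (Overlay path identification, data stream identification, maxBW, minDelay, Delta, a, b); the types, units, and default values (taken from the application scenario of fig. 7) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class RouteInformationBlock:
    """Per-path, per-flow routing information block (fields from the text)."""
    overlay_path_id: str              # Overlay path identification
    flow_id: str                      # data stream identification
    max_bw: float = 0.0               # maxBW, highest sending rate in the window
    min_delay: float = float("inf")   # minDelay, lowest measured path delay (ms)
    delta_ms: float = 100.0           # congestion judgment duration Delta (ms),
                                      # constrained by Delta > 2 * minDelay
    a: float = 1.2                    # congestion judgment coefficient, a > 1
    b: float = 0.8                    # rate adjustment coefficient, b < 1
```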
503. And when a first data packet of the target data flow enters the first gateway, calculating at least two Overlay network alternative paths according to the first measurement data.
When a first data packet of a target data stream enters a first gateway, the first gateway calculates at least two Overlay network alternative paths according to first measurement data, each Overlay network alternative path is used for forwarding the target data stream from the first gateway to a second gateway, and the first measurement data comprises information used for indicating transmission delay between any network nodes. Specifically, the first gateway determines transmission delay between any two nodes between the first gateway and the second gateway according to the first measurement data, and calculates at least two Overlay network alternative paths according to the connection relationship between the network nodes and the transmission delay between any two nodes, where the Overlay network alternative paths may send the target data stream from the first gateway to the second gateway. The first measurement data is measurement data acquired from the controller before the alternative path of the Overlay network is determined.
It should be noted that the network node includes a gateway and a forwarding node, and the measurement data is obtained in step 502, and is performed simultaneously with and relatively independent of the forwarding process.
504. And selecting the alternative path with the minimum total path delay from the alternative paths of the Overlay network as the first path of the target data stream.
The first gateway selects an alternative path with the minimum total path delay from the at least two alternative paths of the Overlay network as a first path of the target data stream. Specifically, the first gateway adds the transmission delays between all nodes on each Overlay network alternative path to obtain a corresponding total path delay, compares the total path delays corresponding to each Overlay network alternative path, and selects the Overlay network alternative path with the smallest total path delay as the first path, i.e., the preferred path.
505. A first output buffer queue and a first routing information block corresponding to the input buffer queue are established.
The first gateway establishes, for the first path, a first output buffer queue and a first routing information block corresponding to the input buffer queue.
It should be noted that the first output buffer queue corresponds to the first routing information block, and the first output buffer queue and the first routing information block correspond to the same data stream. The first path is a preferred path for forwarding the target data stream.
506. And determining a first initial generation rate according to the initial sending rate of the first path, and generating a first token for the first path according to the first initial generation rate.
And the first gateway determines a first initial generation rate according to the initial sending rate of the first path, and generates a first token for the first path according to the first initial generation rate, wherein the first token is used for indicating a data packet arriving at the input buffer queue to enter the first output buffer queue.
507. And acquiring a corresponding number of data packets from the input buffer queue according to the number of the first tokens.
And the first gateway acquires the data packets with the corresponding quantity from the input buffer queue according to the quantity of the first tokens.
It should be noted that, each time the input buffer queue receives a data packet, the first gateway generates a first Token (first Token) for the first output buffer queue, which indicates that the data packet may be sent to the first output buffer queue immediately for transmission, so that the number of the first tokens (i.e., the first Token value) is the number of data packet arrivals in the input buffer queue.
It is to be understood that the first token is used to indicate that the token generated by the first gateway corresponds to the first output buffer queue, and similarly, the second token is used to indicate that the token generated by the first gateway corresponds to the second output buffer queue, and there may also be a third token, etc., which is not limited herein. Note that the expressions such as the first token and the first token are used herein to indicate the output buffer queue corresponding to the generated token and to distinguish the attributes of the tokens, and the number of tokens is not limited, and there may be a plurality of first tokens and a plurality of second tokens.
508. And encapsulating and forwarding the acquired data packet through the first output buffer queue.
And the first gateway encapsulates and forwards the acquired data packets with the corresponding quantity through the first output buffer queue.
It should be noted that, after the first output buffer queue encapsulates and forwards the data packet, the data packet reaches the second gateway through the forwarding node on the first path, and the second gateway sends the data packet to the receiving end, so as to implement transmission of the data packet of the target data stream from the sending end to the receiving end.
509. And acquiring second measurement data on the first path.
The first gateway obtains second measurement data on the first path. The specific steps are similar to step 501, and are not described herein again.
510. And calculating the current sending rate, the current path delay and the current information of the first routing information block according to the second measurement data.
And the first gateway calculates the current sending rate (BW), the current path Delay (Delay) and the current information of the first routing information block according to the second measurement data, wherein the information of the current routing information block comprises the highest sending rate maxBW, the lowest path Delay minDelay, congestion judgment duration Delta, congestion judgment coefficient a and rate adjustment coefficient b, wherein a is more than 1, and b is less than 1.
511. And updating the first routing information block and forwarding the target data stream.
And the first gateway updates the first routing information block according to the current sending rate, the current path delay and the current information of the first routing information block, and forwards the target data stream according to the information contained in the updated first routing information block.
Specifically, if the current sending rate BW of the first gateway is greater than the highest sending rate maxBW, that is, the condition BW > maxBW is satisfied, the first gateway updates the highest sending rate maxBW and takes the current sending rate as the new highest sending rate; if the current path Delay of the first gateway is less than the lowest path delay minDelay, that is, the condition Delay < minDelay is satisfied, the first gateway updates the lowest path delay minDelay and takes the current path delay as the new lowest path delay; if the current path Delay of the first gateway is greater than the product of the lowest path delay and the congestion judgment coefficient, that is, the condition Delay > a × minDelay is satisfied (where a is the congestion judgment coefficient), the first gateway determines that the first path is congested and calculates a new sending rate newBW, where the new sending rate is the product of the highest sending rate and the rate adjustment coefficient, that is, newBW = b × maxBW (where b is the rate adjustment coefficient).
It should be noted that after generating the new sending rate, and until it is replaced by a later calculated sending rate, the first gateway periodically (that is, according to the new generation rate) generates a new first token every time length T and sends the target data stream according to the number of new tokens, where T = 1/newBW.
after the first path is determined to be congested and the target data stream is rate-limited, if the current path Delay becomes less than or equal to the product of the lowest path delay minDelay and the congestion judgment coefficient a, that is, the condition Delay ≤ a · minDelay is satisfied, the first gateway considers that the congestion has been eliminated, restores the initial generation rate of the first path, and sends the target data stream according to the number of first tokens.
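The RIB update and rate adjustment of steps 510 and 511 can be summarized in the following Python sketch. It is a simplified illustration under the constraints stated above (a > 1, b < 1); the names RouteInfoBlock and on_measurement are hypothetical.

```python
# Illustrative sketch of the routing-information-block update and the
# delay-based congestion check / rate adjustment described in the text.
from dataclasses import dataclass

@dataclass
class RouteInfoBlock:
    maxBW: float       # highest observed sending rate
    minDelay: float    # lowest observed path delay
    Delta: float       # congestion judgment duration (used by the queue check, not here)
    a: float = 1.2     # congestion judgment coefficient, a > 1
    b: float = 0.8     # rate adjustment coefficient, b < 1

def on_measurement(rib: RouteInfoBlock, BW: float, Delay: float, initial_rate: float) -> float:
    """Return the sending rate to use after processing one measurement."""
    if BW > rib.maxBW:
        rib.maxBW = BW                 # record a new highest sending rate
    if Delay < rib.minDelay:
        rib.minDelay = Delay           # record a new lowest path delay
    if Delay > rib.a * rib.minDelay:
        return rib.b * rib.maxBW       # congestion detected: newBW = b * maxBW
    return initial_rate                # no (or no longer any) congestion: initial rate
```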
In the embodiment of the present application, the target data stream is transmitted through the Overlay path with the minimum path delay, that is, the first path. On the first path, when the network is not continuously congested, the target data stream can be sent as fast as possible to increase throughput; when the first path is continuously congested, the first gateway can promptly adjust the sending rate of the target data stream on the first path to a value as large as possible without causing further congestion, thereby improving transmission performance and the utilization of network resources.
The above process is described below with reference to a specific application scenario, as shown in fig. 7. In the initial state, a target data stream of a certain user is transmitted through a first gateway (gateway 1) to a second gateway (gateway 2). When the first data packet of the target data stream enters gateway 1, gateway 1 selects two Overlay network alternative paths, {gateway 1, forwarding node 1, forwarding node 4, forwarding node 3, gateway 2} and {gateway 1, forwarding node 5, forwarding node 6, gateway 2}, according to the transmission delay information of each network node obtained from the controller. The transmission delay between gateway 1 and forwarding node 1 is 10 ms, between forwarding node 1 and forwarding node 4 is 5 ms, between forwarding node 4 and forwarding node 3 is 5 ms, and between forwarding node 3 and gateway 2 is 10 ms; the transmission delay between gateway 1 and forwarding node 5 is 10 ms, between forwarding node 5 and forwarding node 6 is 20 ms, and between forwarding node 6 and gateway 2 is 20 ms. The total delays of the two Overlay network alternative paths are therefore 30 ms and 50 ms respectively, so the alternative path with the smaller total path delay, {gateway 1, forwarding node 1, forwarding node 4, forwarding node 3, gateway 2}, is determined as the preferred path. The target data stream is sent through the preferred path, and at this time, in the route information block (RIB) corresponding to the preferred path, Delta is 100 ms, a is 1.2 and b is 0.8. After a period of time, the transmission delay between forwarding node 4 and forwarding node 3 increases from 5 ms to 10 ms, and the delay remains at 10 ms for longer than Delta; the token generation rate is then calculated to be 4K/s (the data packet size being 1 KB) from the highest sending rate maxBW of 5 MBps at that moment, which forces the sending rate on the path down to 4 MBps. After 50 ms, gateway 1 finds that the transmission delay from forwarding node 4 to forwarding node 3 has recovered to 5 ms, so gateway 1 stops limiting the rate and transmits the target data stream at the rate at which it enters gateway 1.
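Using only the figures given in this scenario, the rate adjustment can be checked arithmetically. The path is judged congested because the measured delay exceeds a · minDelay, and the new rate and token generation rate follow directly:

\[
\mathrm{Delay} = 10\ \mathrm{ms} > a \cdot \mathrm{minDelay} = 1.2 \times 5\ \mathrm{ms} = 6\ \mathrm{ms},
\]
\[
\mathrm{newBW} = b \cdot \mathrm{maxBW} = 0.8 \times 5\ \mathrm{MBps} = 4\ \mathrm{MBps},
\qquad
\text{token rate} = \frac{\mathrm{newBW}}{\text{packet size}} = \frac{4\ \mathrm{MB/s}}{1\ \mathrm{KB}} = 4\mathrm{K}\ \text{tokens/s}.
\]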
Referring to fig. 8, another embodiment of a transmission method based on an Overlay network in the embodiment of the present application includes:
801. first measurement data sent by the controller is periodically acquired.
802. A data stream is obtained from a sender.
803. And when a first data packet of the target data flow enters the first gateway, calculating at least two Overlay network alternative paths according to the first measurement data.
804. And selecting the alternative path with the minimum total path delay from the alternative paths of the Overlay network as the first path of the target data stream.
805. A first output buffer queue and a first routing information block corresponding to the input buffer queue are established.
806. And determining a first initial generation rate according to the initial sending rate of the first path, and generating a first token for the first path according to the first initial generation rate.
807. And acquiring a corresponding number of data packets from the input buffer queue according to the number of the first tokens.
808. And encapsulating and forwarding the acquired data packet through the first output buffer queue.
809. And acquiring second measurement data on the first path.
810. And calculating the current sending rate, the current path delay and the current information of the first routing information block according to the second measurement data.
811. And updating the first routing information block and forwarding the target data stream.
Steps 801 to 811 are similar to steps 501 to 511, and detailed description thereof is omitted.
812. And judging whether the number of the data packets in the input buffer queue is greater than a preset threshold Th and whether this condition has lasted longer than the congestion judgment duration Delta.
The first gateway judges whether the number of data packets in the input buffer queue is greater than the preset threshold Th and whether this condition has lasted longer than the congestion judgment duration Delta. If the number of data packets in the input buffer queue is greater than or equal to the preset threshold Th and the duration exceeds the congestion judgment duration Delta, step 813 is executed. The threshold Th may be set according to the practical situation and is not limited herein.
813. and selecting one alternative path with the minimum total path delay from the rest alternative paths of the Overlay network as a second path of the target data stream.
If the number of the data packets input into the buffer queue is greater than or equal to a preset threshold Th, the first gateway selects an alternative path with the minimum total path delay from the remaining alternative paths of the Overlay network as a second path of the target data stream, and establishes a second output buffer queue and a second routing information block corresponding to the input buffer queue for the second path.
It should be noted that, in this embodiment of the present application, after the first path is determined to be congested, the sending rate may be left unadjusted and other alternative paths may be added directly to split the traffic; in other words, steps 809 to 811 may be skipped.
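The spillover check of steps 812 and 813 can be sketched as follows in Python. This is only an illustration under stated assumptions: Th, Delta and the path attribute total_delay are hypothetical names standing in for the quantities in the text, and the backlog timer is one simple way to track the duration condition.

```python
# Illustrative sketch: add the lowest-delay remaining alternative path once the
# input buffer queue has stayed at or above Th for longer than Delta seconds.
import time

def maybe_add_second_path(input_queue_len, Th, Delta, state, remaining_paths, active_paths):
    now = time.monotonic()
    if input_queue_len >= Th:
        if state.get('since') is None:
            state['since'] = now                    # backlog just started
        elif now - state['since'] > Delta and remaining_paths:
            # select the remaining alternative path with the smallest total delay
            second = min(remaining_paths, key=lambda p: p.total_delay)
            remaining_paths.remove(second)
            active_paths.append(second)             # start splitting traffic onto it
            state['since'] = None
    else:
        state['since'] = None                       # backlog cleared before Delta elapsed
```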
814. And determining a second initial generation rate according to the initial sending rate of the second path, and generating a second token for the second path according to the second initial generation rate.
815. And acquiring a corresponding number of data packets from the input buffer queue according to the number of the second tokens.
816. And encapsulating and forwarding the acquired data packet through a second output buffer queue.
Steps 814 to 816 are similar to steps 806 to 808, and detailed description thereof is omitted here.
817. And judging whether the number of the data packets input into the buffer queue is greater than or equal to Th.
The first gateway judges whether the number of data packets input into the buffer queue is larger than or equal to Th. If the number of packets in the input buffer queue is greater than or equal to Th, the first gateway performs step 818; if the number of data packets in the input buffer queue is smaller than Th, go to step 820.
818. And continuing to select a new path from the rest of the Overlay network alternative paths to carry the target data stream until the Overlay network alternative paths are exhausted.
If the number of the data packets input into the buffer queue is greater than or equal to Th, the first gateway continues to select a new path from the remaining Overlay network alternative paths to carry the target data stream until the Overlay network alternative paths are exhausted.
819. And judging whether the number of the data packets in the input buffer queue has reached the maximum value that the input buffer queue can buffer, and processing accordingly.
The first gateway judges whether the number of data packets in the input buffer queue has reached the maximum value that the input buffer queue can buffer. If the number of data packets in the input buffer queue has reached the maximum value, the first gateway performs packet-drop processing by using an active queue management (AQM) mechanism, or the first gateway sends a speed reduction instruction to the sending end by using an explicit congestion notification (ECN) mechanism, where the speed reduction instruction is used to instruct the sending end to reduce its sending rate.
820. And stopping generating the second token for the second path so that the target data stream is transmitted from the first path.
And if the number of the data packets input into the buffer queue is less than Th, the first gateway stops generating the second token for the second path, so that the target data stream is transmitted from the first path.
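Steps 817 to 820 together form a simple control loop, which the following Python sketch illustrates. It is not the patented implementation; all names are hypothetical, tail-drop stands in for one possible AQM policy, and the ECN-style sender notification is mentioned only in a comment as the alternative described in the text.

```python
# Illustrative sketch: keep adding alternative paths while the backlog persists,
# fall back to dropping when paths are exhausted and the buffer is full, and
# shed the extra path(s) once the backlog clears.
def rebalance(input_queue, Th, max_buffer, remaining_paths, active_paths):
    if len(input_queue) >= Th:
        if remaining_paths:
            # same selection rule as before: lowest total path delay first
            nxt = min(remaining_paths, key=lambda p: p.total_delay)
            remaining_paths.remove(nxt)
            active_paths.append(nxt)
        elif len(input_queue) >= max_buffer:
            # all alternative paths exhausted and the input buffer is full:
            # tail-drop here as a stand-in for an AQM policy; signalling the
            # sending end to slow down (ECN-style) is the other option in the text
            input_queue.pop()
    elif len(active_paths) > 1:
        # backlog below Th: stop feeding the extra path(s), keep only the first path
        active_paths[:] = active_paths[:1]
    return active_paths
```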
In the embodiment of the application, when a single forwarding path cannot meet the transmission requirement of the data stream, the first gateway can use as many network paths as possible for concurrent transmission, so as to satisfy the QoS requirement of the data stream as far as possible; meanwhile, when the resources of the whole network are insufficient, the rate of the source end can be controlled by using a traditional AQM mechanism, so that no further congestion is caused to the network.
The above process is described below with reference to a specific application scenario provided by the present application, as shown in fig. 9. In this example, a data stream enters gateway 1 and is transmitted to the receiving end after passing through gateway 2. After the first data packet of the data stream enters gateway 1, gateway 1 calculates two Overlay network alternative paths, whose path delays are 10 ms and 20 ms respectively, and gateway 1 selects the first path as the preferred path for transmission. Because the bottleneck bandwidth of the first path is 3 MBps while the user's data flow is 9 MBps, the forwarding rate of the preferred path is limited to below 3 MBps after a period of time; the input buffer queue of gateway 1 then observes that the number of buffered data packets exceeds Th and remains so for the Delta time, at which point the gateway selects the remaining Overlay network alternative path (the second path) for simultaneous transmission. Because the sum of the bandwidths of the two paths still cannot meet the sending requirement of the data stream, gateway 1 starts to drop packets, forcing the sending end to reduce its rate, and the transmission rate of the data stream finally settles close to 8 MBps. In this application scenario, the gateway adjacent to the receiving end (gateway 2) is required to reorder the packets of the same data stream, so as to resolve the out-of-order problem caused by the different transmission delays of the multiple paths.
The transmission method based on the Overlay network in the embodiment of the present application is described above. Referring to fig. 10, the following describes a gateway in the embodiment of the present application. The gateway is a first gateway and performs the related functions and steps of the gateway in fig. 2 to 9. The gateway may specifically include:
a calculating unit 1001, configured to calculate at least two Overlay network alternative paths according to first measurement data when a first data packet of the target data stream enters the first gateway, where the Overlay network alternative paths are used to forward a data packet of the target data stream from the first gateway to the second gateway, and the first measurement data includes information used to indicate transmission delay between any network nodes;
a selecting unit 1002, configured to select, as a first path of the target data stream, an alternative path with a minimum total path delay from the alternative paths of the Overlay network;
a sending unit 1003, configured to send the data packet of the target data flow to the second gateway through the first path;
an obtaining unit 1004, configured to obtain second measurement data on the first path;
a processing unit 1005, configured to adjust a sending rate of the data packet according to the second measurement data.
In this embodiment of the present application, the gateway executes the steps of the above method embodiment, and the target data stream is transmitted through the Overlay path with the minimum path delay, that is, the first path. On the first path, when the network is not continuously congested, the target data stream can be sent as soon as possible to increase the throughput; when the first path has continuous congestion, the first gateway can adjust the sending rate of the target data stream on the first path to a value as large as possible in time on the premise of not causing congestion, so that the transmission performance is improved. The utilization rate of network resources is improved.
Referring to fig. 11, in another embodiment of the gateway in the embodiment of the present application, the gateway is a first gateway, and includes:
a calculating unit 1101, configured to calculate at least two Overlay network alternative paths according to first measurement data when a first packet of the target data stream enters the first gateway, where the Overlay network alternative paths are used to forward a data packet of the target data stream from the first gateway to the second gateway, and the first measurement data includes information used to indicate transmission delay between any network nodes;
a selecting unit 1102, configured to select, as a first path of the target data stream, an alternative path with a minimum total path delay from the alternative paths of the Overlay network;
a sending unit 1103, configured to send the data packet of the target data stream to the second gateway through the first path;
an obtaining unit 1104, configured to obtain second measurement data on the first path;
a processing unit 1105, configured to adjust a sending rate of the data packet according to the second measurement data.
In a possible implementation, the gateway may further include:
the obtaining unit 1104 is further configured to periodically obtain the first measurement data sent by the controller, where the first measurement data includes information indicating a transmission delay between any network nodes.
In a possible implementation, the gateway may further include:
an establishing unit 1106, configured to establish a first output buffer queue and a first routing information block corresponding to the input buffer queue for the first path;
the processing unit 1105 is further configured to determine a first initial generation rate according to the initial sending rate of the first path, and generate a first token for the first path according to the first initial generation rate, where the first token is used to indicate that a data packet arriving at the input buffer queue enters the first output buffer queue.
In a possible implementation, the sending unit 1103 is specifically configured to:
acquiring a corresponding number of data packets from the input buffer queue according to the number of the first tokens;
and encapsulating and forwarding the acquired data packets with the corresponding quantity through the first output buffer queue.
In a possible implementation, the processing unit 1105 is specifically configured to:
and calculating the current sending rate BW and the current path Delay according to the second measurement data, and updating the first routing information block, wherein the first routing information block comprises a highest sending rate maxBW, a lowest path Delay minDelay, a congestion judgment time Delta, a congestion judgment coefficient a and a rate adjustment coefficient b, wherein a is more than 1, and b is less than 1.
In a possible implementation, the processing unit 1105 is further configured to:
if the current sending rate BW is larger than the highest sending rate maxBW, updating the highest sending rate;
if the current path Delay is smaller than the lowest path Delay minDelay, updating the lowest path Delay;
if the current path Delay is greater than the product of the minimum path Delay minDelay and the congestion judgment coefficient a, determining that the first path is congested, and calculating a new sending rate newBW, where the new sending rate newBW is the product of the maximum sending rate maxBW and the rate adjustment coefficient b.
In a possible implementation, the processing unit 1105 is further configured to:
if the current path Delay is greater than the product of the minimum path Delay minDelay and the congestion judgment coefficient a, generating the first token according to a new generation rate, and transmitting the target data stream according to the number of the first tokens, wherein the new generation rate is determined according to the new transmission rate;
and if the current path Delay is less than or equal to the product of the minimum path Delay minDelay and the congestion judgment coefficient a, recovering the initial generation rate of the first path, and sending the target data stream according to the number of the first tokens.
In a possible implementation, the processing unit 1105 is further configured to:
judging whether the number of the data packets of the input buffer queue is greater than a preset threshold Th or not, and the duration exceeds the congestion judgment time Delta;
if the number of the data packets of the input buffer queue is greater than or equal to a preset threshold Th and the duration exceeds the congestion judgment time Delta, selecting an alternative path with the minimum total path delay from the rest alternative paths of the Overlay network as a second path of the target data stream;
and sending the data message of the target data flow to the second gateway through the first path and the second path.
In a possible implementation, the gateway may further include:
the establishing unit 1106 is further configured to establish a second output buffer queue and a second routing information block corresponding to the input buffer queue for the second path;
the processing unit 1105 is further configured to determine a second initial generating rate according to the initial sending rate of the second path, and generate a second token for the second path according to the second initial generating rate, where the second token is used to indicate that the data packet arriving at the input buffer queue enters the second output buffer queue, and the initial sending rate of the second path is a difference value between the sending rate of the sending end and the initial sending rate of the first path.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (23)

1. A transmission method based on an Overlay network, the Overlay network including a first gateway, a forwarding node and a second gateway, wherein the first gateway receives a data packet of a target data stream sent from a sending end, and forwards the data packet to the second gateway through the forwarding node, the transmission method comprising:
when a first data packet of the target data stream enters the first gateway, the first gateway calculates at least two Overlay network alternative paths according to first measurement data, the Overlay network alternative paths are used for forwarding a data packet of the target data stream from the first gateway to the second gateway, and the first measurement data comprise information used for indicating transmission delay between any network nodes;
the first gateway selects an alternative path with the minimum total path delay from the alternative paths of the Overlay network as a first path of the target data stream;
the first gateway sends the data message of the target data stream to the second gateway through the first path;
the first gateway acquires second measurement data on the first path;
the first gateway adjusts the sending rate of the data message according to the second measurement data;
after the first gateway adjusts the sending rate of the data message according to the second measurement data, the first gateway judges whether the number of data packets input into a buffer queue is greater than a preset threshold Th or not, and the duration exceeds the congestion judgment duration Delta;
if the number of the data packets of the input buffer queue is greater than or equal to a preset threshold Th and the duration exceeds the congestion judgment time Delta, the first gateway selects an alternative path with the minimum total path delay from the remaining alternative paths of the Overlay network as a second path of the target data stream;
and the first gateway sends the data message of the target data stream to the second gateway through the first path and the second path.
2. The transmission method according to claim 1, wherein the Overlay network further comprises a controller, and before the first gateway calculates at least two Overlay network alternative paths according to the first measurement data, the method further comprises:
the first gateway periodically acquires the first measurement data sent by the controller.
3. The transmission method according to claim 1 or 2, wherein before the first gateway sends the datagram of the target data flow to the second gateway through the first path, the method further comprises:
the first gateway establishes a first output buffer queue and a first routing information block corresponding to the input buffer queue for the first path;
and the first gateway determines a first initial generation rate according to the initial sending rate of the first path, and generates a first token for the first path according to the first initial generation rate, wherein the first token is used for indicating a data packet arriving at the input buffer queue to enter the first output buffer queue.
4. The transmission method according to claim 3, wherein the sending, by the first gateway, the data packet of the target data flow to the second gateway through the first path includes:
the first gateway obtains a corresponding number of data packets from the input buffer queue according to the number of the first tokens;
and the first gateway encapsulates and forwards the acquired data packets with the corresponding quantity through the first output buffer queue.
5. The transmission method according to claim 3, wherein after the first gateway acquires the second measurement data on the first path, the method further comprises:
and the first gateway calculates the current sending rate BW and the current path Delay according to the second measurement data, and updates the first routing information block, wherein the first routing information block comprises a highest sending rate maxBW, a lowest path Delay minDelay, a congestion judgment time Delta, a congestion judgment coefficient a and a rate adjustment coefficient b, wherein a is more than 1, and b is less than 1.
6. The transmission method according to claim 5, wherein the first gateway calculates a current sending rate BW and a path Delay according to the second measurement data, and updating the first routing information block comprises:
if the current sending rate BW is greater than the highest sending rate maxBW, the first gateway updates the highest sending rate;
if the current path Delay is smaller than the lowest path Delay minDelay, the first gateway updates the lowest path Delay;
if the current path Delay is greater than the product of the minimum path Delay minDelay and the congestion judgment coefficient a, the first gateway determines that the first path is congested and calculates a new sending rate newBW, where the new sending rate newBW is the product of the maximum sending rate maxBW and the rate adjustment coefficient b.
7. The transmission method according to claim 6, characterized in that the method further comprises:
if the current path Delay is greater than the product of the minimum path Delay minDelay and the congestion judgment coefficient a, the first gateway generates the first token according to a new generation rate, and transmits the target data stream according to the number of the first tokens, wherein the new generation rate is determined according to the new transmission rate;
if the current path Delay is less than or equal to the product of the minimum path Delay minDelay and the congestion judgment coefficient a, the first gateway recovers the initial generation rate of the first path and sends the target data stream according to the number of the first tokens.
8. The transmission method according to any one of claims 1, 2 and 4 to 7, wherein before the first gateway sends the data packets of the target data flow to the second gateway through the first path and the second path, the method further comprises:
the first gateway establishes a second output buffer queue and a second routing information block corresponding to the input buffer queue for the second path;
and the first gateway determines a second initial generation rate according to the initial sending rate of the second path, and generates a second token for the second path according to the second initial generation rate, wherein the second token is used for indicating the data message reaching the input buffer queue to enter the second output buffer queue, and the initial sending rate of the second path is the difference value of the sending rate of the sending end and the initial sending rate of the first path.
9. The transmission method according to claim 8, wherein after the first gateway determines a second initial generation rate according to an initial sending rate of the second path and generates a second token for the second path according to the second initial generation rate, the method further comprises:
if the number of the data packets of the input buffer queue is smaller than the Th, the first gateway stops generating the second token for the second path, so that the target data stream is transmitted from the first path;
if the number of the data packets in the input buffer queue is still greater than or equal to Th, the first gateway continues to select a new path from the remaining Overlay network alternative paths to carry the target data stream until the Overlay network alternative paths are exhausted.
10. The transmission method according to claim 9, wherein after the first gateway continues to select a new path among the remaining Overlay network alternative paths to carry the target data stream, the method further comprises:
the first gateway judges whether the number of the data packets of the input buffer queue reaches the maximum value which can be cached by the input buffer queue;
if the number of the data packets of the input buffer queue reaches the maximum value, the first gateway performs packet loss processing by using active queue management (AQM) mechanisms, or the first gateway sends a speed reduction instruction to the sending end by using an explicit congestion notification (ECN) mechanism, wherein the speed reduction instruction is used for instructing the sending end to reduce its sending rate.
11. A gateway, wherein the gateway is a first gateway, comprising:
a calculating unit, configured to calculate at least two Overlay network alternative paths according to first measurement data when a first data packet of a target data stream enters the first gateway, where the Overlay network alternative paths are used to forward a data packet of the target data stream from the first gateway to a second gateway, and the first measurement data includes information used to indicate transmission delay between any network nodes;
a selecting unit, configured to select, as a first path of the target data stream, an alternative path with a minimum total path delay from the alternative paths of the Overlay network;
a sending unit, configured to send the data packet of the target data flow to the second gateway through the first path;
an obtaining unit, configured to obtain second measurement data on the first path;
the processing unit is used for adjusting the sending rate of the data message according to the second measurement data;
the processing unit is further to:
judging whether the number of data packets input into the buffer queue is greater than a preset threshold Th or not, and judging whether the duration exceeds a congestion judgment duration Delta or not;
if the number of the data packets of the input buffer queue is greater than or equal to a preset threshold Th and the duration exceeds the congestion judgment time Delta, selecting an alternative path with the minimum total path delay from the rest alternative paths of the Overlay network as a second path of the target data stream;
and sending the data message of the target data flow to the second gateway through the first path and the second path.
12. The gateway according to claim 11, comprising:
the acquiring unit is further configured to periodically acquire the first measurement data sent by the controller.
13. The gateway of claim 12, further comprising:
the establishing unit is used for establishing a first output buffer queue and a first routing information block corresponding to the input buffer queue for the first path;
the processing unit is further configured to determine a first initial generation rate according to the initial sending rate of the first path, and generate a first token for the first path according to the first initial generation rate, where the first token is used to indicate that a packet arriving at the input buffer queue enters the first output buffer queue.
14. The gateway according to claim 13, wherein the sending unit is specifically configured to:
acquiring a corresponding number of data packets from the input buffer queue according to the number of the first tokens;
and encapsulating and forwarding the acquired data packets with the corresponding quantity through the first output buffer queue.
15. The gateway according to claim 13, wherein the processing unit is specifically configured to:
and calculating the current sending rate BW and the current path Delay according to the second measurement data, and updating the first routing information block, wherein the first routing information block comprises a highest sending rate maxBW, a lowest path Delay minDelay, a congestion judgment time Delta, a congestion judgment coefficient a and a rate adjustment coefficient b, wherein a is more than 1, and b is less than 1.
16. The gateway of claim 15, wherein the processing unit is further configured to:
if the current sending rate BW is larger than the highest sending rate maxBW, updating the highest sending rate;
if the current path Delay is smaller than the lowest path Delay minDelay, updating the lowest path Delay;
if the current path Delay is greater than the product of the minimum path Delay minDelay and the congestion judgment coefficient a, determining that the first path is congested, and calculating a new sending rate newBW, where the new sending rate newBW is the product of the maximum sending rate maxBW and the rate adjustment coefficient b.
17. The gateway of claim 16, wherein the processing unit is further configured to:
if the current path Delay is greater than the product of the minimum path Delay minDelay and the congestion judgment coefficient a, generating the first token according to a new generation rate, and transmitting the target data stream according to the number of the first tokens, wherein the new generation rate is determined according to the new transmission rate;
and if the current path Delay is less than or equal to the product of the minimum path Delay minDelay and the congestion judgment coefficient a, recovering the initial generation rate of the first path, and sending the target data stream according to the number of the first tokens.
18. A gateway according to any of claims 13 to 17, comprising:
the establishing unit is further configured to establish a second output buffer queue and a second routing information block corresponding to the input buffer queue for the second path;
the processing unit is further configured to determine a second initial generation rate according to the initial sending rate of the second path, and generate a second token for the second path according to the second initial generation rate, where the second token is used to indicate that the data packet arriving at the input buffer queue enters the second output buffer queue, and the initial sending rate of the second path is a difference value between the sending rate of the sending end and the initial sending rate of the first path.
19. The gateway of claim 18, wherein the processing unit is further configured to:
if the number of the data packets of the input buffer queue is smaller than the Th, stopping generating the second token for the second path, so that the target data stream is transmitted from the first path;
if the number of the data packets of the input buffer queue is still greater than or equal to Th, continuing to select a new path from the remaining Overlay network alternative paths to carry the target data stream until the Overlay network alternative paths are exhausted.
20. The gateway of claim 19, wherein the processing unit is further configured to:
judging whether the number of the data packets of the input buffer queue reaches the maximum value which can be cached by the input buffer queue;
and if the number of the data packets of the input buffer queue reaches the maximum value, performing packet loss processing by using active queue management (AQM) mechanisms, or sending a speed reduction instruction to the sending end by using an explicit congestion notification (ECN) mechanism, wherein the speed reduction instruction is used for instructing the sending end to reduce its sending rate.
21. A gateway, comprising: a memory, a transceiver, and at least one processor, the memory having instructions stored therein, the memory, the transceiver, and the at least one processor interconnected by a line, the at least one processor invoking the instructions to perform the method of any of claims 1-10.
22. A transmission system based on an Overlay network, comprising:
a gateway, a forwarding node and a controller;
the gateway is configured to perform the method of any one of claims 1-10;
the gateway is also used for measuring the transmission delay between adjacent network nodes;
the forwarding node is used for realizing the forwarding of the data message and the measurement of the transmission delay between adjacent network nodes;
the controller is used for collecting and distributing measurement data of any network node.
23. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any one of claims 1-10.
CN201711062264.XA 2017-10-31 2017-10-31 Transmission method, gateway and transmission system based on Overlay network Active CN108092888B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711062264.XA CN108092888B (en) 2017-10-31 2017-10-31 Transmission method, gateway and transmission system based on Overlay network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711062264.XA CN108092888B (en) 2017-10-31 2017-10-31 Transmission method, gateway and transmission system based on Overlay network

Publications (2)

Publication Number Publication Date
CN108092888A CN108092888A (en) 2018-05-29
CN108092888B true CN108092888B (en) 2021-06-08

Family

ID=62171983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711062264.XA Active CN108092888B (en) 2017-10-31 2017-10-31 Transmission method, gateway and transmission system based on Overlay network

Country Status (1)

Country Link
CN (1) CN108092888B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112714072B (en) * 2019-10-25 2024-06-07 华为技术有限公司 Method and device for adjusting sending rate
CN110868747B (en) * 2019-11-28 2020-09-25 上海商米科技集团股份有限公司 Method for detecting delay and automatically switching multiple network modes
CN113132262B (en) * 2020-01-15 2024-05-03 阿里巴巴集团控股有限公司 Data stream processing and classifying method, device and system
CN114024856A (en) * 2020-07-17 2022-02-08 中兴通讯股份有限公司 Route optimization method, physical network device and computer readable storage medium
CN114205279B (en) * 2020-09-01 2023-07-21 中国移动通信有限公司研究院 Path selection policy configuration method, path selection device and storage medium
CN116866249A (en) * 2020-09-08 2023-10-10 超聚变数字技术有限公司 Communication system, data processing method and related equipment
CN112511452A (en) * 2020-12-15 2021-03-16 安徽皖通邮电股份有限公司 Method and system for automatically adjusting rate of L2VPN
CN112511456B (en) * 2020-12-21 2024-03-22 北京百度网讯科技有限公司 Flow control method, apparatus, device, storage medium, and computer program product
CN114697240A (en) * 2020-12-31 2022-07-01 华为技术有限公司 Data transmission method, device, system and storage medium
CN113542371B (en) * 2021-06-29 2022-05-17 西南大学 Resource scheduling method and system based on edge gateway
CN114006856B (en) * 2021-12-30 2022-03-11 北京天维信通科技有限公司 Network processing method for realizing multi-path concurrent transmission based on HASH algorithm
CN115987794B (en) * 2023-03-17 2023-05-12 深圳互联先锋科技有限公司 Intelligent shunting method based on SD-WAN
CN115996195B (en) * 2023-03-23 2023-05-30 腾讯科技(深圳)有限公司 Data transmission method, device, equipment and medium
CN116614445B (en) * 2023-07-20 2023-10-20 苏州仰思坪半导体有限公司 Data transmission method and related device thereof
CN117041140B (en) * 2023-10-10 2024-01-30 腾讯科技(深圳)有限公司 Data message transmission method, related device, equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106851441A (en) * 2017-01-13 2017-06-13 中国人民武装警察部队工程大学 The safe light path of multi-area optical network based on layering PCE sets up agreement

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9357472B2 (en) * 2010-04-27 2016-05-31 International Business Machines Corporation Adaptive wireless sensor network and method of routing data in a wireless sensor network
CN102404187A (en) * 2010-09-13 2012-04-04 华为技术有限公司 Congestion control method and system as well as network equipment
CN104253749B (en) * 2014-09-18 2018-04-13 华南理工大学 A kind of user terminal distribution route computational methods based on software defined network framework
CN105591912B (en) * 2015-07-21 2019-08-06 新华三技术有限公司 A kind of selection method and device of forward-path
CN106713141B (en) * 2015-11-18 2020-04-28 华为技术有限公司 Method and network node for obtaining a target transmission path

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106851441A (en) * 2017-01-13 2017-06-13 中国人民武装警察部队工程大学 The safe light path of multi-area optical network based on layering PCE sets up agreement

Also Published As

Publication number Publication date
CN108092888A (en) 2018-05-29

Similar Documents

Publication Publication Date Title
CN108092888B (en) Transmission method, gateway and transmission system based on Overlay network
US11316795B2 (en) Network flow control method and network device
Rozhnova et al. An effective hop-by-hop interest shaping mechanism for ccn communications
EP3942758A1 (en) System and method for facilitating global fairness in a network
JP2020502948A (en) Packet transmission system and method
WO2020022209A1 (en) Network control device and network control method
WO2019157867A1 (en) Method for controlling traffic in packet network, and device
CN110890994B (en) Method, device and system for determining message forwarding path
US20180367464A1 (en) Method And Apparatus For Managing Network Congestion
CN113746748B (en) Explicit congestion control method in named data network
EP1698114A1 (en) Method and arrangement for adapting to variations in an available bandwidth to a local network
US12010025B2 (en) System and method for accelerating or decelerating a data transport network protocol based on real time transport network congestion conditions
US8264957B1 (en) Control of preemption-based beat-down effect
EP3278500B1 (en) Processing data items in a communications network
Kato et al. A congestion control method for named data networking with hop-by-hop window-based approach
JP2006197473A (en) Node
CN101438540A (en) Call admission control method
CN114095434A (en) Method and related apparatus for controlling network congestion
JP2010130329A (en) Communication apparatus and relay apparatus
Thibaud et al. An analysis of NDN Congestion Control challenges
Alparslan et al. AIMD-based online MPLS traffic engineering for TCP flows via distributed multi-path routing
Şimşek et al. A new packet scheduling algorithm for real-time multimedia streaming
Nguyen et al. Cache the Queues: Caching and Forwarding in ICN from a Congestion Control Perspective
CN114868370A (en) Method and system for reducing network latency
Gopalakrishnan et al. Robust router congestion control using acceptance and departure rate measures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant