WO2018133496A1 - A packet load sharing method and network device - Google Patents

A packet load sharing method and network device

Info

Publication number
WO2018133496A1
Authority
WO
WIPO (PCT)
Prior art keywords
tunnel
binding
network device
cache
reordering
Prior art date
Application number
PCT/CN2017/109132
Other languages
English (en)
French (fr)
Inventor
陈李昊
张民贵
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP3562108B1 (application EP17892794.3A)
Publication of WO2018133496A1
Priority to US10999210B2 (application US 16/517,224)

Classifications

    • H04L 47/125: Avoiding congestion; recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 47/6255: Queue scheduling characterised by scheduling criteria for service slots or service orders, based on queue load conditions, e.g. longest queue first
    • H04L 12/4633: Interconnection of networks using encapsulation techniques, e.g. tunneling
    • H04L 45/745: Address table lookup; address filtering
    • H04L 47/20: Traffic policing
    • H04L 47/2466: Traffic characterised by specific attributes, e.g. priority or QoS, using signalling traffic
    • H04L 47/34: Flow control; congestion control; ensuring sequence integrity, e.g. using sequence numbers
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/624: Altering the ordering of packets in an individual queue
    • H04L 2212/00: Encapsulation of packets
    • H04L 45/245: Link aggregation, e.g. trunking

Definitions

  • the present application relates to the field of communications technologies, and in particular, to a load sharing method and a network device.
  • Hybrid Access (HA) network refers to bundling different access network connections for use by the same user.
  • Hybrid access networks enable users to experience high-speed networks.
  • two different access networks are Digital Subscriber Line (DSL) and Long Term Evolution (LTE).
  • Abbreviations used herein: DSL (Digital Subscriber Line), LTE (Long Term Evolution), GRE (Generic Routing Encapsulation).
  • a GRE tunnel is established on each of the two Wide Area Network (WAN) interfaces (for example, the DSL WAN interface and the LTE WAN interface), and the two tunnels are then bound into one uplink access channel.
  • packets exchanged between the carrier-side network device and the user-side network device are encapsulated in GRE packet format on the user-side network device (for example, a Home Gateway (HG)) and on the carrier-side network device (for example, a Hybrid Access Aggregation Point (HAAP)).
  • the carrier-side network device bundles and connects the different access networks to provide high-speed Internet access for users; the user-side network device allows two different access networks to be accessed simultaneously, for example, a fixed broadband network and a mobile network.
  • load balancing is implemented based on a token bucket in a hybrid access network.
  • take DSL and LTE access networks as an example.
  • two tunnels are established between the HAAP and the HG, namely a DSL tunnel and an LTE tunnel.
  • the sending end uses a coloring mechanism to determine the color of each packet according to the bandwidths of the DSL tunnel and the LTE tunnel, and decides according to the color whether the packet is sent along the DSL tunnel or the LTE tunnel.
  • specifically, the sender maintains two token buckets: a DSL token bucket (shown with a left slash in Figure 1) and an LTE token bucket (shown with a right slash in Figure 1). The sizes of the two token buckets are determined according to the DSL and LTE tunnel bandwidths.
  • packets entering the DSL token bucket are marked green (shown with a left slash in Figure 1); packets exceeding the DSL token bucket's capacity enter the LTE token bucket and are marked yellow (shown with a right slash in Figure 1). Finally, green packets are sent along the DSL tunnel and yellow packets are sent along the LTE tunnel.
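  • The dual-token-bucket coloring just described can be pictured with a minimal sketch. Everything below (class names, rates, burst sizes, and the handling of packets that exceed both buckets) is an illustrative assumption, not the text of the prior-art scheme:

```python
import time

class TokenBucket:
    """Simple token bucket: refills at rate_bps bits per second up to burst_bits."""
    def __init__(self, rate_bps, burst_bits):
        self.rate_bps = rate_bps
        self.burst_bits = burst_bits
        self.tokens = burst_bits
        self.last = time.monotonic()

    def consume(self, size_bits):
        now = time.monotonic()
        self.tokens = min(self.burst_bits,
                          self.tokens + (now - self.last) * self.rate_bps)
        self.last = now
        if self.tokens >= size_bits:
            self.tokens -= size_bits
            return True
        return False

# Bucket sizes follow the subscribed DSL / LTE tunnel bandwidths (values assumed).
dsl_bucket = TokenBucket(rate_bps=50e6, burst_bits=1e6)    # "green" traffic -> DSL tunnel
lte_bucket = TokenBucket(rate_bps=100e6, burst_bits=2e6)   # "yellow" traffic -> LTE tunnel

def color_and_pick_tunnel(packet_len_bytes):
    """Packets within the DSL bucket are colored green and sent on the DSL tunnel;
    overflow packets fall into the LTE bucket, are colored yellow and sent on LTE."""
    bits = packet_len_bytes * 8
    if dsl_bucket.consume(bits):
        return "green", "DSL"
    if lte_bucket.consume(bits):
        return "yellow", "LTE"
    return None, None  # exceeds both buckets; handling is not described here
```

Note that the bucket sizes are fixed by the subscribed bandwidths, which is exactly why this scheme cannot react to congestion on the DSL tunnel, as explained next.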
  • the token-bucket-based load sharing mechanism cannot dynamically adjust the load sharing ratio of the traffic according to the actual conditions of the DSL tunnel and the LTE tunnel, so the transmission efficiency of the binding tunnel is reduced. For example, when the DSL tunnel is congested and delayed but its throughput has not yet reached its subscribed bandwidth, data packets are still marked green and transmitted over the DSL tunnel despite its poor condition. The available bandwidth of the LTE tunnel is then not used, the LTE tunnel sits idle while the delay of the DSL tunnel keeps increasing, the overall throughput of the HA network is poor, and the DSL tunnel is prone to further congestion.
  • moreover, packets arriving over the LTE tunnel must wait in the reordering buffer for the delayed packets transmitted over the DSL tunnel before they can be delivered in order, so a large number of packets accumulate in the reordering buffer and the overall throughput is reduced; the binding tunnel throughput may even be worse than that of a single DSL tunnel. If the congestion is severe, the reordering buffer may overflow, packets are discarded, and application-layer retransmission is triggered.
  • the present application provides a load sharing method and a network device, which are used to improve the transmission efficiency of a bonded tunnel in a hybrid access network.
  • the present application provides a method of load sharing.
  • the method is applied to a first network device.
  • a first tunnel and a second tunnel are established between the first network device and the second network device, and the first tunnel and the second tunnel form a binding tunnel through hybrid port binding (Hybrid bonding).
  • the second network device includes a binding tunnel reordering buffer, and the binding tunnel reordering buffer is used to sort packets that enter the binding tunnel reordering buffer.
  • the first network device may be, for example, an HG device in a hybrid access network, and the second network device may be, for example, a HAAP device in the hybrid access network; alternatively, the first network device may be a HAAP device in the hybrid access network and the second network device an HG device in the hybrid access network.
  • the first network device sends a plurality of data packets to the second network device and receives an acknowledgment response sent by the second network device; specifically, an acknowledgment response may be sent for each of the multiple data packets, for every several packets, or at other set intervals.
  • the acknowledgment response can be considered as an acknowledgment response of the second network device for the plurality of data packets.
  • the first network device determines, according to the acknowledgment response, the usage of the cache space of the binding tunnel reordering cache, and performs load sharing between the first tunnel and the second tunnel, according to the usage of the cache space of the binding tunnel reordering cache and a set load sharing policy, for the packets transmitted by the first network device to the second network device.
  • the first network device sends the multiple data packets to the second network device by using the first tunnel.
  • the first network device sends the multiple data packets to the second network device by using the second tunnel.
  • the first network device sends, by using the first tunnel, a first part of the data packets to the second network device, and sends, by using the second tunnel, a second part of the data packets to the second network device.
  • the usage of the cache space of the binding tunnel reordering cache may include the size of the used cache space in the binding tunnel reordering cache or the size of the available cache space in the binding tunnel reordering cache.
  • the first network device determines, according to the acknowledgment response, the usage of the cache space of the binding tunnel reordering cache, which specifically includes:
  • the first network device determines, according to the acknowledgment response, the number F1 of packets in the first tunnel that have not completed correct sorting and the number F2 of packets in the second tunnel that have not completed correct sorting, determines from these the number of packets queued in the binding tunnel reordering cache, and thereby determines the usage of the cache space of the binding tunnel reordering cache.
  • the second network device further includes a first tunnel reordering buffer and a second tunnel reordering buffer, where the first tunnel reordering buffer is used to sort the packets transmitted through the first tunnel, and the second tunnel reordering buffer is used to sort the packets transmitted through the second tunnel.
  • the first network device receives a first acknowledgment response sent by the second network device for a first data packet in the first partial data packets, and determines, according to the first acknowledgment response, the number of packets that have entered the first tunnel reordering buffer and completed correct sorting, and the number M of packets that have entered the binding tunnel reordering buffer and completed correct sorting.
  • the first network device receives a second acknowledgment response sent by the second network device for a second data packet in the second partial data packets, and determines, according to the second acknowledgment response, the number of packets that have entered the second tunnel reordering buffer and completed correct sorting, and the number N of packets that have entered the binding tunnel reordering buffer and completed correct sorting.
  • the first network device obtains the number of packets in the binding tunnel reordering cache that have not completed correct sorting according to the larger of M and N and the number of the multiple data packets sent by the first network device.
  • in addition, the first network device obtains the number F2 of packets in the second tunnel that have not completed correct sorting according to the number of second partial data packets sent to the second network device and the number, determined from the second acknowledgment response, of packets that have entered the second tunnel reordering buffer and completed correct sorting; the bookkeeping is sketched below.
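  • As a worked illustration of this bookkeeping (the function names and the simple subtraction model below are assumptions of this sketch, not wording from the application):

```python
def unsorted_in_tunnel(sent_on_tunnel, sorted_in_tunnel):
    """F_i: packets sent on one member tunnel that have not yet completed correct
    sorting in that tunnel's reordering buffer, derived from the per-tunnel
    acknowledgment sequence number."""
    return sent_on_tunnel - sorted_in_tunnel

def unsorted_in_binding_tunnel(total_sent, M, N):
    """Packets that have not yet completed correct sorting in the binding tunnel,
    estimated from the larger of M and N (packets already correctly sorted in the
    binding tunnel reordering cache, as reported in the two acknowledgment responses)."""
    return total_sent - max(M, N)

# Example: 100 packets sent in total, 60 over the first tunnel and 40 over the second.
F1 = unsorted_in_tunnel(sent_on_tunnel=60, sorted_in_tunnel=55)   # 60 - 55 = 5
F2 = unsorted_in_tunnel(sent_on_tunnel=40, sorted_in_tunnel=38)   # 40 - 38 = 2
backlog = unsorted_in_binding_tunnel(total_sent=100, M=90, N=88)  # 100 - 90 = 10
```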
  • each packet in the first partial data packets includes a binding tunnel sequence number of the packet and a first tunnel sequence number; the first tunnel sequence number is used to indicate the transmission sequence of that packet in the first tunnel, and the binding tunnel sequence number included in the packet is used to indicate the transmission sequence of the packet in the binding tunnel.
  • each packet in the second partial data packets includes a binding tunnel sequence number of the packet and a second tunnel sequence number; the second tunnel sequence number is used to indicate the transmission sequence of that packet in the second tunnel, and the binding tunnel sequence number included in the packet is used to indicate the transmission sequence of the packet in the binding tunnel.
  • the first acknowledgement response includes a first tunnel acknowledgement sequence number and a binding tunnel acknowledgement sequence number; and the second acknowledgement response includes a second tunnel acknowledgement sequence number and a binding tunnel acknowledgement sequence number.
  • the first network device determines, according to the first tunnel acknowledgment sequence number, the number of packets that have entered the first tunnel reordering buffer and completed correct sorting, and determines, according to the binding tunnel acknowledgment sequence number included in the first acknowledgment response, the number M of packets that have entered the binding tunnel reordering buffer and completed correct sorting.
  • the first network device determines, according to the second tunnel acknowledgment sequence number, the number of packets that have entered the second tunnel reordering buffer and completed correct sorting, and determines, according to the binding tunnel acknowledgment sequence number included in the second acknowledgment response, the number N of packets that have entered the binding tunnel reordering buffer and completed correct sorting.
  • in this way, by determining the number of packets in the binding tunnel reordering cache, the size of the used cache space in the binding tunnel reordering cache or the size of the available cache space in the binding tunnel reordering cache can be determined, and dynamic load sharing between the first tunnel and the second tunnel is performed for the packets transmitted by the first network device to the second network device according to that size. This can effectively reduce the network delay of the binding tunnel, and/or can significantly suppress overflow of the binding tunnel reordering cache caused by network congestion, which would otherwise result in packet loss and trigger application-layer retransmission.
  • in addition, fields already defined by the existing protocol can be used to carry the packet sequence numbers, so that the number of packets in the reordering caches is determined from the sequence numbers, which reduces the implementation complexity of the method.
  • specifically, each packet in the first partial data packets includes a Sequence Number field for carrying the first tunnel sequence number and a Binding Sequence Number field for carrying the binding tunnel sequence number; each packet in the second partial data packets includes a Sequence Number field for carrying the second tunnel sequence number and a Binding Sequence Number field for carrying the binding tunnel sequence number.
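  • A minimal encoding sketch of the two sequence numbers carried by each packet. The 32-bit widths, byte order, and placement are assumptions for illustration only; the application merely requires that a per-tunnel Sequence Number and a Binding Sequence Number be carried, for example in GRE header fields:

```python
import struct

def pack_seq_numbers(tunnel_seq: int, binding_seq: int) -> bytes:
    """Per-tunnel Sequence Number followed by Binding Sequence Number,
    both encoded as unsigned 32-bit big-endian integers (layout assumed)."""
    return struct.pack("!II", tunnel_seq & 0xFFFFFFFF, binding_seq & 0xFFFFFFFF)

def unpack_seq_numbers(blob: bytes):
    tunnel_seq, binding_seq = struct.unpack("!II", blob[:8])
    return tunnel_seq, binding_seq

# e.g. the 3rd packet sent on the first tunnel, which is the 5th packet of the binding tunnel
hdr = pack_seq_numbers(tunnel_seq=3, binding_seq=5)
assert unpack_seq_numbers(hdr) == (3, 5)
```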
  • in one design, the first acknowledgment response is a Generic Routing Encapsulation (GRE) data packet, the first tunnel acknowledgment sequence number is carried in the Acknowledgment Number field of the GRE data packet, and the binding tunnel acknowledgment sequence number is carried in the Bonding Acknowledgment Number field of the GRE data packet; or the first acknowledgment response is a GRE control packet, and the first tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number are carried in an Attribute Type-Length-Value (Attribute TLV) field of the GRE control packet.
  • similarly, the second acknowledgment response is a GRE data packet, the second tunnel acknowledgment sequence number is carried in the Acknowledgment Number field of the GRE data packet, and the binding tunnel acknowledgment sequence number is carried in the Bonding Acknowledgment Number field of the GRE data packet; or the second acknowledgment response is a GRE control message, and the second tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number are carried in the Attribute TLV field of the GRE control message.
  • the first network device determines the round-trip time (RTT) of the first tunnel according to the time interval between sending a third data packet in the first partial data packets and receiving the acknowledgment response sent by the second network device for the third data packet.
  • likewise, the first network device determines the RTT of the second tunnel according to the time interval between sending a fourth data packet in the second partial data packets and receiving the acknowledgment response sent by the second network device for the fourth data packet.
  • in this way, the RTT of each single tunnel can be determined at the same time, without separately sending probe packets to measure it, which effectively saves network overhead.
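  • Because every acknowledged data packet already carries a per-tunnel sequence number, the per-tunnel RTT can be read off the send and acknowledgment timestamps. A minimal sketch (the smoothing filter and data structures are assumptions):

```python
import time

class RttEstimator:
    """Track the round-trip time of one member tunnel from its data packets and
    their acknowledgment responses; no separate probe packets are needed."""
    def __init__(self):
        self.sent_at = {}   # per-tunnel sequence number -> send timestamp
        self.rtt = None

    def on_send(self, tunnel_seq):
        self.sent_at[tunnel_seq] = time.monotonic()

    def on_ack(self, tunnel_ack_seq):
        t0 = self.sent_at.pop(tunnel_ack_seq, None)
        if t0 is not None:
            sample = time.monotonic() - t0
            # Simple exponential smoothing; the application does not prescribe a filter.
            self.rtt = sample if self.rtt is None else 0.875 * self.rtt + 0.125 * sample
        return self.rtt

dsl_rtt, lte_rtt = RttEstimator(), RttEstimator()   # one estimator per member tunnel
```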
  • when the usage of the cache space of the binding tunnel reordering cache includes the size of the used cache space in the binding tunnel reordering cache, the size of the used cache space includes the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, or the number of used cache slices in the binding tunnel reordering cache.
  • when the usage of the cache space of the binding tunnel reordering cache includes the size of the available cache space in the binding tunnel reordering cache, the size of the available cache space includes the length of the available cache packet queue in the binding tunnel reordering cache or the number of available cache slices in the binding tunnel reordering cache.
  • in one design, the acknowledgment response received by the first network device is a GRE data packet, the GRE data packet includes a Bonding Reorder Buffer Size field, and the Bonding Reorder Buffer Size field carries the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the length of the available cache packet queue in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
  • alternatively, the acknowledgment response received by the first network device is a GRE control message, the GRE control message includes an Attribute TLV field, and the Attribute TLV field includes a type (T) field, a length (L) field, and a value (V) field, where the V field carries the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the length of the available cache packet queue in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
  • by performing dynamic load sharing between the first tunnel and the second tunnel for the packets transmitted by the first network device to the second network device according to the size of the used cache space in the binding tunnel reordering cache or the size of the available cache space in the binding tunnel reordering cache, the network delay of the binding tunnel can be effectively reduced, and/or overflow of the binding tunnel reordering cache caused by network congestion, which would otherwise result in packet loss and trigger application-layer retransmission, can be more obviously suppressed.
  • the set load sharing policy includes:
  • selecting the tunnel with the smaller round-trip time (RTT) of the first tunnel and the second tunnel, or selecting the tunnel with fewer packets that have not completed correct sorting, to transmit the packets sent by the first network device to the second network device.
  • in this way, a sequence of consecutive packets to be sent to the second network device is transmitted according to the set load sharing policy, for example over the tunnel with the smaller RTT or over the single tunnel with fewer packets that have not completed correct sorting, rather than being allocated to a tunnel with a large delay or a large single-tunnel backlog.
  • the present application provides a method for load sharing, which is applied to a second network device.
  • a first tunnel and a second tunnel are established between the second network device and the first network device, the first tunnel and the second tunnel form a binding tunnel through hybrid port binding (Hybrid bonding), and the second network device includes a binding tunnel reordering cache, where the binding tunnel reordering cache is used to sort the packets entering the binding tunnel reordering cache.
  • the first network device may be, for example, an HG device in a hybrid access network, and the second network device may be, for example, a HAAP device in the hybrid access network; alternatively, the first network device may be a HAAP device in the hybrid access network and the second network device an HG device in the hybrid access network.
  • the second network device receives multiple data packets sent by the first network device.
  • the second network device acquires information about the usage of the cache space of the binding tunnel reordering cache.
  • the second network device sends an acknowledgment response to the first network device.
  • the acknowledgment response includes information about the usage of the cache space of the binding tunnel reordering cache.
  • the information is used by the first network device to determine the usage of the cache space of the binding tunnel reordering cache and to perform load sharing between the first tunnel and the second tunnel, according to the usage of the cache space of the binding tunnel reordering cache and a set load sharing policy, for the packets transmitted by the first network device to the second network device.
  • the usage of the cache space of the binding tunnel reordering cache includes the size of the used cache space in the binding tunnel reordering cache or the size of the available cache space in the binding tunnel reordering cache.
  • the second network device further includes a first tunnel reordering buffer and a second tunnel reordering buffer, where the first tunnel reordering buffer is used to sort the packets transmitted through the first tunnel, and the second tunnel reordering buffer is used to sort the packets transmitted through the second tunnel.
  • the second network device receives the first part of the data message in the plurality of data packets sent by the first network device by using the first tunnel.
  • Each packet in the first part of the data packet includes a binding tunnel sequence number of the packet and a first tunnel sequence number.
  • the first tunnel sequence number is used to indicate a transmission sequence of each packet in the first partial data packet in the first tunnel.
  • the binding tunnel sequence number of the packet included in each packet in the first part of the data packet is used to indicate the transmission sequence of the packet in the binding tunnel.
  • the second network device receives a second partial data packet of the plurality of data packets sent by the first network device by using the second tunnel.
  • each packet in the second partial data packets includes a binding tunnel sequence number of the packet and a second tunnel sequence number.
  • the second tunnel sequence number is used to indicate a transmission sequence of each packet in the second partial data packet in the second tunnel.
  • the binding tunnel sequence number of the packet included in each packet in the second part of the data packet is used to indicate the transmission sequence of the packet in the binding tunnel.
  • the acknowledgment response includes a first acknowledgment response of the second network device for the first partial data message and a second acknowledgment response of the second network device for the second partial data message.
  • the second network device obtains the first tunnel sequence number in the most recent packet that has completed correct sorting in the first tunnel reordering cache before the first acknowledgment response is sent, and thereby determines the first tunnel acknowledgment sequence number; it also obtains the most recent binding tunnel sequence number among the packets that have completed correct sorting in the binding tunnel reordering cache before the first acknowledgment response is sent, and thereby determines the binding tunnel acknowledgment sequence number included in the first acknowledgment response.
  • the information about the usage of the cache space of the binding tunnel reordering cache included in the first acknowledgment response includes the first tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number included in the first acknowledgment response.
  • likewise, the second network device obtains the second tunnel sequence number in the most recent packet that has completed correct sorting in the second tunnel reordering cache before the second acknowledgment response is sent, and thereby determines the second tunnel acknowledgment sequence number; it also obtains the most recent binding tunnel sequence number among the packets that have completed correct sorting in the binding tunnel reordering cache before the second acknowledgment response is sent, and thereby determines the binding tunnel acknowledgment sequence number included in the second acknowledgment response.
  • the information about the usage of the cache space of the binding tunnel reordering cache included in the second acknowledgment response includes the second tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number included in the second acknowledgment response. A receiver-side sketch of this acknowledgment generation follows.
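  • A minimal receiver-side sketch of how the two acknowledgment sequence numbers could be produced. The data structures are assumptions; the application only requires reporting the newest correctly sorted per-tunnel and binding tunnel sequence numbers (and, optionally, the buffer occupancy):

```python
class ReorderBuffer:
    """Delivers packets in sequence-number order and remembers the newest
    sequence number that has completed correct sorting (delivered in order)."""
    def __init__(self):
        self.pending = {}          # sequence number -> packet waiting for earlier packets
        self.next_expected = 1
        self.last_in_order = 0     # newest correctly sorted sequence number

    def push(self, seq, packet):
        self.pending[seq] = packet
        delivered = []
        while self.next_expected in self.pending:
            delivered.append(self.pending.pop(self.next_expected))
            self.last_in_order = self.next_expected
            self.next_expected += 1
        return delivered            # packets that can be forwarded in order

    def occupancy(self):
        return len(self.pending)    # packets that have not completed correct sorting

def build_ack(tunnel_buf: ReorderBuffer, binding_buf: ReorderBuffer):
    """Acknowledgment response for one member tunnel: the newest correctly sorted
    per-tunnel sequence number, the newest correctly sorted binding tunnel sequence
    number, and the binding tunnel reordering buffer occupancy."""
    return {
        "tunnel_ack_seq": tunnel_buf.last_in_order,
        "binding_ack_seq": binding_buf.last_in_order,
        "bonding_reorder_buffer_size": binding_buf.occupancy(),
    }
```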
  • the first tunnel acknowledgment sequence number, the second tunnel acknowledgment sequence number, the binding tunnel acknowledgment sequence number included in the first acknowledgment response, and the binding tunnel acknowledgment sequence number included in the second acknowledgment response are used by the first network device to determine the number of packets in the binding tunnel reordering cache, and to determine the usage of the cache space of the binding tunnel reordering cache according to that number of packets.
  • in this way, by determining the number of packets in the binding tunnel reordering cache, the size of the used cache space in the binding tunnel reordering cache or the size of the available cache space in the binding tunnel reordering cache can be determined, and dynamic load sharing between the first tunnel and the second tunnel is performed for the packets transmitted by the first network device to the second network device according to that size. This can effectively reduce the network delay of the binding tunnel, and/or can significantly suppress overflow of the binding tunnel reordering cache caused by network congestion, which would otherwise result in packet loss and trigger application-layer retransmission.
  • in addition, fields already defined by the existing protocol can be used to carry the packet sequence numbers, so that the number of packets in the reordering caches is determined from the sequence numbers, which reduces the implementation complexity of the method.
  • in one design, the first acknowledgment response is a GRE data packet, the first tunnel acknowledgment sequence number is carried in the Acknowledgment Number field of the GRE data packet, and the binding tunnel acknowledgment sequence number is carried in the Bonding Acknowledgment Number field of the GRE data packet.
  • alternatively, the first acknowledgment response is a GRE control message, and the first tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number are carried in the Attribute Type-Length-Value (Attribute TLV) field of the GRE control message.
  • likewise, the second acknowledgment response is a GRE data packet, the second tunnel acknowledgment sequence number is carried in the Acknowledgment Number field of the GRE data packet, and the binding tunnel acknowledgment sequence number is carried in the Bonding Acknowledgment Number field of the GRE data packet.
  • alternatively, the second acknowledgment response is a GRE control message, and the second tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number are carried in the Attribute TLV field of the GRE control message.
  • when the usage of the cache space of the binding tunnel reordering cache includes the size of the used cache space in the binding tunnel reordering cache, the information about the usage of the cache space of the binding tunnel reordering cache includes the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, or the number of used cache slices in the binding tunnel reordering cache.
  • when the usage of the cache space of the binding tunnel reordering cache includes the size of the available cache space in the binding tunnel reordering cache, the information about the usage of the cache space of the binding tunnel reordering cache includes the length of the available cache packet queue in the binding tunnel reordering cache or the number of available cache slices in the binding tunnel reordering cache.
  • in one design, the acknowledgment response sent by the second network device is a Generic Routing Encapsulation (GRE) data packet, the GRE data packet includes a Bonding Reorder Buffer Size field, and the Bonding Reorder Buffer Size field carries the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the length of the available cache packet queue in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
  • alternatively, the acknowledgment response sent by the second network device is a GRE control message, where the GRE control message includes an Attribute TLV field, the Attribute TLV field includes a type (T) field, a length (L) field, and a value (V) field, and the V field carries the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the length of the available cache packet queue in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
  • in this way, dynamic load sharing of packets between the first tunnel and the second tunnel can effectively reduce the network delay of the binding tunnel, and/or can significantly suppress overflow of the binding tunnel reordering cache caused by network congestion, which would otherwise result in packet loss and trigger application-layer retransmission.
  • the embodiment of the present application provides a first network device, configured to perform the method in the first aspect or any possible design of the first aspect.
  • the first network device comprises means for performing the method in the first aspect or any possible design of the first aspect.
  • the embodiment of the present application provides a second network device, configured to perform the method in the second aspect or any possible design of the second aspect.
  • the second network device comprises means for performing the method in the second aspect or any possible design of the second aspect.
  • the embodiment of the present application provides a first network device, where the first network device includes: an input interface, an output interface, a processor, and a memory.
  • the input interface, the output interface, the processor and the memory can be connected by a bus system.
  • the memory is for storing a program, an instruction or a code
  • the processor is configured to execute the program, instruction or code in the memory to complete the method in the first aspect or any possible design of the first aspect.
  • the embodiment of the present application provides a second network device, where the second network device includes: an input interface, an output interface, a processor, and a memory.
  • the input interface, the output interface, the processor and the memory can be connected by a bus system.
  • the memory is configured to store a program, instruction or code, and the processor is configured to execute the program, instruction or code in the memory to perform the method in the second aspect or any possible design of the second aspect.
  • the embodiment of the present application provides a communication system, where the communication system includes the first network device of the third aspect or the fifth aspect, and the second network device of the fourth aspect or the sixth aspect.
  • the embodiment of the present application provides a computer readable storage medium or computer program product for storing a computer program, where the computer program includes instructions for performing the method in the first aspect, the second aspect, any possible design of the first aspect, or any possible design of the second aspect.
  • FIG. 1 is a schematic diagram of implementing load sharing based on a token bucket in the prior art
  • FIG. 2 is a schematic structural diagram of a hybrid access network network according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a load sharing method according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of calculating the number of packets in a binding tunnel reordering cache according to an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of a first network device according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a second network device according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of hardware of a first network device according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of hardware of a second network device according to an embodiment of the present disclosure.
  • the embodiment of the present application can be applied to a hybrid access network, where the hybrid access network includes a first network device and a second network device.
  • a first tunnel and a second tunnel are established between the first network device and the second network device.
  • the first tunnel and the second tunnel form a virtual binding tunnel through a binding connection, which is simply referred to as a binding tunnel in this application.
  • All packets transmitted between the first network device and the second network device are transmitted through the binding tunnel.
  • the all the packets include the packets transmitted by the first tunnel and the second tunnel respectively.
  • the binding tunnel may be, for example, a GRE tunnel, a Point-to-Point Tunneling Protocol (PPTP) tunnel, or a User Datagram Protocol (UDP) tunnel, which is not specifically limited in this application.
  • terminals such as mobile phones, telephones, and laptops can be connected to the HG device directly through a network cable or through a wireless local area network (Wireless Fidelity, WiFi).
  • HG devices can access both DSL and LTE.
  • the HG device sends an LTE tunnel request and a DSL tunnel request to the HAAP device to establish GRE tunnels (shown as an LTE GRE tunnel and a DSL GRE tunnel in FIG. 2), binds the LTE tunnel and the DSL tunnel into a binding tunnel (also called a logical GRE tunnel), accesses the HAAP device, and accesses the public network (such as the Internet) through the HAAP device.
  • the message between the HG device and the HAAP device is encapsulated into a GRE packet format based on the GRE protocol and then forwarded.
  • the first network device may be the HG device, and the second network device is the HAAP device; or the first network device is a HAAP device, and the second network device is an HG device.
  • All packets sent by the first network device are globally numbered by using a binding tunnel sequence number (also referred to as a logical GRE tunnel sequence number).
  • the binding tunnel sequence number is used to indicate the sequence number of all the packets sent by the first network device in the binding tunnel, and is used to indicate the transmission sequence of all the packets in the binding tunnel.
  • the all the packets include the packets transmitted in the DSL tunnel and the packets transmitted in the LTE tunnel.
  • the second network device restores the sequence of all the packets according to the binding tunnel sequence number, thereby implementing a data transmission mechanism of the hybrid access network between the HG and the HAAP.
  • the HG receives the data stream to be sent.
  • the data stream includes six data packets, and each data packet includes a binding tunnel sequence number, which is 1, 2, 3, 4, 5, and 6, respectively.
  • the binding tunnel sequence number is used to identify the transmission sequence of the six data packets in the binding tunnel.
  • the HG (the first network device in this example) uses the DSL tunnel to send the data packets with binding tunnel sequence numbers 1 to 4; when the DSL tunnel has no available bandwidth, load sharing is performed onto the LTE tunnel, which transmits the data packets with binding tunnel sequence numbers 5 and 6.
  • the HAAP device uses the binding tunnel reordering buffer to buffer the packets transmitted through the DSL tunnel and the LTE tunnel, and reorders them according to the binding tunnel sequence number carried in each packet. If the packets with binding tunnel sequence numbers 1, 2, 5, and 6 have entered the binding tunnel reordering buffer while the packets with sequence numbers 3 and 4 have not, because of congestion on the DSL tunnel, then the packets with sequence numbers 1 and 2 are correctly sorted and output to the network, whereas the packets with sequence numbers 5 and 6 cannot yet be correctly sorted; only after the packets with sequence numbers 3 and 4 enter the binding tunnel reordering buffer and are correctly sorted can the remaining packets be output to the network, as replayed in the sketch below.
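  • A small standalone simulation of the scenario above (illustrative only):

```python
def deliver_in_order(arrived, next_expected=1):
    """Return the packets that can be output to the network in binding tunnel
    sequence order, and those that must keep waiting in the reordering buffer."""
    arrived = set(arrived)
    output = []
    while next_expected in arrived:
        output.append(next_expected)
        arrived.remove(next_expected)
        next_expected += 1
    return output, sorted(arrived)

# DSL carries binding tunnel sequence numbers 1-4, LTE carries 5 and 6.
# Packets 3 and 4 are delayed by DSL congestion, so only 1, 2, 5 and 6 have arrived.
output, waiting = deliver_in_order([1, 2, 5, 6])
print(output)   # [1, 2]  -> correctly sorted and output to the network
print(waiting)  # [5, 6]  -> held in the binding tunnel reordering buffer until 3 and 4 arrive
```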
  • the binding tunnel is implemented by binding the first tunnel and the second tunnel between the first network device and the second network device.
  • the binding tunnel may also be formed by binding more than two tunnels between the first network device and the second network device.
  • the application scenario in the hybrid access network shown in FIG. 2 is only an example.
  • the actual hybrid access network may also include other forms of structures, which are not limited in this application.
  • the load sharing method 100 provided in the embodiment of the present application is described in detail below with reference to the hybrid access network architecture shown in FIG. 2.
  • the load sharing method is applied to the first network device and the second network device, where a first tunnel and a second tunnel are established between the first network device and the second network device.
  • the first tunnel and the second tunnel form a binding tunnel through hybrid port binding (Hybrid bonding).
  • the second network device includes a binding tunnel reordering buffer, and the binding tunnel reordering buffer is used to sort packets that enter the binding tunnel reordering buffer.
  • the first network device may be the HG device shown in FIG. 2 and the second network device may be the HAAP device shown in FIG. 2; or the first network device may be the HAAP device shown in FIG. 2 and the second network device may be the HG device shown in FIG. 2.
  • the first tunnel may be the DSL tunnel shown in FIG. 2, and the second tunnel may be the LTE tunnel shown in FIG. 2; or the first tunnel may be the LTE tunnel shown in FIG. 2, correspondingly,
  • the second tunnel may be the DSL tunnel shown in FIG. 2.
  • the load sharing method 100 provided by the embodiment of the present application includes the following parts:
  • the first network device sends multiple data packets to the second network device.
  • the first network device receives the multiple data packets, where the first network device is the HG, and the HG receives multiple data packets from the mobile phone or other terminal device.
  • the mobile phone or other terminal device connects to the HG through a network cable or WiFi, and sends the plurality of data packets to the HG.
  • after receiving the multiple data packets, the first network device performs load sharing according to a configured load sharing policy, for example, a token-bucket-based policy, and sends the multiple data packets to the second network device.
  • the first network device sends the multiple data packets to the second network device by using the first tunnel.
  • the first network device sends the multiple data packets to the second network device by using the second tunnel.
  • the first network device sends, by using the first tunnel, the first part of the plurality of data packets to the second network device; the first network device Transmitting, by the second tunnel, the second part of the plurality of data packets to the second network device.
  • the second network device receives the multiple data packets sent by the first network device.
  • in one specific implementation manner, the second network device receives the multiple data packets through the first tunnel. In another specific implementation manner, the second network device receives the multiple data packets through the second tunnel. In yet another specific implementation manner, the second network device receives, through the first tunnel, the first partial data packets of the multiple data packets sent by the first network device, and receives, through the second tunnel, the second partial data packets of the multiple data packets sent by the first network device.
  • the second network device acquires information about usage of the cache space of the binding tunnel reordering cache.
  • the usage of the cache space of the binding tunnel reordering cache may include, for example, the size of the used cache space in the binding tunnel reordering cache or the size of the available cache space in the binding tunnel reordering cache.
  • when the usage of the cache space of the binding tunnel reordering cache includes the size of the used cache space in the binding tunnel reordering cache, the information about the usage of the cache space of the binding tunnel reordering cache includes the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, or the number of used cache slices in the binding tunnel reordering cache.
  • when the usage of the cache space of the binding tunnel reordering cache includes the size of the available cache space in the binding tunnel reordering cache, the information about the usage of the cache space of the binding tunnel reordering cache includes the length of the available cache packet queue in the binding tunnel reordering cache or the number of available cache slices in the binding tunnel reordering cache.
  • the information about the usage of the buffer space of the binding tunnel reordering cache includes a single tunnel acknowledgement number and a binding tunnel acknowledgement sequence number.
  • the single tunnel in the present application refers to each tunnel in the binding tunnel, such as the first tunnel or the second tunnel in this embodiment.
  • the second network device sends an acknowledgement response to the first network device.
  • the second network device sends an acknowledgment response to the first network device, where the acknowledgment response includes information about the usage of the cache space of the binding tunnel reordering cache, and the information is used by the first network device to determine the usage of the cache space of the binding tunnel reordering cache and to perform load sharing between the first tunnel and the second tunnel, according to the usage of the cache space of the binding tunnel reordering cache and a set load sharing policy, for the packets transmitted by the first network device to the second network device.
  • the set load sharing policy includes, but is not limited to: after the first network device determines that the size of the used cache space in the binding tunnel reordering cache is greater than or equal to a first threshold, or that the size of the available cache space in the binding tunnel reordering cache is less than or equal to a second threshold, selecting the tunnel with the smaller round-trip time (RTT) of the first tunnel and the second tunnel, or the tunnel with fewer packets that have not completed correct sorting, to transmit the packets sent by the first network device to the second network device, as pictured in the sketch below.
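  • A minimal sketch of such a policy (the thresholds and the tie-breaking order are assumptions of this sketch):

```python
def pick_tunnel(used_buf, avail_buf, first_threshold, second_threshold,
                rtt_1, rtt_2, unsorted_1, unsorted_2):
    """Once the binding tunnel reordering cache looks too full (used space >= first
    threshold, or available space <= second threshold), steer new packets to the
    member tunnel with the smaller RTT, or to the tunnel with fewer packets that
    have not completed correct sorting."""
    congested = (used_buf >= first_threshold) or (avail_buf <= second_threshold)
    if not congested:
        return "keep current load sharing ratio"
    if rtt_1 != rtt_2:
        return "first tunnel" if rtt_1 < rtt_2 else "second tunnel"
    return "first tunnel" if unsorted_1 <= unsorted_2 else "second tunnel"
```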
  • each time the second network device receives a message it returns an acknowledgement response to the first network device.
  • the second network device may be configured to periodically return the acknowledgement response to the first network device at a certain time interval.
  • the second network device may further send the acknowledgement response when receiving the request sent by the first network device or reaching a set early warning state.
  • the set early warning state includes, but is not limited to: the size of the used cache space of the binding tunnel reordering cache is greater than or equal to a set threshold, or the size of the available cache space of the binding tunnel reordering cache is less than or equal to a set threshold. This application does not specifically limit this.
  • the first network device receives an acknowledgement response sent by the second network device.
  • the first network device determines, according to the acknowledgment response, a usage of the cache space of the binding tunnel reordering cache.
  • the first network device performs load sharing between the first tunnel and the second tunnel, according to the usage of the cache space of the binding tunnel reordering cache and the set load sharing policy, on the packets transmitted by the first network device to the second network device.
  • for how the packets transmitted by the first network device to the second network device are load-shared between the first tunnel and the second tunnel according to the usage of the cache space of the binding tunnel reordering cache and the set load sharing policy, refer to the following detailed description.
  • the following describes in detail how to send an acknowledgment response in S105 and how to determine the usage of the cache space of the bound tunnel reordering cache according to the acknowledgment response in S106.
  • the first network device receives the acknowledgement response returned by the second network device by using the first tunnel or the second tunnel.
  • in one implementation manner, the acknowledgment response is a Generic Routing Encapsulation (GRE) data packet, the GRE data packet includes a Bonding Reorder Buffer Size field, and the first network device determines the usage of the cache space of the binding tunnel reordering cache according to the content carried in the Bonding Reorder Buffer Size field.
  • the Bonding Reorder Buffer Size field can be carried in the GRE header.
  • the Bonding Reorder Buffer Size field may be, for example, 32 bits.
  • in another implementation manner, the acknowledgment response is a GRE control message, where the GRE control message includes an Attribute Type-Length-Value (Attribute TLV) field, the Attribute TLV field includes a type (T) field, a length (L) field, and a value (V) field, and the first network device determines the usage of the cache space of the binding tunnel reordering cache according to the content carried in the V field.
  • the content carried in the Bonding Reorder Buffer Size field, or in the V field, includes the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the length of the available cache packet queue in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
  • the format of the Attribute TLV field in the GRE control message is described as follows.
  • the Attribute TLV field may be, for example, carried in a GRE Tunnel Notify message.
  • the value of the Attribute Type field may be, for example, 36, which indicates that the space usage of the binding tunnel reordering cache is being returned.
  • the content carried by the attribute value Attribute Value is as described above and will not be described again.
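  • A minimal sketch of encoding and decoding such an attribute (the 2-byte Type and Length widths and the 4-byte Value below are assumptions; only the example Type value 36 and the kinds of content carried follow the description above):

```python
import struct

BONDING_REORDER_BUFFER_USAGE = 36   # example Attribute Type value from the description above

def encode_attribute_tlv(attr_type: int, value: int) -> bytes:
    """Type (2 bytes), Length (2 bytes, length of the value), Value (4 bytes).
    The value is, e.g., the number of packets queued in the binding tunnel
    reordering cache, a queue length in bytes, or a count of cache slices."""
    value_bytes = struct.pack("!I", value)
    return struct.pack("!HH", attr_type, len(value_bytes)) + value_bytes

def decode_attribute_tlv(blob: bytes):
    attr_type, length = struct.unpack("!HH", blob[:4])
    (value,) = struct.unpack("!I", blob[4:4 + length])
    return attr_type, value

tlv = encode_attribute_tlv(BONDING_REORDER_BUFFER_USAGE, 1518)
assert decode_attribute_tlv(tlv) == (36, 1518)
```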
  • the GRE data message or the control message described in the present application and the fields or formats thereof are merely exemplary and do not constitute a limitation of the present invention.
  • those skilled in the art may also use other fields or formats of the GRE data packet or the control packet to carry the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the length of the available cache packet queue in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache; such variations all fall within the scope of this application and are not described again here.
  • in one implementation manner, the first network device determines the usage of the cache space of the binding tunnel reordering cache according to the length of the packet queue in the binding tunnel reordering cache. For example, if the minimum unit of the cache space is 1 byte and the length of the packet queue in the binding tunnel reordering cache is 1518 bytes, the size of the used cache space of the binding tunnel reordering cache is 1518 bytes.
  • subtracting the length of the existing packet queue from the maximum cache queue length configured for the binding tunnel reordering cache gives the size of the available cache space of the binding tunnel reordering cache. In this case, a set number of bytes can be used as the set threshold that serves as the basis for further load sharing, for example:
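  • (The configured maximum queue length and the threshold below are assumed values.)

```python
MAX_QUEUE_BYTES = 64 * 1024        # configured maximum reordering queue length (assumed)
USED_THRESHOLD_BYTES = 48 * 1024   # set threshold for triggering rebalancing (assumed)

used_bytes = 1518                                # packet queue length reported by the peer
available_bytes = MAX_QUEUE_BYTES - used_bytes   # 65536 - 1518 = 64018 bytes
needs_rebalance = used_bytes >= USED_THRESHOLD_BYTES   # False in this example
```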
  • the length of the packet queue in the binding tunnel reordering cache may be directly returned, that is, the size of the buffer space used by the binding tunnel reordering cache is returned.
  • the first network device may directly use the size of the used cache space of the binding tunnel reordering cache as the basis for determining the further load sharing policy.
  • alternatively, the first network device may obtain, from the size of the used cache space of the binding tunnel reordering cache, the size of the available cache space of the binding tunnel reordering cache, as the basis for determining the further load sharing policy.
  • the size of the available cache space of the binding tunnel reordering cache may also be returned directly, that is, the length of the available cache packet queue in the binding tunnel reordering cache is returned.
  • the first network device may then directly use the size of the available cache space of the binding tunnel reordering cache, or derive from it the size of the used cache space of the binding tunnel reordering cache, as the basis for determining the further load sharing policy.
  • in another implementation manner, the first network device may determine the usage of the cache space of the binding tunnel reordering cache according to the number of used or available cache slices in the binding tunnel reordering cache. For example, the smallest unit of the cache space is a cache slice; the cache resources in the binding tunnel reordering cache are divided into multiple slices, and each slice can have a fixed size, for example, 256 bytes.
  • each packet is assigned one or more slices according to the length of each packet. Taking a single cache slice size of 256 bytes as an example, when the length of the packet queue in the binding tunnel reordering cache is 1518 bytes, the size of the used cache space of the bound tunnel reordering cache is 6 cache slices.
  • the size of the available cache space of the bound tunnel reordering cache is 14 cache slices.
  • in this case, a set number of cache slices can be used as the set threshold; the slice arithmetic is worked out in the sketch below.
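  • (The total of 20 slices below is implied by the 6 used and 14 available slices in the example; it is an assumption of this sketch.)

```python
import math

SLICE_BYTES = 256
TOTAL_SLICES = 20        # implied by the example: 6 used + 14 available

def slices_for(queue_bytes: int) -> int:
    """Each packet is given one or more fixed-size cache slices."""
    return math.ceil(queue_bytes / SLICE_BYTES)

used_slices = slices_for(1518)                 # ceil(1518 / 256) = 6
available_slices = TOTAL_SLICES - used_slices  # 20 - 6 = 14
```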
• the number of used cache slices in the binding tunnel reordering cache may be returned directly, that is, the size of the used buffer space of the binding tunnel reordering cache is returned.
• the first network device may directly use the size of the used buffer space of the binding tunnel reordering cache as the basis for determining the further load sharing policy.
• the first network device may also derive the size of the available buffer space of the binding tunnel reordering cache from the size of the used buffer space, and use it as the basis for determining the further load sharing policy.
• alternatively, the size of the available buffer space of the binding tunnel reordering cache may be returned directly, that is, the number of available cache slices in the binding tunnel reordering cache is returned.
• the first network device may directly use the size of the available buffer space of the binding tunnel reordering cache, or derive the size of the used buffer space from it, as the basis for determining the further load sharing policy; a slice-based sketch follows below.
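• As a purely illustrative sketch of the slice-based accounting above, assuming a 256-byte slice and a total of 20 slices as in the example (both values are assumptions, not requirements):

```python
import math

# Hypothetical sketch: slice-based usage of the binding tunnel reordering cache.
SLICE_BYTES = 256
TOTAL_SLICES = 20

def slices_used(packet_lengths):
    # Each packet is assigned one or more whole slices according to its length.
    return sum(math.ceil(length / SLICE_BYTES) for length in packet_lengths)

def slices_available(packet_lengths):
    return TOTAL_SLICES - slices_used(packet_lengths)

# Example from the text: a 1518-byte packet -> 6 used slices, 14 available.
print(slices_used([1518]), slices_available([1518]))  # 6 14
```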
  • the first network device may further determine, according to the number of packets in the binding tunnel reordering cache, the usage of the cache space bound to the tunnel reordering cache.
• the minimum unit of the buffer space is one packet.
• for example, the maximum number of packets that the binding tunnel reordering cache is configured to store is S, and the number of packets queued in the binding tunnel reordering cache is T; the size of the used buffer space of the binding tunnel reordering cache is then T packets. In this case, a set number of packets can be used as the set threshold. S is greater than 1, and T is greater than zero.
• the number of packets in the binding tunnel reordering cache may be returned directly, that is, the size of the used buffer space of the binding tunnel reordering cache is returned.
• the first network device may directly use the size of the used buffer space of the binding tunnel reordering cache as the basis for determining the further load sharing policy.
• the first network device may also derive the size of the available buffer space of the binding tunnel reordering cache from the size of the used buffer space, and use it as the basis for determining the further load sharing policy.
• alternatively, the size of the available buffer space of the binding tunnel reordering cache may be returned directly, that is, the number of additional packets that can still be stored in the binding tunnel reordering cache is returned.
• the first network device may directly use the size of the available buffer space of the binding tunnel reordering cache, or derive the size of the used buffer space from it, as the basis for determining the further load sharing policy.
• a packet in the binding tunnel reordering cache refers to a packet that has not yet been correctly sorted in the binding tunnel reordering cache; packets whose sorting has been completed are not counted as packets in the binding tunnel reordering cache.
• by performing, according to the size of the used buffer space in the binding tunnel reordering cache or the size of the available buffer space in the binding tunnel reordering cache, dynamic load sharing between the first tunnel and the second tunnel on the packets transmitted by the first network device to the second network device, the network delay of the bonded tunnel can be effectively reduced, and/or the problem that traffic overflow of the binding tunnel reordering cache caused by network congestion leads to packet loss and triggers application-layer retransmission can be more noticeably suppressed.
• the number of packets in the binding tunnel reordering cache may also be determined according to the number of packets that have not completed correct sorting in the first tunnel and the number of packets that have not completed correct sorting in the second tunnel. The following describes how to determine the usage of the binding tunnel reordering cache according to the number of packets that have not completed correct sorting in each tunnel.
• before a packet completes correct sorting in the binding tunnel reordering cache of the second network device, the packet may reside in any of the following locations: on the transmission path of a tunnel in the binding tunnel (denoted tunnel i for convenience of explanation), in the reordering cache of tunnel i, or in the binding tunnel reordering cache.
  • i is a positive integer greater than or equal to 1
  • tunnel 1 refers to the first tunnel described in this application
  • tunnel 2 refers to the second tunnel described in this application, and so on, and details are not described herein.
• a packet in any of the above locations is called a packet that has not completed correct sorting in the binding tunnel.
• a packet being transmitted in tunnel i and a packet in the reordering cache of tunnel i are called packets that have not completed correct sorting in tunnel i.
  • the packets in the reordering buffer of tunnel i refer to those packets that have entered the reordering buffer of tunnel i but have not completed the correct ordering. For example, for the first tunnel, the total number of packets being transmitted in the first tunnel and the packets in the first tunnel reordering buffer is recorded as the number of packets that are not correctly sorted in the first tunnel. The total number of packets being transmitted in the second tunnel and the packets in the second tunnel reordering buffer is recorded as the number of packets that are not correctly sorted in the second tunnel.
• the reordering cache of the first tunnel is referred to herein as the first tunnel reordering cache, and the reordering cache of the second tunnel is referred to herein as the second tunnel reordering cache.
  • the first tunnel reordering buffer is configured to correctly sort the packets transmitted by the first tunnel, and the packets that are correctly sorted by the first tunnel reordering cache enter the binding tunnel reordering buffer.
• the second tunnel reordering cache is used to correctly sort the packets transmitted through the second tunnel, and the packets that have been correctly sorted in the second tunnel reordering cache enter the binding tunnel reordering cache.
  • the binding tunnel reordering buffer is configured to correctly sequence all the packets transmitted through the first tunnel and the second tunnel.
• the first network device may directly determine, according to the determined number of packets B in the binding tunnel reordering cache, the size of the buffer space used by the binding tunnel reordering cache, and use it as a basis for determining a further load sharing policy.
  • the first network device may also obtain the size of the buffer space available for the bound tunnel reordering cache based on the size of the buffer space used by the binding tunnel reordering cache, as a basis for determining a further load sharing strategy.
• the second network device further includes the foregoing first tunnel reordering cache and second tunnel reordering cache. When the first network device sends the multiple data packets to the second network device, the first network device sends a first part of the multiple data packets to the second network device through the first tunnel, and sends a second part of the data packets to the second network device through the second tunnel.
• the second network device sends a first acknowledgment response for the first part of the data packets to the first network device, and the first network device determines, according to the first acknowledgment response, the number of packets that have entered the first tunnel reordering cache and completed correct sorting, and the number of packets that have entered the binding tunnel reordering cache and completed correct sorting.
• the first network device may determine, according to the packet sequence number of the packet that has entered the first tunnel reordering cache and completed correct sorting, the number of packets that have entered the first tunnel reordering cache and completed correct sorting.
• the packet sequence number of the packet that has entered the first tunnel reordering cache and completed correct sorting may be the sequence number, obtained when or before the first acknowledgment response is sent, of the latest packet that has entered the first tunnel reordering cache and completed correct sorting.
• likewise, the packet sequence number of the packet that has entered the binding tunnel reordering cache and completed correct sorting may be the sequence number, obtained before the first acknowledgment response is sent, of the latest packet that has entered the binding tunnel reordering cache and completed correct sorting.
• the sequence number or the number of the packets that have entered the first tunnel reordering cache and completed correct sorting may be carried directly in the first acknowledgment response, or may be the sequence number or number of the packets that entered the first tunnel reordering cache and completed correct sorting within a set time or period.
• the foregoing embodiments are also applicable to determining the number M of packets that have entered the binding tunnel reordering cache and completed correct sorting.
• combining the number of packets that have entered the first tunnel reordering cache and completed correct sorting with the number of packets in the first part of the data packets that the first network device sent to the second network device through the first tunnel yields the number F1 of packets that have not completed correct sorting in the first tunnel.
• specifically, subtracting the number of packets that have entered the first tunnel reordering cache and completed correct sorting from the number of packets in the first part of the data packets sent by the first network device to the second network device yields the number F1 of packets that have not completed correct sorting in the first tunnel.
• the second network device sends a second acknowledgment response for the second part of the data packets to the first network device, and the first network device determines, according to the second acknowledgment response, the number of packets that have entered the second tunnel reordering cache and completed correct sorting, and the number N of packets that have entered the binding tunnel reordering cache and completed correct sorting.
• the manner of determination is similar to determining, according to the first acknowledgment response, the number of packets that have entered the first tunnel reordering cache and completed correct sorting and the number M of packets that have entered the binding tunnel reordering cache and completed correct sorting, and is not described again here.
• the first network device determines the number FB of packets that have not completed correct sorting in the binding tunnel reordering cache according to the larger of the number M, determined from the first acknowledgment response, of packets that have entered the binding tunnel reordering cache and completed correct sorting and the number N, carried in the second acknowledgment response, of packets that have entered the binding tunnel reordering cache and completed correct sorting, together with the number of the multiple data packets sent by the first network device.
• specifically, FB may be obtained by subtracting the larger of M and N from the number of the multiple data packets sent by the first network device.
• alternatively, a first value may be obtained from M and the number of data packets sent by the first network device, and a second value may be obtained from N and the number of data packets sent by the first network device; each value is a candidate for the number of packets not correctly sorted in the binding tunnel reordering cache, and the two values are then compared (the smaller of the two corresponds to subtracting the larger of M and N), as in the sketch below.
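• A minimal sketch of these bookkeeping formulas, assuming the counts M, N and the per-tunnel sent/sorted counts have already been extracted from the acknowledgment responses; all function and variable names are hypothetical.

```python
# Hypothetical sketch of the counting described above; not the claimed implementation.

def unsorted_in_tunnel(sent_in_tunnel: int, sorted_in_tunnel: int) -> int:
    # F1 or F2: packets sent over one member tunnel minus packets that have
    # entered that tunnel's reordering cache and completed correct sorting.
    return sent_in_tunnel - sorted_in_tunnel

def unsorted_in_binding_cache(total_sent: int, m: int, n: int) -> int:
    # FB: total packets sent over the binding tunnel minus the larger of the
    # binding-cache sorted counts reported by the two acknowledgment responses.
    first_value = total_sent - m
    second_value = total_sent - n
    return min(first_value, second_value)   # equivalent to total_sent - max(m, n)
```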
• the following describes in detail how the first network device determines, according to the packet sequence number of the packet that has entered the first tunnel reordering cache and completed correct sorting, the number of packets that have entered the first tunnel reordering cache and completed correct sorting, and how it determines, according to the first acknowledgment response, the number of packets that have entered the binding tunnel reordering cache and completed correct sorting.
  • each of the first part of the data packet includes a binding tunnel number of the packet and a first tunnel sequence number.
  • the first tunnel sequence number is used to indicate the sequence number of each packet in the first partial data packet in the first tunnel, that is, each packet in the first partial data packet is in the first The order of transmission in a tunnel.
• the packets transmitted in the first tunnel may not reach the first tunnel reordering cache in the transmission order indicated by the first tunnel sequence numbers, so the first tunnel reordering cache needs to sort them according to their first tunnel sequence numbers, and the correctly sorted packets are sent from the first tunnel reordering cache to the binding tunnel reordering cache.
  • the binding tunnel sequence number of the packet included in the first part of the data packet is used to indicate the sequence number of the packet in the binding tunnel, that is, the packet is in the binding tunnel. Transmission order.
  • the packet may not arrive in the binding tunnel reordering buffer according to the transmission sequence indicated by the binding tunnel sequence number, and the binding tunnel reordering buffer needs to be sorted according to the binding tunnel sequence number of each packet.
  • the packets that have been correctly sorted are sent from the bound tunnel reordering buffer to other devices in the network.
  • Each of the second part of the data packet includes a binding tunnel number of the packet and a second tunnel sequence number.
• the second tunnel sequence number is used to indicate the sequence number of each packet in the second part of the data packets within the second tunnel, that is, the transmission order of each packet in the second part of the data packets in the second tunnel; the binding tunnel sequence number included in each packet is used to indicate the sequence number of the packet in the binding tunnel, that is, the transmission order of the packet in the binding tunnel.
• the second network device may take the first tunnel sequence number carried in the latest packet that, when or before the first acknowledgment response is sent, or within a set time or period, has entered the first tunnel reordering cache and completed correct sorting, and carry it in the first acknowledgment response as the first tunnel acknowledgement sequence number.
• the first tunnel acknowledgement sequence number here may be that first tunnel sequence number itself, or may be a sequence number mapped from the first tunnel sequence number; for details, refer to the following example.
• similarly, the second network device may take the binding tunnel sequence number carried in the latest packet that, when or before the first acknowledgment response is sent, or within a set time or period, has entered the binding tunnel reordering cache and completed correct sorting, and carry it in the first acknowledgment response to the first network device.
• in the embodiments of this application, the binding tunnel sequence number that is carried in the first acknowledgment response and that belongs to a packet which has entered the binding tunnel reordering cache and completed correct sorting is referred to as the binding tunnel acknowledgement sequence number.
• the first network device may determine, according to the first tunnel acknowledgement sequence number in the first acknowledgment response, the number of packets that have entered the first tunnel reordering cache and completed correct sorting. In a specific implementation, this number is obtained from the first tunnel sequence numbers in the first part of the data packets sent by the first network device to the second network device and the first tunnel acknowledgement sequence number carried in the first acknowledgment response.
• the first network device may determine, according to the binding tunnel acknowledgement sequence number in the first acknowledgment response, the number of packets that have entered the binding tunnel reordering cache and completed correct sorting. In a specific implementation, the binding tunnel acknowledgement sequence number carried in the first acknowledgment response is subtracted from the largest binding tunnel sequence number among the multiple data packets sent by the first network device to the second network device, and the result is used to obtain the number of packets that have entered the binding tunnel reordering cache and completed correct sorting.
• the second network device may take the second tunnel sequence number carried in the latest packet that, when or before the acknowledgment response is sent, or within a set time or period, has entered the second tunnel reordering cache and completed correct sorting, and carry it in the second acknowledgment response to the first network device.
• the second tunnel sequence number that is carried in the second acknowledgment response and that belongs to a packet which has entered the second tunnel reordering cache and completed correct sorting is referred to as the second tunnel acknowledgement sequence number.
• the second network device may likewise take the binding tunnel sequence number carried in the latest packet that has entered the binding tunnel reordering cache and completed correct sorting, and carry it in the second acknowledgment response to the first network device.
• in the embodiments of this application, the binding tunnel sequence number that is carried in the second acknowledgment response and that belongs to a packet which has entered the binding tunnel reordering cache and completed correct sorting is also referred to as the binding tunnel acknowledgement sequence number.
• the first network device may determine, according to the second tunnel acknowledgement sequence number in the second acknowledgment response, the number of packets that have entered the second tunnel reordering cache and completed correct sorting. In a specific implementation, the second tunnel acknowledgement sequence number carried in the second acknowledgment response is subtracted from the largest second tunnel sequence number in the second part of the data packets sent by the first network device to the second network device, and the result is used to obtain the number of packets that have entered the second tunnel reordering cache and completed correct sorting.
• the first network device may determine, according to the binding tunnel acknowledgement sequence number in the second acknowledgment response, the number of packets that have entered the binding tunnel reordering cache and completed correct sorting. In a specific implementation, the binding tunnel acknowledgement sequence number carried in the second acknowledgment response is subtracted from the largest binding tunnel sequence number among the multiple data packets sent by the first network device to the second network device, and the result is used to obtain the number of packets that have entered the binding tunnel reordering cache and completed correct sorting.
• there may be a mapping relationship between the acknowledgement sequence numbers and the single tunnel sequence numbers, where a single tunnel sequence number refers to the first tunnel sequence number or the second tunnel sequence number.
• a mapping relationship table may be saved in the second network device, where the mapping relationship table is used to save the mapping relationship between the first tunnel acknowledgement sequence number and the first tunnel sequence number, the mapping relationship between the second tunnel acknowledgement sequence number and the second tunnel sequence number, and the mapping relationship between the binding tunnel acknowledgement sequence number and the binding tunnel sequence number. Correspondingly, the same mapping relationships are also saved in the first network device.
• the mapping relationship may be established, for example, by mapping a first tunnel sequence number expressed as the Arabic numeral 1 to a first tunnel acknowledgement sequence number expressed as the letter A. It should be noted that the foregoing manner of establishing the mapping relationship is only an example and may be implemented in many different manners; any manner in which a person skilled in the art can establish such a correspondence is covered by the mapping rules in the embodiments of this application. The specific form of the mapping relationship table may also be implemented in many different manners, and the correspondence may be expressed as a table or in other forms, which is not limited in this application. A sketch of one possible form of such a table follows.
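• Purely as an illustration of one possible form of such a table; the letter-to-number scheme below mirrors the example above and is not mandated by the method.

```python
# Hypothetical sketch of a sequence-number mapping table kept on both devices.
# Keys are acknowledgement sequence numbers; values are the tunnel sequence
# numbers they stand for.
first_tunnel_ack_map = {"A": 1, "B": 2, "C": 3}
second_tunnel_ack_map = {"A": 1, "B": 2, "C": 3}
bonding_ack_map = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6}

def resolve_first_tunnel_ack(ack: str) -> int:
    # Translate a mapped acknowledgement sequence number back to the first
    # tunnel sequence number before doing any counting.
    return first_tunnel_ack_map[ack]
```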
• in one implementation, each of the multiple data packets sent by the first network device to the second network device includes two sequence numbers, namely the binding tunnel sequence number of the packet and the first tunnel sequence number (or, for packets sent over the second tunnel, the second tunnel sequence number).
• in another implementation, only the binding tunnel sequence number is included in each packet, and the tunnel sequence number of each single tunnel is not included.
• in that case, the second network device may take the binding tunnel sequence number carried in the latest packet that, when or before the acknowledgment response is sent, or within a set time or period, has entered the first tunnel reordering cache and completed correct sorting, and carry it in the first acknowledgment response to the first network device.
  • the first network device is configured to: according to the binding tunnel sequence number in the first acknowledgment response, and a binding tunnel sequence number of each data packet in the first part of the data packet sent by the first network device to the second network device Determining the number of packets that have entered the first tunnel reordering buffer and completed the correct ordering. Similarly, the first network device, according to the binding tunnel sequence number in the second acknowledgment response, and each data packet in the second partial data packet sent by the first network device to the second network device The binding tunnel sequence number determines the number of packets that have entered the second tunnel reordering buffer and completes the correct sorting.
• each packet in the first part of the data packets includes a sequence number (Sequence Number) field for carrying the first tunnel sequence number and a bonding sequence number (Bonding Sequence Number) field for carrying the binding tunnel sequence number.
• each packet in the second part of the data packets includes a Sequence Number field for carrying the second tunnel sequence number and a Bonding Sequence Number field for carrying the binding tunnel sequence number.
  • the Sequence Number field and the Bonding Sequence Number field are carried by a GRE data message.
  • the Sequence Number field and the Bonding Sequence Number field can be carried in the GRE header.
  • the Sequence Number field and the Bonding Sequence Number field may be, for example, 32 bits.
• the first acknowledgment response includes an acknowledgment number (Acknowledgment Number) field and a bonding acknowledgment number (Bonding Acknowledgment Number) field, where the Acknowledgment Number field is used to carry the first tunnel acknowledgement sequence number, and the Bonding Acknowledgment Number field is used to carry the binding tunnel acknowledgement sequence number.
  • the first acknowledgement response is a GRE data packet
  • the Acknowledgment Number field and the Bonding Acknowledgment Number field are carried by a GRE data packet.
  • the Acknowledgment Number field and the Bonding Acknowledgment Number field may be carried in a packet header of a GRE data packet.
  • the Acknowledgment Number field and the Bonding Acknowledgment Number field may be, for example, 32 bits.
  • the first acknowledgment response is a GRE control message
• the first tunnel acknowledgement sequence number and the binding tunnel acknowledgement sequence number are carried by using an attribute type length value (Attribute TLV) field included in the GRE control message.
• the encapsulation format of the Attribute TLV field in the GRE control packet is as shown in the corresponding figure.
  • the Attribute TLV field may be carried in a GRE Tunnel Notify message.
• the value of the attribute type (Attribute Type) may be, for example, 37, and the attribute value (Attribute Value) includes the first tunnel acknowledgement sequence number and the binding tunnel acknowledgement sequence number.
• the format of the Attribute TLV field in the GRE control message may be as shown in the corresponding figure; a hedged sketch of one possible encoding follows.
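• The exact on-the-wire layout is given by the figures of the application rather than the text; purely as an illustration, and assuming a conventional one-byte type, one-byte length, value layout with two 32-bit sequence numbers (an assumption, not a statement of the claimed format), an encoder might look like this:

```python
import struct

# Hypothetical sketch of an Attribute TLV carrying the two acknowledgement
# sequence numbers. The 1-byte type / 1-byte length layout and the field order
# are assumptions for illustration only.
ATTR_TYPE_ACK_NUMBERS = 37  # example Attribute Type value from the text

def encode_ack_attribute_tlv(tunnel_ack: int, bonding_ack: int) -> bytes:
    value = struct.pack("!II", tunnel_ack, bonding_ack)  # two 32-bit numbers
    return struct.pack("!BB", ATTR_TYPE_ACK_NUMBERS, len(value)) + value

def decode_ack_attribute_tlv(data: bytes):
    attr_type, length = struct.unpack("!BB", data[:2])
    tunnel_ack, bonding_ack = struct.unpack("!II", data[2:2 + length])
    return attr_type, tunnel_ack, bonding_ack
```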
• the second acknowledgment response includes an Acknowledgment Number field and a Bonding Acknowledgment Number field, where the Acknowledgment Number field is used to carry the second tunnel acknowledgement sequence number, and the Bonding Acknowledgment Number field is used to carry the binding tunnel acknowledgement sequence number.
  • the second acknowledgment response is a GRE data packet, and the Acknowledgment Number field and the Bonding Acknowledgment Number field are carried by a GRE data packet.
  • the Acknowledgment Number field and the Bonding Acknowledgment Number field may be carried in a packet header of a GRE data packet.
  • the Acknowledgment Number field and the Bonding Acknowledgment Number field may be, for example, 32 bits.
  • the second acknowledgment response is a GRE control message
• the second tunnel acknowledgement sequence number and the binding tunnel acknowledgement sequence number are carried by using an Attribute TLV field included in the GRE control message.
• for the Attribute TLV field in the second acknowledgment response, refer to the description of the first acknowledgment response; details are not described here again.
• when the Sequence Number field, the Bonding Sequence Number field, the Acknowledgment Number field, and the Bonding Acknowledgment Number field carried in GRE data packets, or the Attribute TLV field carried in GRE control messages, are used, the existing protocol needs to be extended so that the number of packets in the binding tunnel reordering cache can be determined based on the content carried in the above fields.
• the foregoing GRE data messages or control messages and the fields or formats therein are merely exemplary descriptions and do not constitute a limitation on the present invention.
• a person skilled in the art can use other fields or formats of the GRE data packet or control packet to carry the first tunnel sequence number, the first tunnel acknowledgement sequence number, the binding tunnel sequence number, the second tunnel sequence number, the second tunnel acknowledgement sequence number, and the binding tunnel acknowledgement sequence number described in the foregoing embodiments.
• the following example illustrates determining the size of the used buffer space of the binding tunnel reordering cache from the number of packets that have not completed correct sorting in the first tunnel and in the second tunnel.
  • the first network device receives six messages.
  • the first network device sends six data packets to the second network device, and each data packet carries a binding tunnel sequence number based on the binding tunnel.
• the binding tunnel sequence number of the first data packet is 1, the binding tunnel sequence number of the second data packet is 2, and so on; the binding tunnel sequence number of the sixth data packet is 6.
  • the six messages are simply referred to as message 1, message 2, message 3, message 4, message 5, and message 6.
  • the packets 1 to 3 are allocated to the first tunnel for transmission, and the packets 4 to 6 are transmitted through the second tunnel.
  • the first network device allocates a first tunnel number based on the first tunnel to each of the packets 1 to 3.
  • the first tunnel number of the three packets is 1, 2, and 3.
  • the first network device allocates a second tunnel number based on the second tunnel to each of the packets 4 to 6.
  • the second tunnel number of the three packets is 1, 2, and 3.
• when the first network device sends the foregoing packets through the first tunnel and the second tunnel, it separately records the first tunnel sequence numbers, the second tunnel sequence numbers, and the binding tunnel sequence numbers.
  • the first network device may save the foregoing information in the form of a table, and may save the foregoing information in other forms, and the saved form is not limited.
• the first network device sends packet 1 through the first tunnel and records the first tunnel sequence number and the binding tunnel sequence number of packet 1; that is, after packet 1 is sent, the recorded first tunnel sequence number is 1 and the recorded binding tunnel sequence number is 1.
• the first network device sends packet 2 through the first tunnel and records the first tunnel sequence number and the binding tunnel sequence number of packet 2; that is, after packet 2 is sent, the recorded first tunnel sequence number is 2 and the recorded binding tunnel sequence number is 2.
  • the first network device sends a packet to the second network device, and records the single tunnel sequence number and the binding tunnel sequence number corresponding to the packet.
• the first network device may store a list of the single tunnel sequence numbers and binding tunnel sequence numbers corresponding to the recorded packets, where each entry is used to record the first tunnel sequence number, the second tunnel sequence number, and the binding tunnel sequence number.
  • the first network device may also save two lists, where one list is used to record the correspondence between the first tunnel sequence number and the bound tunnel sequence number, and another list is used to record the second tunnel sequence number and tie. The correspondence between the serial number of the tunnel.
  • the present application does not specifically limit the manner in which the first network device saves the single tunnel number corresponding to the message and the form of the bound tunnel sequence number.
  • the single tunnel number of the packet sent by the first tunnel is the first tunnel serial number
  • the single tunnel serial number of the packet sent by the second tunnel is the second tunnel serial number.
• the first network device may directly update the previous entry to obtain the entry record corresponding to the currently sent packet.
• for example, the entry record when packet 1 is sent is that the first tunnel sequence number is 1 and the binding tunnel sequence number is 1; when packet 2 is sent, the first network device directly updates the first tunnel sequence number in that entry to 2 and the binding tunnel sequence number to 2.
• the first network device can also create a new entry to record the single tunnel sequence number and the binding tunnel sequence number corresponding to the current packet, and delete the record of the previous entry.
• alternatively, the first network device may create a new entry, record the single tunnel sequence number and the binding tunnel sequence number corresponding to the current packet, and keep the previous entry record or automatically age it out after the aging time arrives; the form of the entry record is not limited in this application.
  • the second network device receives the six packets sent by the first network device.
• the packets may need to be forwarded multiple times during transmission; if any router on the path is congested or drops packets, the packets may arrive out of order, and interference with the physical signals during transmission may also result in packet loss and disorder.
• packet 1 and packet 3 reach the second network device first, while packet 2 is still on the transmission path of the first tunnel; packet 4 and packet 6 reach the second network device first, while packet 5 is still on the transmission path of the second tunnel.
• the second network device receives packet 1 and packet 3 through the first tunnel; the first tunnel sequence number carried in packet 1 is 1, so packet 1 is considered to have completed correct sorting in the first tunnel reordering cache, enters the binding tunnel reordering cache from the first tunnel reordering cache, and is output from the binding tunnel reordering cache to other network devices.
  • the packet 3 arrives at the first tunnel reordering buffer before the packet 2, so it needs to wait for the packet 2 to reach the first tunnel reordering buffer, and complete the correct sorting before entering the binding tunnel reordering buffer.
  • the second network device receives the packet 4 and the packet 6 through the second tunnel, and the second tunnel number carried in the packet 4 is 1, and therefore, the packet 4 is considered to be correctly sorted in the second tunnel reordering buffer. Enter the bound tunnel reordering cache. Packet 4 arrives at the binding tunnel reordering cache before the packet 2 and the packet 3. Therefore, it is necessary to wait for the packet 2 and the packet 3 to arrive in the binding tunnel reordering buffer before the correct sorting can be completed.
• the second network device carries the first tunnel sequence number 1 of the correctly sorted packet 1 as the first tunnel acknowledgement sequence number in the first acknowledgment response replied to the first network device, and also carries, as the binding tunnel acknowledgement sequence number, the binding tunnel sequence number of the latest packet that has entered the binding tunnel reordering cache and been correctly sorted.
• the second network device returns the second tunnel sequence number of the correctly sorted packet 4 as the second tunnel acknowledgement sequence number to the first network device, and returns, as the binding tunnel acknowledgement sequence number, the binding tunnel sequence number carried in the packet that has, at the current time, entered the binding tunnel reordering cache and been correctly sorted.
• the number of packets sent by the first network device through the first tunnel is 3, and when the first acknowledgment response is received, the recorded first tunnel sequence number of the most recently sent packet is 3; according to the received first acknowledgment response, the first tunnel acknowledgement sequence number is determined to be 1.
• the number of packets sent by the first network device through the second tunnel is 3, and when the second acknowledgment response is received, the recorded second tunnel sequence number of the most recently sent packet is 3; according to the received second acknowledgment response, the second tunnel acknowledgement sequence number is determined to be 1.
• the total number of packets sent by the first network device over the binding tunnel is 6, and the recorded binding tunnel sequence number is 6.
• the larger of the binding tunnel acknowledgement sequence numbers determined from the first acknowledgment response and the second acknowledgment response is 1; the quantities F1, F2, and FB for this example are worked through in the sketch below.
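• To make the arithmetic of this example explicit, the following sketch computes the per-tunnel and binding-tunnel counts of packets that have not completed correct sorting; the numbers are exactly those of the example, and the variable names are hypothetical.

```python
# Worked example: 6 packets, 3 per tunnel; sequence numbers start at 1, so an
# acknowledgement sequence number equals the count of packets that have
# completed correct sorting in the corresponding reordering cache.
sent_first_tunnel = 3
sent_second_tunnel = 3
total_sent = 6

first_tunnel_ack = 1    # packet 1 completed sorting in the first tunnel cache
second_tunnel_ack = 1   # packet 4 completed sorting in the second tunnel cache
bonding_ack = 1         # only packet 1 completed sorting in the binding cache

f1 = sent_first_tunnel - first_tunnel_ack     # 2: packets 2 and 3
f2 = sent_second_tunnel - second_tunnel_ack   # 2: packets 5 and 6
fb = total_sent - bonding_ack                 # 5: packets 2 through 6
print(f1, f2, fb)  # 2 2 5
```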
  • Time 1, time 2, and time 3 described in this example may refer to a time range, and may also refer to a specific time.
• the following describes in detail how the first network device performs load sharing between the first tunnel and the second tunnel, for the packets it transmits to the second network device, according to the usage of the buffer space of the binding tunnel reordering cache and the set load sharing policy.
• the set load sharing policy may specifically be based on, for example, the number of packets that have not completed correct sorting in each single tunnel, the delay of each single tunnel, or the buffer space usage of each single tunnel reordering cache, or on other load sharing policies that a person skilled in the art may conceive after reading the embodiments of this application.
  • the implementation of load balancing based on the number of packets that have not been correctly sorted in each single tunnel is described below.
• when the first network device determines that the size of the used space of the binding tunnel reordering cache is greater than or equal to a set first threshold, or the size of the available space of the binding tunnel reordering cache is less than or equal to a set second threshold, the first network device selects the single tunnel that has the smaller number of packets not yet correctly sorted to carry the packets transmitted by the first network device to the second network device, thereby performing load sharing.
  • the single tunnel here refers to each tunnel in the binding tunnel, such as the first tunnel or the second tunnel in this embodiment.
• the number of packets not correctly sorted in the first tunnel and the number of packets not correctly sorted in the second tunnel can be determined by using the specific implementation manners described above; the following briefly restates how these numbers are obtained.
• each packet in the first part of the data packets includes the first tunnel sequence number of the packet, and the first tunnel sequence number is used to indicate the sequence number of each packet in the first part of the data packets within the first tunnel.
• each packet in the second part of the data packets includes the second tunnel sequence number of the packet, and the second tunnel sequence number is used to indicate the sequence number of each packet in the second part of the data packets within the second tunnel.
  • the first network device receives a first acknowledgement response sent by the second network device for the first data packet in the first partial data packet.
  • the first network device receives a second acknowledgement response sent by the second network device for the second data packet in the second partial data packet.
  • the first acknowledgement response includes a first tunnel acknowledgement sequence number
  • the second acknowledgement response includes a second tunnel acknowledgement sequence number.
  • the first network device determines, according to the first tunnel acknowledgement sequence number, the number of packets that have entered the first tunnel reordering buffer and completes correct sorting.
  • the first network device determines, according to the second tunnel acknowledgement sequence number, the number of packets that have entered the second tunnel reordering buffer and completes the correct sorting.
• the first network device obtains the number of packets not correctly sorted in the first tunnel according to the number of packets in the first part of the data packets it sent and the number, determined according to the first acknowledgment response, of packets that have entered the first tunnel reordering cache and completed correct sorting.
• the first network device obtains the number of packets not correctly sorted in the second tunnel according to the number of packets in the second part of the data packets it sent and the number, determined according to the second acknowledgment response, of packets that have entered the second tunnel reordering cache and completed correct sorting; a selection sketch based on these counts follows below.
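• As an illustration of the selection step, the threshold comparison and the two-tunnel structure are taken from the text; the function names and the numeric values are hypothetical.

```python
# Hypothetical sketch: pick the member tunnel with fewer packets that have not
# completed correct sorting, once the binding-cache usage crosses a threshold.

def should_rebalance(used: int, available: int,
                     first_threshold: int, second_threshold: int) -> bool:
    return used >= first_threshold or available <= second_threshold

def pick_tunnel(f1: int, f2: int) -> str:
    # f1 / f2: packets not yet correctly sorted in the first / second tunnel.
    return "first_tunnel" if f1 <= f2 else "second_tunnel"

if should_rebalance(used=5, available=1, first_threshold=4, second_threshold=2):
    target = pick_tunnel(f1=2, f2=4)   # subsequent packets go to the first tunnel
```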
• when the first network device determines that the size of the used space of the binding tunnel reordering cache is greater than or equal to the set first threshold, or the size of the available space of the binding tunnel reordering cache is less than or equal to the set second threshold, the first network device performs load sharing among the multiple single tunnels, for the packets it transmits to the second network device, according to the buffer space usage of each single tunnel reordering cache.
• a single tunnel in the binding tunnel refers to each tunnel in the binding tunnel, such as the first tunnel or the second tunnel in this embodiment; a single tunnel reordering cache refers to the reordering cache of each member tunnel of the binding tunnel, such as the first tunnel reordering cache or the second tunnel reordering cache in this embodiment; the buffer space usage of a single tunnel reordering cache may include, for example, the size of the used buffer space or the size of the available buffer space of that single tunnel reordering cache.
• when the first network device determines that the size of the used space of the binding tunnel reordering cache is greater than or equal to the set first threshold, or the size of the available space of the binding tunnel reordering cache is less than or equal to the set second threshold, the first network device selects the single tunnel whose reordering cache has less used buffer space, or more available buffer space, to carry multiple consecutive packets received by the first network device, thereby performing load sharing.
• the first network device determines the buffer space usage of a single tunnel reordering cache according to the length of the packet queue in the single tunnel reordering cache, according to the number of used or available cache slices in the single tunnel reordering cache, or according to the number of packets in the single tunnel reordering cache. For details, refer to the earlier description of the buffer space usage of the binding tunnel reordering cache; the details are not repeated here.
• the acknowledgment responses may be returned to the first network device by the second network device: the first acknowledgment response returned over the first tunnel is used to determine the buffer space usage of the first tunnel reordering cache, and the second acknowledgment response returned over the second tunnel is used to determine the buffer space usage of the second tunnel reordering cache.
  • the acknowledgment response is a GRE data packet
  • the GRE data packet includes a Reorder Buffer Size field
  • the usage of the cache space of the single tunnel reordering cache is determined according to the content carried by the Reorder Buffer Size field.
  • the Reorder Buffer Size field can be carried in the GRE header.
  • the Reorder Buffer Size field may be, for example, 32 bits.
• the content carried by the Reorder Buffer Size field includes the number of packets in the single tunnel reordering cache, the length of the packet queue in the single tunnel reordering cache, the queue length still available for buffering packets in the single tunnel reordering cache, the number of used cache slices in the single tunnel reordering cache, or the number of available cache slices in the single tunnel reordering cache.
• the acknowledgment response is a GRE control message, where the GRE control message includes an attribute type length value (Attribute TLV) field, the Attribute TLV field includes a type (T) field, a length (L) field, and a value (V) field, and the first network device determines the buffer space usage of the single tunnel reordering cache according to the content carried by the V field.
  • the Attribute TLV field in the GRE control packet is as shown above, and is not described here.
• the value of the attribute type (Attribute Type) may be, for example, 38, which is used to indicate that the buffer space usage of the single tunnel reordering cache is being returned.
  • the content carried by the attribute value Attribute Value is as described above and will not be described again.
  • GRE data message or control message and the fields or formats therein are merely exemplary descriptions, and do not constitute a limitation of the present invention.
• a person skilled in the art can use other fields or formats of the GRE data packet or control packet to carry the number of packets in the single tunnel reordering cache, the length of the packet queue in the single tunnel reordering cache, the queue length still available for buffering packets in the single tunnel reordering cache, the number of used cache slices in the single tunnel reordering cache, or the number of available cache slices in the single tunnel reordering cache described in the foregoing embodiments; details are not repeated here.
• when the first network device determines that the size of the used space of the binding tunnel reordering cache is greater than or equal to the set first threshold, or the size of the available space of the binding tunnel reordering cache is less than or equal to the set second threshold, the first network device selects the single tunnel with the smaller round-trip time (RTT) to carry the packets transmitted by the first network device to the second network device, thereby performing load sharing.
• the first network device determines the round-trip delay RTT of the first tunnel according to the time interval between sending a third data packet in the first part of the data packets and receiving the acknowledgment response sent by the second network device for that third data packet, where the first tunnel sequence number included in the third data packet corresponds to the first tunnel acknowledgement sequence number included in the acknowledgment response for the third data packet.
• similarly, the first network device determines the RTT of the second tunnel according to the time interval between sending a fourth data packet in the second part of the data packets and receiving the acknowledgment response sent by the second network device for that fourth data packet, where the second tunnel sequence number included in the fourth data packet corresponds to the second tunnel acknowledgement sequence number included in the acknowledgment response for the fourth data packet.
• when the first network device sends the third data packet through the first tunnel, the third data packet carries the first tunnel sequence number corresponding to the third data packet.
• the first network device measures the time interval from sending the third data packet to receiving the corresponding acknowledgment response, and thereby determines the round-trip delay of the first tunnel; the round-trip delay of the second tunnel is determined based on the same principle.
• in this way, the RTT of each single tunnel can be determined while the data packets themselves are being acknowledged, without separately sending probe packets to measure the RTT of a single tunnel, which saves network overhead. It should be noted that, in addition to the foregoing method for determining the RTT provided in the embodiments of this application, any existing method in the prior art may be used to determine the RTT of a tunnel, for example, separately sending probe packets for detecting the RTT; details are not described again. A sketch of the acknowledgment-based measurement follows.
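• A minimal sketch of measuring per-tunnel RTT from the data packets and their acknowledgment responses, assuming the acknowledgement sequence number can be matched to a previously recorded tunnel sequence number; all names are hypothetical.

```python
import time

# Hypothetical sketch: measure per-tunnel RTT from data packets and their
# acknowledgment responses, keyed by the tunnel sequence number they carry.
send_times = {}

def on_send(tunnel_seq):
    # Record when the packet carrying this tunnel sequence number was sent.
    send_times[tunnel_seq] = time.monotonic()

def on_ack(tunnel_ack_seq):
    # The acknowledgement sequence number corresponds to a previously sent
    # tunnel sequence number; the timestamp difference is that tunnel's RTT.
    sent_at = send_times.pop(tunnel_ack_seq, None)
    return None if sent_at is None else time.monotonic() - sent_at
```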
• a sequence of multiple consecutive packets that subsequently arrive at the first network device for the second network device may thus, according to the set load sharing policy, be transmitted over the tunnel with the smaller RTT, or over the single tunnel with the smaller number of packets that have not completed correct sorting, and is no longer allocated to the tunnel with the larger delay or the larger number of unsorted packets.
• in this way, the usage of the buffer space of the binding tunnel reordering cache is determined from the acknowledgment responses returned by the second network device, and, according to that usage and the set load sharing policy, the packets transmitted by the first network device to the second network device are dynamically load shared between the first tunnel and the second tunnel, which can effectively improve the transmission efficiency of the bonded tunnel.
  • FIG. 5 is a schematic diagram of a first network device 500 according to an embodiment of the present application.
  • the first network device 500 can be a HAAP device or an HG device in FIG. 2, and can be used to perform the method shown in FIG.
  • a first tunnel and a second tunnel are established between the first network device 500 and the second network device, where the first tunnel and the second tunnel are bound by a hybrid port to form a binding tunnel.
  • the second network device includes a binding tunnel reordering buffer, and the binding tunnel reordering buffer is used to sort the packets entering the binding tunnel reordering buffer.
  • the first network device 500 includes: a sending module 501, a receiving module 502, and a processing module 503.
  • the sending module 501 is configured to send multiple data packets to the second network device.
  • the sending module 501 sends the multiple data packets to the second network device by using the first tunnel. In another specific implementation manner, the sending module 501 sends the multiple data packets to the second network device by using the second tunnel. In another specific implementation manner, the sending module 501 sends the first part of the data message to the second network device by using the first tunnel, where the sending module 501 passes the The second tunnel sends a second part of the data message to the second network device.
  • the receiving module 502 is configured to receive an acknowledgment response sent by the second network device.
• the acknowledgment response includes information about the usage of the buffer space of the binding tunnel reordering cache, and the first network device determines the usage of the buffer space of the binding tunnel reordering cache according to that information, and then performs load sharing between the first tunnel and the second tunnel, according to the usage of the buffer space of the binding tunnel reordering cache and the set load sharing policy, on the packets transmitted by the first network device to the second network device.
• the processing module 503 is configured to determine, according to the acknowledgment response, the usage of the buffer space of the binding tunnel reordering cache, and to perform load sharing between the first tunnel and the second tunnel, according to that usage and the set load sharing policy, on the packets that the first network device transmits to the second network device; a structural sketch of these modules follows.
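• Purely as an illustrative structural sketch of the three modules named above; the class, method, and policy names are hypothetical and do not describe the claimed apparatus.

```python
# Hypothetical sketch of first network device 500 with its three modules.
class FirstNetworkDevice:
    def __init__(self, first_tunnel, second_tunnel, policy):
        self.tunnels = {"first": first_tunnel, "second": second_tunnel}
        self.policy = policy          # the set load sharing policy

    def send(self, packets, part_for_first, part_for_second):
        # Sending module 501: split the packets over the two member tunnels.
        self.tunnels["first"].send(packets[:part_for_first])
        self.tunnels["second"].send(
            packets[part_for_first:part_for_first + part_for_second])

    def receive_ack(self, ack):
        # Receiving module 502: hand the acknowledgment response to processing.
        return self.process(ack)

    def process(self, ack):
        # Processing module 503: read the binding-cache usage from the ack and
        # apply the load sharing policy to choose the tunnel for what follows.
        usage = ack["bonding_reorder_buffer_size"]
        return self.policy.select_tunnel(usage, self.tunnels)
```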
• the usage of the buffer space of the binding tunnel reordering cache may include, for example, the size of the used buffer space in the binding tunnel reordering cache or the size of the available buffer space in the binding tunnel reordering cache.
• the set load sharing policy may be based on, for example, the number of packets that have not completed correct sorting in each single tunnel, the delay of each single tunnel, or the buffer space usage of each single tunnel reordering cache, or on other load sharing policies that a person skilled in the art may conceive after reading the embodiments of this application.
• by determining the usage of the buffer space of the binding tunnel reordering cache from the acknowledgment responses returned by the second network device and dynamically load sharing, according to that usage and the set load sharing policy, the packets transmitted by the first network device to the second network device between the first tunnel and the second tunnel, the transmission efficiency of the bonded tunnel can be effectively improved, the network delay of the bonded tunnel can be effectively reduced, and/or the problem that traffic overflow of the binding tunnel reordering cache caused by network congestion leads to packet loss and triggers application-layer retransmission can be suppressed.
• the following describes how the usage of the buffer space of the binding tunnel reordering cache is determined according to a specific acknowledgment response.
• when the usage of the buffer space of the binding tunnel reordering cache includes the size of the used buffer space in the binding tunnel reordering cache, the size of the used buffer space includes the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, or the number of used cache slices in the binding tunnel reordering cache.
• the size of the available buffer space in the binding tunnel reordering cache includes the queue length still available for buffering packets in the binding tunnel reordering cache or the number of available cache slices in the binding tunnel reordering cache.
• the receiving module 502 receives the acknowledgment response sent by the second network device; specifically, the first network device receives, through the first tunnel or the second tunnel, the acknowledgment response returned by the second network device.
  • the acknowledgment response is a GRE data packet
  • the GRE data packet includes a binding reorder buffer size field
• the Bonding Reorder Buffer Size field carries the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the queue length still available for buffering packets in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
  • the acknowledgment response is a GRE control message, where the GRE control message includes an attribute type length value Attribute TLV field, and the Attribute TLV field includes a type T field, a length L field, and a value V field, where the V field bearer
  • the bound tunnel reorders the number of used cached slices in the cache or the number of available cached slices in the bound tunnel reordering cache to determine the usage of the cached space of the bound tunnel reordering cache. Which packet format or field format is used (such as which fields or extension fields are used), the specific content or value of each field, and how to resize the buffer according to the packet queue in the binding tunnel.
  • a detailed description of the number of used or available cached slices in the cache, or the number of packets in the tunnel reordering cache to determine the usage of the cached space of the bound tunnel reordering cache, may be implemented by referring to the above method. The description of the corresponding parts in the examples will not be repeated here.
  • by dynamically load-sharing the packets transmitted by the first network device to the second network device between the first tunnel and the second tunnel according to the size of the used cache space or the size of the available cache space in the binding tunnel reordering cache, the network delay of the bonded tunnel can be effectively reduced, and/or overflow of the binding tunnel reordering cache caused by network congestion, with the resulting packet loss and application-layer retransmission, can be noticeably suppressed.
  • the number of packets in the binding tunnel reordering cache may also be determined according to the number of packets not yet correctly sorted in the binding tunnel, the number of packets not yet correctly sorted in the first tunnel, and the number of packets not yet correctly sorted in the second tunnel.
  • the processing module 503 determines, according to the acknowledgment response, the number F1 of packets not yet correctly sorted in the first tunnel, the number F2 of packets not yet correctly sorted in the second tunnel, and the number FB of packets not yet correctly sorted in the binding tunnel, and then determines the number B of packets in the binding tunnel reordering cache, where B = FB - F1 - F2.
  • the second network device further includes a first tunnel reordering buffer and a second tunnel reordering buffer, where the first tunnel reordering buffer is used to sort the packets that are transmitted through the first tunnel.
  • the second tunnel reordering buffer is configured to sort the packets transmitted by the second tunnel.
  • the sending module 501 sends the first part of the data packets to the second network device through the first tunnel, where each packet in the first part of the data packets includes a binding tunnel sequence number of the packet and a first tunnel sequence number; the first tunnel sequence number indicates the transmission order of each packet of the first part in the first tunnel, and the binding tunnel sequence number included in each packet of the first part indicates the transmission order of that packet in the binding tunnel.
  • the sending module 501 sends the second part of the data packets to the second network device through the second tunnel, where each packet in the second part of the data packets includes a binding tunnel sequence number of the packet and a second tunnel sequence number; the second tunnel sequence number indicates the transmission order of each packet of the second part in the second tunnel, and the binding tunnel sequence number included in each packet of the second part indicates the transmission order of that packet in the binding tunnel.
  • the first confirmation response includes the first tunnel acknowledgement sequence number and the binding tunnel acknowledgement sequence number
  • the second acknowledgement response includes the second tunnel acknowledgement sequence number and the binding tunnel acknowledgement sequence number
  • the processing module 503 determines, according to the first acknowledgment response, the number of packets that have entered the first tunnel reordering cache and been correctly sorted and the number of packets that have entered the binding tunnel reordering cache and been correctly sorted, and obtains the number F1 of packets not yet correctly sorted in the first tunnel according to the number of first-part data packets sent to the second network device and the number of packets that have entered the first tunnel reordering cache and been correctly sorted; it likewise obtains the number F2 of packets not yet correctly sorted in the second tunnel according to the number of second-part data packets sent to the second network device and the number of packets, determined according to the second acknowledgment response, that have entered the second tunnel reordering cache and been correctly sorted.
  • specifically, the processing module 503 may determine, according to the first tunnel acknowledgment sequence number, the number of packets that have entered the first tunnel reordering cache and been correctly sorted, and determine, according to the binding tunnel acknowledgment sequence number included in the first acknowledgment response, the number M of packets that have entered the binding tunnel reordering cache and been correctly sorted; it may also determine, according to the second tunnel acknowledgment sequence number, the number of packets that have entered the second tunnel reordering cache and been correctly sorted, and determine, according to the binding tunnel acknowledgment sequence number included in the second acknowledgment response, the number N of packets that have entered the binding tunnel reordering cache and been correctly sorted.
  • each packet in the first part of the data packets includes a Sequence Number field carrying the first tunnel sequence number and a Bonding Sequence Number field carrying the binding tunnel sequence number; each packet in the second part of the data packets includes a Sequence Number field carrying the second tunnel sequence number and a Bonding Sequence Number field carrying the binding tunnel sequence number.
  • the first acknowledgment response is a generic routing encapsulation GRE data packet, in which the Acknowledgment Number field carries the first tunnel acknowledgment sequence number and the Bonding Acknowledgment Number field carries the binding tunnel acknowledgment sequence number.
  • alternatively, the first acknowledgment response is a GRE control message, in which an attribute type-length-value Attribute TLV field carries the first tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number.
  • the second acknowledgment response is a GRE data packet, in which the Acknowledgment Number field carries the second tunnel acknowledgment sequence number and the Bonding Acknowledgment Number field carries the binding tunnel acknowledgment sequence number; or the second acknowledgment response is a GRE control message, in which an Attribute TLV field carries the second tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number.
  • the Sequence Number, Bonding Sequence Number, Acknowledgment Number, and Bonding Acknowledgment Number fields carried in GRE data packets, or the Attribute TLV field carried in GRE control messages, require no extension of the existing protocol, and the number of packets in the binding tunnel reordering cache is determined according to the content carried in these fields.
  • for which packet format or field format is used (for example, which fields or extension fields), the specific content or value of each field, and how the first network device determines the usage of the cache space of the binding tunnel reordering cache according to the number of packets in the binding tunnel reordering cache, refer to the detailed description of the corresponding parts of the foregoing method embodiments; details are not repeated here.
  • load sharing, by the first network device, of the packets transmitted to the second network device between the first tunnel and the second tunnel according to the usage of the cache space of the binding tunnel reordering cache and the configured load sharing policy is performed as described in detail in the corresponding parts of the foregoing method embodiments.
  • the configured load sharing policy includes: after the processing module determines that the size of the used cache space in the binding tunnel reordering cache is greater than or equal to a first threshold, or that the size of the available cache space in the binding tunnel reordering cache is less than or equal to a second threshold, it selects, from the first tunnel and the second tunnel, the tunnel with the smaller round-trip delay RTT, or the tunnel with fewer packets not yet correctly sorted, to transmit the packets sent by the first network device to the second network device.
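For illustration only, the following minimal sketch shows the threshold-based selection described in the preceding item. The names (TunnelState, select_tunnel) and the concrete values are illustrative assumptions rather than part of the embodiments; an equivalent check of the available cache space against the second threshold can replace the used-space check.

```python
from dataclasses import dataclass

@dataclass
class TunnelState:
    """Illustrative per-tunnel state kept by the first network device."""
    name: str
    rtt: float       # measured round-trip delay, in seconds
    unsorted: int    # packets sent on this tunnel not yet correctly sorted

def select_tunnel(used_space: int, first_threshold: int,
                  t1: TunnelState, t2: TunnelState,
                  prefer_rtt: bool = True) -> TunnelState:
    """Pick the tunnel for the next packets once the binding tunnel
    reordering cache is filling up; otherwise keep the default (t1 here)."""
    if used_space < first_threshold:
        return t1  # cache not under pressure: keep the existing distribution
    if prefer_rtt:
        return t1 if t1.rtt <= t2.rtt else t2          # smaller RTT wins
    return t1 if t1.unsorted <= t2.unsorted else t2    # fewer unsorted packets wins

# Example: cache above threshold and the DSL tunnel congested -> LTE is chosen.
dsl = TunnelState("DSL", rtt=0.080, unsorted=120)
lte = TunnelState("LTE", rtt=0.035, unsorted=10)
print(select_tunnel(used_space=900, first_threshold=800, t1=dsl, t2=lte).name)  # LTE
```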
  • in addition to the number of packets not yet correctly sorted in each single tunnel and the delay of each single tunnel, the load sharing policy configured in the present application may also be based on the space usage of each single-tunnel reordering cache.
  • for performing load sharing according to the number of packets not yet correctly sorted in each single tunnel, according to the usage of the cache space of each single-tunnel reordering cache, or according to the delay of each single tunnel, refer to the detailed description of the corresponding parts of the foregoing method embodiments; details are not repeated here.
  • the present application provides an implementation for determining the round-trip delay RTT of a single tunnel, which specifically includes: the processing module 503 determines the round-trip delay RTT of the first tunnel according to the time interval between sending a third data packet in the first part of the data packets and receiving the acknowledgment response sent by the second network device for the third data packet.
  • the processing module 503 determines the round-trip delay RTT of the second tunnel according to the time interval between sending a fourth data packet in the second part of the data packets and receiving the acknowledgment response sent by the second network device for the fourth data packet.
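The single-tunnel RTT measurement described above can be sketched as follows: record when a chosen data packet is sent on a tunnel and compute the interval when the acknowledgment response for that packet arrives. The class and method names are illustrative assumptions, not part of the embodiments.

```python
import time

class RttEstimator:
    """Illustrative per-tunnel RTT tracker: remember when selected packets
    were sent and compute the RTT when their acknowledgment responses arrive."""
    def __init__(self):
        self._sent_at = {}   # tunnel sequence number -> send timestamp
        self.rtt = None      # last measured round-trip delay, in seconds

    def on_send(self, tunnel_seq: int) -> None:
        self._sent_at[tunnel_seq] = time.monotonic()

    def on_ack(self, acked_tunnel_seq: int) -> None:
        sent = self._sent_at.pop(acked_tunnel_seq, None)
        if sent is not None:
            self.rtt = time.monotonic() - sent

# Usage: the estimator of the first tunnel is fed the "third data packet"
# and its acknowledgment; no separate probe packets are needed.
est = RttEstimator()
est.on_send(tunnel_seq=42)
# ... later, when the acknowledgment response for sequence 42 arrives ...
est.on_ack(acked_tunnel_seq=42)
```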
  • the first network device 500 may correspond to the first network device in the load sharing method according to the embodiments of the present application, and the foregoing modules in the network device and the other operations and/or functions described above are respectively intended to implement the corresponding procedures of the method 100 in FIG. 3; for brevity, details are not repeated here.
  • FIG. 6 is a schematic diagram of a second network device 600 according to an embodiment of the present application.
  • the second network device 600 can be an HG device or a HAAP device in FIG. 2, and can be used to perform the method 100 shown in FIG. 3.
  • a first tunnel and a second tunnel are established between the second network device 600 and the first network device, and the first tunnel and the second tunnel are bound to each other by a Hybrid bonding.
  • the second network device 600 includes: a receiving module 601, a processing module 602, a sending module 603, and a binding tunnel reordering cache module 604.
  • the binding tunnel reordering cache module 604 is configured to sort the packets entering the binding tunnel reordering buffer.
  • the receiving module 601 is configured to receive multiple data packets sent by the first network device.
  • the second network device receives the multiple data packets by using the first tunnel. In another specific implementation manner, the second network device receives the multiple data packets by using the second tunnel. In another specific implementation manner, the receiving, by the second network device, the multiple data packets sent by the first network device, specifically: the second network device receiving the first network device a first part of the data packets sent by the first tunnel, where the second network device receives the multiple data packets sent by the first network device by using the second tunnel The second part of the data message.
  • the processing module 602 is configured to obtain information about the usage of the cache space of the binding tunnel reordering cache included in the second network device, where the binding tunnel reordering cache is used to sort the packets entering the binding tunnel reordering cache.
  • the usage of the cache space of the binding tunnel reordering cache may include, for example, the size of the used cache space in the binding tunnel reordering cache or the size of the available cache space in the binding tunnel reordering cache.
  • the sending module 603 is configured to send an acknowledgment response to the first network device, where the acknowledgment response includes the information about the usage of the cache space of the binding tunnel reordering cache.
  • the information is used by the first network device to determine the usage of the cache space of the binding tunnel reordering cache and to perform, according to that usage and a configured load sharing policy, load sharing between the first tunnel and the second tunnel for the packets transmitted by the first network device to the second network device.
  • the second network device 600 returns an acknowledgement response to the first network device every time a message is received.
  • the second network device 600 may be configured to periodically return the acknowledgement response to the first network device at a certain time interval.
  • the second network device may further send the acknowledgement response when receiving the request sent by the first network device or reaching a set early warning state.
  • the set alert status includes, but is not limited to, the size of the used cache space of the bound tunnel reorder buffer is greater than or equal to a set threshold, or the size of the available cache space of the bound tunnel reorder buffer is less than or equal to a set value. Threshold. This application does not specifically limit this.
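As a rough sketch of the acknowledgment triggers listed above (per received packet, periodic, on request, or on reaching the configured alert state), the following function combines them; all parameter names and the particular combination shown here are illustrative assumptions, not part of the embodiments.

```python
def should_send_ack(per_packet: bool, now: float, last_ack_time: float,
                    ack_interval: float, request_pending: bool,
                    used_space: int, available_space: int,
                    used_threshold: int, available_threshold: int) -> bool:
    """Decide whether the second network device returns an acknowledgment
    response now, combining the triggering options described above."""
    if per_packet:
        return True                         # acknowledge every received packet
    if request_pending:
        return True                         # explicit request from the first device
    if used_space >= used_threshold or available_space <= available_threshold:
        return True                         # configured alert state reached
    return (now - last_ack_time) >= ack_interval   # periodic acknowledgment

# Example: periodic mode, no request, cache healthy, 250 ms since the last ack.
print(should_send_ack(per_packet=False, now=10.25, last_ack_time=10.0,
                      ack_interval=0.2, request_pending=False,
                      used_space=300, available_space=700,
                      used_threshold=800, available_threshold=100))  # True
```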
  • the configured load sharing policy may be based on, for example, the number of packets not yet correctly sorted in each single tunnel, the delay of each single tunnel, the space usage of each single-tunnel reordering cache, or other load sharing strategies that occur to those skilled in the art after reading the embodiments of the present application.
  • for example, after the first network device determines that the size of the used cache space in the binding tunnel reordering cache is greater than or equal to a first threshold, or that the size of the available cache space in the binding tunnel reordering cache is less than or equal to a second threshold, it selects, from the first tunnel and the second tunnel, the tunnel with the smaller round-trip delay RTT, or the tunnel with fewer packets not yet correctly sorted, to transmit the packets sent by the first network device to the second network device.
  • the following describes how to obtain the usage of the cache space of the binding tunnel reordering cache.
  • when the usage of the cache space of the binding tunnel reordering cache includes the size of the used cache space in the binding tunnel reordering cache, the information about the usage of the cache space of the binding tunnel reordering cache includes the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, or the number of used cache slices in the binding tunnel reordering cache.
  • when the usage of the cache space of the binding tunnel reordering cache includes the size of the available cache space in the binding tunnel reordering cache, the information about the usage of the cache space of the binding tunnel reordering cache includes the available packet-queue length in the binding tunnel reordering cache or the number of available cache slices in the binding tunnel reordering cache.
  • the sending, by the sending module 603, of the acknowledgment response to the first network device specifically includes: the second network device sends the acknowledgment response to the first network device through the first tunnel or the second tunnel.
  • the acknowledgment response is a GRE data packet that includes a Bonding Reorder Buffer Size field, and the Bonding Reorder Buffer Size field carries the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the available packet-queue length in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
  • alternatively, the acknowledgment response is a GRE control message that includes an attribute type-length-value Attribute TLV field consisting of a type T field, a length L field, and a value V field, where the V field carries the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the available packet-queue length in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
  • the foregoing values carried in the acknowledgment response are used by the first network device to determine the usage of the cache space of the binding tunnel reordering cache.
  • the information about the usage of the buffer space of the binding tunnel reordering cache includes a single tunnel acknowledgement number and a binding tunnel acknowledgement sequence number.
  • the single tunnel in the present application refers to each tunnel in the binding tunnel, such as the first tunnel or the second tunnel in this embodiment.
  • the number of packets in the binding tunnel reordering cache may also be determined according to the number of packets not yet correctly sorted in the binding tunnel, the number of packets not yet correctly sorted in the first tunnel, and the number of packets not yet correctly sorted in the second tunnel; in this case, the information is carried by means of the single tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number.
  • the second network device further includes:
  • a first tunnel reordering cache module configured to sort the packets that are transmitted into the first tunnel reordering buffer by using the first tunnel
  • a second tunnel reordering cache module configured to sort the packets that are transmitted into the second tunnel reordering buffer by using the second tunnel
  • the receiving module 601 receives, through the first tunnel, the first part of the multiple data packets sent by the first network device, where each packet in the first part of the data packets includes a binding tunnel sequence number of the packet and a first tunnel sequence number; the first tunnel sequence number indicates the transmission order of each packet of the first part in the first tunnel, and the binding tunnel sequence number included in each packet of the first part indicates the transmission order of that packet in the binding tunnel.
  • the receiving module 601 receives, through the second tunnel, the second part of the multiple data packets sent by the first network device, where each packet in the second part of the data packets includes a binding tunnel sequence number of the packet and a second tunnel sequence number; the second tunnel sequence number indicates the transmission order of each packet of the second part in the second tunnel, and the binding tunnel sequence number included in each packet of the second part indicates the transmission order of that packet in the binding tunnel.
  • the acknowledgment response includes a first acknowledgment response of the second network device 600 for the first partial data message and a second acknowledgment response of the second network device 600 for the second partial data message.
  • the processing module 602 obtains the first tunnel sequence number in the packet in the first tunnel reordering cache of the second network device that most recently completed correct sorting before the first acknowledgment response is sent, and the binding tunnel sequence number in the packet in the binding tunnel reordering cache that most recently completed correct sorting before the first acknowledgment response is sent, and determines the first tunnel acknowledgment sequence number according to that first tunnel sequence number.
  • the processing module 602 determines, according to the binding tunnel sequence number in the packet in the binding tunnel reordering cache that most recently completed correct sorting before the first acknowledgment response is sent, the binding tunnel acknowledgment sequence number included in the first acknowledgment response.
  • the information about the usage of the cache space of the binding tunnel reordering cache included in the first acknowledgment response includes the first tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number included in the first acknowledgment response.
  • the processing module 602 obtains the second tunnel sequence number in the packet in the second tunnel reordering cache of the second network device that most recently completed correct sorting before the second acknowledgment response is sent, and the binding tunnel sequence number in the packet in the binding tunnel reordering cache that most recently completed correct sorting before the second acknowledgment response is sent, determines the second tunnel acknowledgment sequence number according to that second tunnel sequence number, and determines, according to that binding tunnel sequence number, the binding tunnel acknowledgment sequence number included in the second acknowledgment response.
  • the information about the usage of the cache space of the binding tunnel reordering cache included in the second acknowledgment response includes the second tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number included in the second acknowledgment response.
  • the first tunnel acknowledgment sequence number, the second tunnel acknowledgment sequence number, the binding tunnel acknowledgment sequence number included in the first acknowledgment response, and the binding tunnel acknowledgment sequence number included in the second acknowledgment response are used by the first network device to determine the number of packets in the binding tunnel reordering cache, and the usage of the cache space of the binding tunnel reordering cache is determined according to that number.
  • the first acknowledgment response is a GRE data packet, in which the Acknowledgment Number field carries the first tunnel acknowledgment sequence number and the Bonding Acknowledgment Number field carries the binding tunnel acknowledgment sequence number.
  • alternatively, the first acknowledgment response is a GRE control message, in which an attribute type-length-value Attribute TLV field carries the first tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number.
  • the second acknowledgment response is a GRE data packet, in which the Acknowledgment Number field carries the second tunnel acknowledgment sequence number and the Bonding Acknowledgment Number field carries the binding tunnel acknowledgment sequence number.
  • alternatively, the second acknowledgment response is a GRE control message, in which an Attribute TLV field carries the second tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number.
  • by dynamically load-sharing the packets transmitted by the first network device to the second network device between the first tunnel and the second tunnel according to the size of the used cache space or the size of the available cache space in the binding tunnel reordering cache, the network delay of the bonded tunnel can be effectively reduced, and/or overflow of the binding tunnel reordering cache caused by network congestion, with the resulting packet loss and application-layer retransmission, can be noticeably suppressed.
  • the second network device 600 may correspond to the second network device in the load sharing method according to the embodiments of the present application, and the foregoing modules in the network device and the other operations and/or functions described above are respectively intended to implement the corresponding procedures of the method 100 in FIG. 3; for brevity, details are not repeated here.
  • the first network device 500 and the second network device 600 provided in the foregoing embodiments of the present application are only illustrated by the division of the foregoing functional modules. In an actual application, the function distribution may be completed by different functional modules as needed. That is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
  • FIG. 7 is another schematic diagram of a first network device 700 according to an embodiment of the present application.
  • the first network device 700 can be a HAAP device or an HG device in FIG. 2, and can be used to perform the method 100 shown in FIG. 3.
  • a first tunnel and a second tunnel are established between the first network device 700 and the second network device, and the first tunnel and the second tunnel are bound by a Hybrid bonding to form a binding tunnel.
  • the second network device includes a binding tunnel reordering buffer, and the binding tunnel reordering buffer is used to sort the packets entering the binding tunnel reordering buffer.
  • the first network device 700 includes an input interface 701, an output interface 702, a processor 703, and a memory 704.
  • the input interface 701, the output interface 702, the processor 703, and the memory 704 can be connected by a bus system 705.
  • the memory 704 is configured to store programs, instructions or code.
  • the processor 703 is configured to execute the program, instructions, or code in the memory 704, to control the input interface 701 to receive signals, to control the output interface 702 to send signals, and to implement the steps and functions implemented by the first network device in the implementation corresponding to FIG. 3, which are not described here again.
  • the output interface 702 is configured to send multiple data packets to the second network device, and the input interface 701 is configured to receive an acknowledgment response sent by the second network device.
  • the processor 703 is configured to determine, according to the acknowledgment response, the usage of the cache space of the binding tunnel reordering cache, and to perform load sharing between the first tunnel and the second tunnel, for the packets transmitted by the first network device to the second network device, according to the usage of the cache space of the binding tunnel reordering cache and a configured load sharing policy.
  • FIG. 8 is another schematic diagram of a second network device 800 in accordance with an embodiment of the present application.
  • the second network device 800 can be an HG device or a HAAP device in FIG. 2, and can be used to perform the method 100 shown in FIG. 3.
  • a first tunnel and a second tunnel are established between the second network device 800 and the first network device, and the first tunnel and the second tunnel are bound to each other by a Hybrid bonding.
  • the second network device 800 includes an input interface 801, an output interface 802, a processor 803, and a memory 804.
  • the input interface 801, the output interface 802, the processor 803, and the memory 804 can be connected by a bus system 805.
  • the memory 804 is used to store programs, instructions or code.
  • the processor 803 is configured to execute the program, instructions, or code in the memory 804, to control the input interface 801 to receive signals, to control the output interface 802 to send signals, and to implement the steps and functions implemented by the second network device in the embodiment corresponding to FIG. 3, which are not described here again.
  • the memory 804 includes a binding tunnel reordering buffer for sorting packets that enter the binding tunnel reordering cache, where the input interface 801 is configured to receive the first network device to send.
  • the plurality of data packets, the processor 803 is configured to obtain information about a usage of the buffer space of the binding tunnel reordering cache, where the output interface 802 is configured to send an acknowledgement response to the first network device.
  • the acknowledgment response includes information about the usage of the cache space of the binding tunnel reordering cache.
  • the information is used by the first network device to determine the usage of the cache space of the binding tunnel reordering cache and to perform, according to that usage and a configured load sharing policy, load sharing between the first tunnel and the second tunnel for the packets transmitted by the first network device to the second network device.
  • the memory 804 may further include a first tunnel reordering buffer, configured to sort the packets entering the first tunnel reordering cache, where the memory 804 may further include a second tunnel. And a reordering buffer, configured to sort the packets entering the second tunnel reordering cache.
  • for specific implementations of the foregoing memory 804, output interface 802, input interface 801, and processor 803, refer to the specific descriptions of the receiving module 601, the sending module 603, the processing module 602, and the binding tunnel reordering cache module 604 in the foregoing embodiment of FIG. 6; details are not repeated here.
  • the processor 703 and the processor 803 may be a central processing unit ("CPU"), and may be other general-purpose processors and digital signal processors (DSPs). , an application specific integrated circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware component, and the like.
  • the general purpose processor may be a microprocessor or the processor or any conventional processor or the like.
  • the memory 704 and memory 804 can include read only memory and random access memory and provide instructions and data to the processor 703 and the processor 803, respectively.
  • Memory 704 or a portion of memory 804 may also include non-volatile random access memory.
  • the memory 704 or the memory 804 can also store information of the device type.
  • the bus system 705 and the bus system 805 may include, in addition to the data bus, a power bus, a control bus, a status signal bus, and the like. However, for the sake of clarity, the various buses are labeled as bus systems in the figure.
  • the steps of the method 100 may be performed by an integrated logic circuit of hardware in the processor 703 and the processor 803 or an instruction in the form of software.
  • the steps of the method disclosed in the embodiments of the present application may be directly performed by a hardware processor, or performed by a combination of hardware and software modules in the processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in memory 704 and memory 804, respectively, processor 703 reads the information in memory 704, processor 803 reads the information in memory 804, and completes the steps of method 100 described above in conjunction with its hardware. To avoid repetition, it will not be described in detail here.
  • the processing module 503 in FIG. 5 can be implemented by the processor 703 of FIG. 7, the sending module 501 can be implemented by the output interface 702 of FIG. 7, and the receiving module 502 can be implemented by the input interface 701 of FIG. 7.
  • the processing module 602 in FIG. 6 is implemented by the processor 803 of FIG. 8, the transmitting module 603 can be implemented by the output interface 802 of FIG. 8, and the receiving module 601 can be implemented by the input interface 801 of FIG.
  • the present application further provides a communication system, including a first network device and a second network device, where the first network device may be the first network device provided by the embodiments corresponding to FIG. 5 and FIG. 7.
  • the second network device may be the second network device provided by the embodiment corresponding to FIG. 6 and FIG. 8.
  • the communication system is for performing the method 100 of the embodiment corresponding to Figures 2-4.
  • the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation processes of the embodiments of the present application.
  • modules and method steps of the various examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the solution. A person skilled in the art can use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present application.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer readable storage medium or transferred from one computer readable storage medium to another computer readable storage medium; for example, the computer instructions can be transferred from a website, computer, server, or data center to another website, computer, server, or data center by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio, or microwave).
  • the computer readable storage medium can be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that includes one or more available media.
  • the usable medium may be a magnetic medium (eg, a floppy disk, a hard disk, a magnetic tape), an optical medium (eg, a DVD), or a semiconductor medium (eg, a solid state hard disk Solid State Disk) (SSD)) and so on.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Communication Control (AREA)

Abstract

This application provides a load sharing method. In the method, a first tunnel and a second tunnel are established between a first network device and a second network device, and the first tunnel and the second tunnel are bonded through hybrid interface bonding to form a binding tunnel. The first network device sends multiple data packets to the second network device and, according to acknowledgment responses returned by the second network device, determines the usage of the cache space of the binding tunnel reordering cache of the second network device. According to that usage and a configured load sharing policy, the packets transmitted by the first network device to the second network device are load-shared between the first tunnel and the second tunnel. The method of this application effectively improves the transmission efficiency of the bonded tunnel.

Description

Packet load sharing method and network device
This application claims priority to Chinese Patent Application No. 201710048343.9, filed with the Chinese Patent Office on January 20, 2017 and entitled "Packet load sharing method and network device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of communications technologies, and in particular, to a load sharing method and a network device.
Background
Hybrid Access (HA) networking bundles connections over different access networks and provides them to the same subscriber, allowing the subscriber to experience a high-speed network. For example, the two access networks may be a Digital Subscriber Line (DSL) and Long Term Evolution (LTE). One current way to implement HA is to bond tunnels of different network types between an operator-side network device and a subscriber-side network device through Generic Routing Encapsulation (GRE) tunnels (Hybrid Bonding). For example, a GRE tunnel is established on each of the DSL and LTE Wide Area Network (WAN) interfaces, and the two tunnels are then bonded into one uplink access channel. Messages exchanged between the operator-side network device and the subscriber-side network device are encapsulated in GRE packet format and forwarded between the subscriber-side network device (for example, a Home Gateway, HG) and the operator-side network device (for example, a Hybrid Access Aggregation Point, HAAP). The operator-side network device bundles connections to different access networks to provide high-speed Internet access for the subscriber; the subscriber-side network device allows two different access networks, such as a fixed broadband network and a mobile network, to be accessed simultaneously.
In the prior art, load sharing in a hybrid access network is implemented based on token buckets. Taking the DSL and LTE access networks as an example, two tunnels, a DSL tunnel and an LTE tunnel, are established between the HAAP and the HG. The sending end uses a coloring mechanism: packets are colored according to the DSL and LTE tunnel bandwidths, and the coloring determines whether a packet is sent along the DSL tunnel or the LTE tunnel.
As shown in FIG. 1, the sending end maintains two token buckets, a DSL token bucket (shown with left slashes in FIG. 1) and an LTE token bucket (shown with right slashes in FIG. 1). The sizes of the two token buckets are determined according to the DSL and LTE tunnel bandwidths. Packets entering the DSL token bucket are marked green (left slashes in FIG. 1); packets exceeding the receiving capability of the DSL token bucket enter the LTE token bucket and are marked yellow (right slashes in FIG. 1). Finally, green packets are sent along the DSL tunnel and yellow packets along the LTE tunnel.
Assuming the bandwidths of the DSL and LTE tunnels are fixed, this token-bucket-based load sharing mechanism cannot dynamically adjust the load sharing ratio of the traffic according to the actual conditions of the DSL and LTE tunnels, which reduces the transmission efficiency of the bonded tunnel. For example, when the DSL tunnel is congested or delayed but its throughput has not reached its subscribed bandwidth, data packets are still marked green and keep being sent over the degraded DSL tunnel. The available bandwidth of the LTE tunnel then cannot be reasonably used by the subscriber, the LTE tunnel sits idle, and the delay of the DSL tunnel increases, so the overall throughput of the HA network is poor; transmission errors such as packet loss easily occur on the DSL tunnel, system resource utilization drops considerably, and the realization of hybrid access is hindered. As another example, when the LTE tunnel is congested or delayed, a large number of packets transmitted over DSL can be reordered only after the packets on the LTE tunnel reach the receiving end, so a large number of packets may accumulate in the reordering buffer, reducing the overall throughput so that the bonded tunnel performs worse than a single DSL tunnel. Under severe congestion, the reordering buffer may overflow, packets are discarded, and application-layer retransmission is triggered, which is a serious problem.
Summary
This application provides a load sharing method and a network device, to improve the transmission efficiency of a bonded tunnel in a hybrid access network.
According to a first aspect, this application provides a load sharing method applied to a first network device. A first tunnel and a second tunnel are established between the first network device and a second network device, and the first tunnel and the second tunnel form a binding tunnel through hybrid port bonding (Hybrid bonding). The second network device includes a binding tunnel reordering cache, and the binding tunnel reordering cache is used to sort the packets entering the binding tunnel reordering cache. The first network device may be, for example, an HG device in a hybrid access network, and the second network device may be, for example, an HAAP device in the hybrid access network; or the first network device may be an HAAP device in a hybrid access network and the second network device an HG device in the hybrid access network. In the method, the first network device sends multiple data packets to the second network device and receives an acknowledgment response sent by the second network device; the acknowledgment response may be sent for each of the multiple data packets, for every few packets, or at another configured interval, and may be regarded as the second network device's acknowledgment of the multiple data packets. The first network device then determines, according to the acknowledgment response, the usage of the cache space of the binding tunnel reordering cache, and performs load sharing between the first tunnel and the second tunnel, for the packets transmitted by the first network device to the second network device, according to the usage of the cache space of the binding tunnel reordering cache and a configured load sharing policy.
Optionally, the first network device sends the multiple data packets to the second network device through the first tunnel. Optionally, the first network device sends the multiple data packets to the second network device through the second tunnel. Optionally, the first network device sends a first part of the multiple data packets to the second network device through the first tunnel and sends a second part of the multiple data packets to the second network device through the second tunnel.
In the foregoing solution, the usage of the cache space of the binding tunnel reordering cache is determined from the acknowledgment response returned by the second network device, and the packets transmitted by the first network device to the second network device are dynamically load-shared between the first tunnel and the second tunnel according to that usage and the configured load sharing policy, which can effectively improve the transmission efficiency of the bonded tunnel.
In a possible design, the usage of the cache space of the binding tunnel reordering cache may include the size of the used cache space in the binding tunnel reordering cache or the size of the available cache space in the binding tunnel reordering cache.
In a possible design, the first network device determines the usage of the cache space of the binding tunnel reordering cache according to the acknowledgment response as follows: the first network device determines, according to the acknowledgment response, the number F1 of packets not yet correctly sorted in the first tunnel, the number F2 of packets not yet correctly sorted in the second tunnel, and the number FB of packets not yet correctly sorted in the binding tunnel, and then determines the number B of packets in the binding tunnel reordering cache, where B = FB - F1 - F2; the usage of the cache space of the binding tunnel reordering cache is determined according to the number of packets in the binding tunnel reordering cache.
In a possible design, the second network device further includes a first tunnel reordering cache and a second tunnel reordering cache; the first tunnel reordering cache is used to sort the packets transmitted through the first tunnel, and the second tunnel reordering cache is used to sort the packets transmitted through the second tunnel.
The first network device sending the multiple data packets to the second network device specifically includes: the first network device sends a first part of the multiple data packets to the second network device through the first tunnel, and sends a second part of the multiple data packets to the second network device through the second tunnel.
The first network device receiving the acknowledgment response sent by the second network device specifically includes:
the first network device receives a first acknowledgment response sent by the second network device for a first data packet in the first part of the data packets, and determines, according to the first acknowledgment response, the number of packets that have entered the first tunnel reordering cache and been correctly sorted and the number M of packets that have entered the binding tunnel reordering cache and been correctly sorted; and
the first network device receives a second acknowledgment response sent by the second network device for a second data packet in the second part of the data packets, and determines, according to the second acknowledgment response, the number of packets that have entered the second tunnel reordering cache and been correctly sorted and the number N of packets that have entered the binding tunnel reordering cache and been correctly sorted.
The first network device obtains the number FB of packets not yet correctly sorted in the binding tunnel according to the larger of M and N and the number of the multiple data packets sent by the first network device.
The first network device obtains the number F1 of packets not yet correctly sorted in the first tunnel according to the number of first-part data packets sent to the second network device and the number of packets, determined according to the first acknowledgment response, that have entered the first tunnel reordering cache and been correctly sorted.
The first network device obtains the number F2 of packets not yet correctly sorted in the second tunnel according to the number of second-part data packets sent to the second network device and the number of packets, determined according to the second acknowledgment response, that have entered the second tunnel reordering cache and been correctly sorted.
In a possible design, each packet in the first part of the data packets includes a binding tunnel sequence number of the packet and a first tunnel sequence number; the first tunnel sequence number indicates the transmission order of each packet of the first part in the first tunnel, and the binding tunnel sequence number included in each packet of the first part indicates the transmission order of that packet in the binding tunnel. Each packet in the second part of the data packets includes a binding tunnel sequence number of the packet and a second tunnel sequence number; the second tunnel sequence number indicates the transmission order of each packet of the second part in the second tunnel, and the binding tunnel sequence number included in each packet of the second part indicates the transmission order of that packet in the binding tunnel. The first acknowledgment response includes a first tunnel acknowledgment sequence number and a binding tunnel acknowledgment sequence number; the second acknowledgment response includes a second tunnel acknowledgment sequence number and a binding tunnel acknowledgment sequence number.
The first network device determines, according to the first tunnel acknowledgment sequence number, the number of packets that have entered the first tunnel reordering cache and been correctly sorted, and determines, according to the binding tunnel acknowledgment sequence number included in the first acknowledgment response, the number M of packets that have entered the binding tunnel reordering cache and been correctly sorted.
The first network device determines, according to the second tunnel acknowledgment sequence number, the number of packets that have entered the second tunnel reordering cache and been correctly sorted, and determines, according to the binding tunnel acknowledgment sequence number included in the second acknowledgment response, the number N of packets that have entered the binding tunnel reordering cache and been correctly sorted.
In the foregoing solution, determining the number of packets in the binding tunnel reordering cache can be used to determine the size of the used cache space or the size of the available cache space in the binding tunnel reordering cache. Dynamically load-sharing the packets transmitted by the first network device to the second network device between the first tunnel and the second tunnel according to that size can effectively reduce the network delay of the bonded tunnel and/or noticeably suppress the overflow of the binding tunnel reordering cache that network congestion might otherwise cause, with the resulting packet loss and triggering of application-layer retransmission. In addition, with this solution, fields already defined in an existing protocol (for example, the GRE protocol) can be used to carry the packet sequence numbers, so that the number of packets in the binding tunnel reordering cache is determined from the sequence numbers, which reduces the implementation complexity of the method.
In a possible design, each packet in the first part of the data packets includes a Sequence Number field carrying the first tunnel sequence number and a Bonding Sequence Number field carrying the binding tunnel sequence number; each packet in the second part of the data packets includes a Sequence Number field carrying the second tunnel sequence number and a Bonding Sequence Number field carrying the binding tunnel sequence number.
In a possible design, the first acknowledgment response is a generic routing encapsulation GRE data packet, in which the Acknowledgment Number field carries the first tunnel acknowledgment sequence number and the Bonding Acknowledgment Number field carries the binding tunnel acknowledgment sequence number; or the first acknowledgment response is a GRE control message, in which an attribute type-length-value Attribute TLV field carries the first tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number.
In a possible design, the second acknowledgment response is a GRE data packet, in which the Acknowledgment Number field carries the second tunnel acknowledgment sequence number and the Bonding Acknowledgment Number field carries the binding tunnel acknowledgment sequence number.
In a possible design, the second acknowledgment response is a GRE control message, in which an Attribute TLV field carries the second tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number.
In a possible design, the first network device determines the round-trip delay RTT of the first tunnel according to the time interval between sending a third data packet in the first part of the data packets and receiving the acknowledgment response sent by the second network device for the third data packet.
In a possible design, the first network device determines the round-trip delay RTT of the second tunnel according to the time interval between sending a fourth data packet in the second part of the data packets and receiving the acknowledgment response sent by the second network device for the fourth data packet.
With the method for determining the single-tunnel RTT provided in this application, the single-tunnel RTT can be determined within the dynamic load sharing scheme itself, without sending separate probe packets to measure it, which effectively saves network overhead.
In a possible design, when the usage of the cache space of the binding tunnel reordering cache includes the size of the used cache space in the binding tunnel reordering cache, the size of the used cache space includes the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, or the number of used cache slices in the binding tunnel reordering cache.
In a possible design, when the usage of the cache space of the binding tunnel reordering cache includes the size of the available cache space in the binding tunnel reordering cache, the size of the available cache space includes the available packet-queue length in the binding tunnel reordering cache or the number of available cache slices in the binding tunnel reordering cache.
In a possible design, the acknowledgment response received by the first network device is a GRE data packet that includes a Bonding Reorder Buffer Size field, and the Bonding Reorder Buffer Size field carries the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the available packet-queue length in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
In a possible design, the acknowledgment response received by the first network device is a GRE control message that includes an attribute type-length-value Attribute TLV field consisting of a type T field, a length L field, and a value V field, and the V field carries the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the available packet-queue length in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
In the foregoing implementations, dynamically load-sharing the packets transmitted by the first network device to the second network device between the first tunnel and the second tunnel according to the size of the used cache space or the size of the available cache space in the binding tunnel reordering cache can effectively reduce the network delay of the bonded tunnel and/or noticeably suppress the overflow of the binding tunnel reordering cache that network congestion might otherwise cause, with the resulting packet loss and triggering of application-layer retransmission.
In a possible design, the configured load sharing policy includes: after the first network device determines that the size of the used cache space in the binding tunnel reordering cache is greater than or equal to a first threshold, or that the size of the available cache space in the binding tunnel reordering cache is less than or equal to a second threshold, it selects, from the first tunnel and the second tunnel, the tunnel with the smaller round-trip delay RTT, or the tunnel with fewer packets not yet correctly sorted, to transmit the packets sent by the first network device to the second network device.
After the first network device determines that the used cache space of the binding tunnel reordering cache is greater than or equal to a set threshold, or that the available cache space of the binding tunnel reordering cache is less than or equal to a set threshold, the subsequent sequences of consecutive packets destined for the second network device can, according to the configured load sharing policy, be carried on the tunnel with the smaller RTT or on the single tunnel with fewer packets not yet correctly sorted, instead of being assigned to the tunnel with the larger delay or with more outstanding packets. This effectively prevents more packets from being stranded on a congested tunnel, which would otherwise make packets transmitted over the non-congested tunnel wait in the reordering cache and might cause the binding tunnel reordering cache to overflow, and can therefore effectively reduce packet loss and system retransmission.
According to a second aspect, this application provides a load sharing method applied to a second network device. A first tunnel and a second tunnel are established between the second network device and a first network device, the first tunnel and the second tunnel form a binding tunnel through hybrid port bonding (Hybrid bonding), and the second network device includes a binding tunnel reordering cache used to sort the packets entering the binding tunnel reordering cache. The first network device may be, for example, an HG device in a hybrid access network and the second network device an HAAP device in the hybrid access network, or the first network device may be an HAAP device and the second network device an HG device. First, the second network device receives multiple data packets sent by the first network device and obtains information about the usage of the cache space of the binding tunnel reordering cache. It then sends an acknowledgment response to the first network device; the acknowledgment response includes the information about the usage of the cache space of the binding tunnel reordering cache. The information is used by the first network device to determine the usage of the cache space of the binding tunnel reordering cache and to perform, according to that usage and a configured load sharing policy, load sharing between the first tunnel and the second tunnel for the packets transmitted by the first network device to the second network device.
In the foregoing solution, the usage of the cache space of the binding tunnel reordering cache is determined from the acknowledgment response returned by the second network device, and the packets transmitted by the first network device to the second network device are dynamically load-shared between the first tunnel and the second tunnel according to that usage and the configured load sharing policy, which can effectively improve the transmission efficiency of the bonded tunnel.
In a possible design, the usage of the cache space of the binding tunnel reordering cache includes: the size of the used cache space in the binding tunnel reordering cache or the size of the available cache space in the binding tunnel reordering cache.
In a possible design, the second network device further includes a first tunnel reordering cache and a second tunnel reordering cache; the first tunnel reordering cache is used to sort the packets transmitted through the first tunnel, and the second tunnel reordering cache is used to sort the packets transmitted through the second tunnel.
The second network device receiving the multiple data packets sent by the first network device specifically includes:
the second network device receives, through the first tunnel, a first part of the multiple data packets sent by the first network device, where each packet in the first part of the data packets includes a binding tunnel sequence number of the packet and a first tunnel sequence number; the first tunnel sequence number indicates the transmission order of each packet of the first part in the first tunnel, and the binding tunnel sequence number included in each packet of the first part indicates the transmission order of that packet in the binding tunnel; and
the second network device receives, through the second tunnel, a second part of the multiple data packets sent by the first network device, where each packet in the second part of the data packets includes a binding tunnel sequence number of the packet and a second tunnel sequence number; the second tunnel sequence number indicates the transmission order of each packet of the second part in the second tunnel, and the binding tunnel sequence number included in each packet of the second part indicates the transmission order of that packet in the binding tunnel.
The acknowledgment response includes a first acknowledgment response of the second network device for the first part of the data packets and a second acknowledgment response of the second network device for the second part of the data packets.
The second network device obtains the first tunnel sequence number in the packet in the first tunnel reordering cache that most recently completed correct sorting before the first acknowledgment response is sent, and the binding tunnel sequence number in the packet in the binding tunnel reordering cache that most recently completed correct sorting before the first acknowledgment response is sent. The second network device determines the first tunnel acknowledgment sequence number according to that first tunnel sequence number, and determines the binding tunnel acknowledgment sequence number included in the first acknowledgment response according to that binding tunnel sequence number. The information about the usage of the cache space of the binding tunnel reordering cache included in the first acknowledgment response includes the first tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number included in the first acknowledgment response.
The second network device obtains the second tunnel sequence number in the packet in the second tunnel reordering cache that most recently completed correct sorting before the second acknowledgment response is sent, and the binding tunnel sequence number in the packet in the binding tunnel reordering cache that most recently completed correct sorting before the second acknowledgment response is sent. The second network device determines the second tunnel acknowledgment sequence number according to that second tunnel sequence number, and determines the binding tunnel acknowledgment sequence number included in the second acknowledgment response according to that binding tunnel sequence number. The information about the usage of the cache space of the binding tunnel reordering cache included in the second acknowledgment response includes the second tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number included in the second acknowledgment response.
The first tunnel acknowledgment sequence number, the second tunnel acknowledgment sequence number, the binding tunnel acknowledgment sequence number included in the first acknowledgment response, and the binding tunnel acknowledgment sequence number included in the second acknowledgment response are used by the first network device to determine the number of packets in the binding tunnel reordering cache, and the usage of the cache space of the binding tunnel reordering cache is determined according to that number.
Determining the number of packets in the binding tunnel reordering cache can be used to determine the size of the used cache space or the size of the available cache space in the binding tunnel reordering cache. Dynamically load-sharing the packets transmitted by the first network device to the second network device between the first tunnel and the second tunnel according to that size can effectively reduce the network delay of the bonded tunnel and/or noticeably suppress the overflow of the binding tunnel reordering cache that network congestion might otherwise cause, with the resulting packet loss and triggering of application-layer retransmission. In addition, fields already defined in an existing protocol (for example, the GRE protocol) can be used to carry the packet sequence numbers, so that the number of packets in the binding tunnel reordering cache is determined from the sequence numbers, which reduces the implementation complexity of the method.
In a possible design, the first acknowledgment response is a GRE data packet, in which the Acknowledgment Number field carries the first tunnel acknowledgment sequence number and the Bonding Acknowledgment Number field carries the binding tunnel acknowledgment sequence number.
In a possible design, the first acknowledgment response is a GRE control message, in which an attribute type-length-value Attribute TLV field carries the first tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number.
In a possible design, the second acknowledgment response is a GRE data packet, in which the Acknowledgment Number field carries the second tunnel acknowledgment sequence number and the Bonding Acknowledgment Number field carries the binding tunnel acknowledgment sequence number.
In a possible design, the second acknowledgment response is a GRE control message, in which an Attribute TLV field carries the second tunnel acknowledgment sequence number and the binding tunnel acknowledgment sequence number.
In a possible design, when the usage of the cache space of the binding tunnel reordering cache includes the size of the used cache space in the binding tunnel reordering cache, the information about the usage of the cache space of the binding tunnel reordering cache includes the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, or the number of used cache slices in the binding tunnel reordering cache.
In a possible design, when the usage of the cache space of the binding tunnel reordering cache includes the size of the available cache space in the binding tunnel reordering cache, the information about the usage of the cache space of the binding tunnel reordering cache includes the available packet-queue length in the binding tunnel reordering cache or the number of available cache slices in the binding tunnel reordering cache.
In a possible design, the acknowledgment response sent by the second network device is a generic routing encapsulation GRE data packet that includes a Bonding Reorder Buffer Size field, and the Bonding Reorder Buffer Size field carries the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the available packet-queue length in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
In a possible design, the acknowledgment response sent by the second network device is a GRE control message that includes an attribute type-length-value Attribute TLV field consisting of a type T field, a length L field, and a value V field, and the V field carries the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the available packet-queue length in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
Dynamically load-sharing the packets transmitted by the first network device to the second network device between the first tunnel and the second tunnel according to the size of the used cache space or the size of the available cache space in the binding tunnel reordering cache can effectively reduce the network delay of the bonded tunnel and/or noticeably suppress the overflow of the binding tunnel reordering cache that network congestion might otherwise cause, with the resulting packet loss and triggering of application-layer retransmission.
According to a third aspect, an embodiment of this application provides a first network device configured to perform the method in the first aspect or any possible design of the first aspect. Specifically, the first network device includes modules configured to perform the method in the first aspect or any possible design of the first aspect.
According to a fourth aspect, an embodiment of this application provides a second network device configured to perform the method in the second aspect or any possible design of the second aspect. Specifically, the second network device includes modules configured to perform the method in the second aspect or any possible design of the second aspect.
According to a fifth aspect, an embodiment of this application provides a first network device including an input interface, an output interface, a processor, and a memory, where the input interface, the output interface, the processor, and the memory may be connected by a bus system. The memory is configured to store a program, instructions, or code, and the processor is configured to execute the program, instructions, or code in the memory to complete the method in the first aspect or any possible design of the first aspect.
According to a sixth aspect, an embodiment of this application provides a second network device including an input interface, an output interface, a processor, and a memory, where the input interface, the output interface, the processor, and the memory may be connected by a bus system. The memory is configured to store a program, instructions, or code, and the processor is configured to execute the program, instructions, or code in the memory to complete the method in the second aspect or any possible design of the second aspect.
According to a seventh aspect, an embodiment of this application provides a communication system including the first network device according to the third aspect or the fifth aspect and the second network device according to the fourth aspect or the sixth aspect.
According to an eighth aspect, an embodiment of this application provides a computer-readable storage medium or a computer program product for storing a computer program, where the computer program includes instructions for performing the method in the first aspect, the second aspect, any possible design of the first aspect, or any possible design of the second aspect.
Brief Description of Drawings
FIG. 1 is a schematic diagram of token-bucket-based load sharing in the prior art;
FIG. 2 is a schematic diagram of a hybrid access network architecture according to an embodiment of this application;
FIG. 3 is a schematic flowchart of a load sharing method according to an embodiment of this application;
FIG. 4 is a schematic diagram of calculating the number of packets in a binding tunnel reordering cache according to an embodiment of this application;
FIG. 5 is a schematic diagram of a first network device according to an embodiment of this application;
FIG. 6 is a schematic diagram of a second network device according to an embodiment of this application;
FIG. 7 is a schematic diagram of a hardware structure of a first network device according to an embodiment of this application;
FIG. 8 is a schematic diagram of a hardware structure of a second network device according to an embodiment of this application.
Description of Embodiments
To make the objectives, technical solutions, and advantages of this application clearer, the following further describes this application in detail with reference to the accompanying drawings. The described embodiments are merely some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
Unless otherwise stated, ordinal terms such as "first" and "second" in the embodiments of this application are used to distinguish between multiple objects and are not intended to limit the order, timing, priority, or importance of those objects.
The embodiments of this application may be applied to a hybrid access network that includes a first network device and a second network device, between which a first tunnel and a second tunnel are established. The first tunnel and the second tunnel are bonded to form a virtual bonded tunnel, referred to in this application simply as a binding tunnel. All packets transmitted between the first network device and the second network device are transmitted through the binding tunnel; specifically, these packets include the packets transmitted through the first tunnel and the packets transmitted through the second tunnel. The binding tunnel may be, for example, a GRE tunnel, a Point-to-Point Tunnelling Protocol (PPTP) tunnel, or a User Datagram Protocol (UDP) tunnel, which is not specifically limited in this application; this application describes only the example in which hybrid port bonding is implemented over GRE to form the binding tunnel. The application scenarios described in the embodiments of this application are intended to explain the technical solutions of the embodiments more clearly and do not constitute a limitation on them; a person of ordinary skill in the art knows that, as network architectures evolve and new service scenarios emerge, the technical solutions provided in the embodiments of this application are equally applicable to similar technical problems.
FIG. 2 shows an example of a possible hybrid access network architecture. Terminal devices such as mobile phones, telephones, and laptop computers can connect to an HG device through a wired connection, a wireless local area network (WiFi), or another access mode. The HG device can access both DSL and LTE. By sending an LTE tunnel request and a DSL tunnel request to the HAAP device, the HG device establishes GRE tunnels (shown in FIG. 2 as LTE GRE Tunnel and DSL GRE Tunnel), bonds the LTE tunnel and the DSL tunnel into a binding tunnel (which may also be called a logical GRE tunnel), accesses the HAAP device, and accesses a public network (for example, the Internet) through the HAAP device. Messages between the HG device and the HAAP device are encapsulated in GRE packet format based on the GRE protocol and then forwarded.
In this application, the first network device may be the HG device and the second network device the HAAP device, or the first network device may be the HAAP device and the second network device the HG device. All packets sent by the first network device are globally numbered with a binding tunnel sequence number (which may also be called a logical GRE tunnel sequence number). The binding tunnel sequence number indicates the sequence, within the binding tunnel, of all packets sent by the first network device, that is, their transmission order in the binding tunnel; these packets include both the packets transmitted in the DSL tunnel and the packets transmitted in the LTE tunnel. The second network device restores the order of all packets according to the binding tunnel sequence numbers, thereby implementing the data transmission mechanism of the hybrid access network between the HG and the HAAP.
The following uses the HG device sending a data flow to the HAAP device as an example; it should be understood that this does not constitute a limitation on this application. The HG receives a data flow to be sent that includes six data packets, each carrying a binding tunnel sequence number: 1, 2, 3, 4, 5, and 6. The binding tunnel sequence numbers identify the transmission order of the six data packets in the binding tunnel. The first network device sends the packets with binding tunnel sequence numbers 1 to 4 over the DSL tunnel; when DSL has no available bandwidth, load sharing is performed through the LTE tunnel, which transmits the packets with binding tunnel sequence numbers 5 and 6. The HAAP device uses the binding tunnel reordering cache to buffer the packets transmitted over the DSL and LTE tunnels and reorders them according to the binding tunnel sequence number carried in each packet. If the packets with binding tunnel sequence numbers 1, 2, 5, and 6 enter the binding tunnel reordering cache, while the packets with sequence numbers 3 and 4 have not reached the binding tunnel reordering cache because the DSL tunnel is congested, then the packets with sequence numbers 1 and 2 complete correct sorting and are output to the network, whereas the packets with sequence numbers 5 and 6 have not completed correct sorting and can be output to the network only after the packets with sequence numbers 3 and 4 enter the binding tunnel reordering cache and correct sorting is completed.
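The reordering behaviour in this six-packet example can be reproduced with the following toy model. It only illustrates reordering by binding tunnel sequence number; the class name and structure are assumptions for illustration, not the actual implementation of the HAAP device.

```python
class BindingReorderBuffer:
    """Toy model of the binding tunnel reordering cache: packets are released
    to the network only in binding tunnel sequence number order."""
    def __init__(self):
        self.next_expected = 1   # next binding tunnel sequence number to output
        self.waiting = {}        # out-of-order packets held in the cache

    def receive(self, bonding_seq: int):
        """Return the list of sequence numbers output after this arrival."""
        self.waiting[bonding_seq] = True
        released = []
        while self.next_expected in self.waiting:
            del self.waiting[self.next_expected]
            released.append(self.next_expected)
            self.next_expected += 1
        return released

buf = BindingReorderBuffer()
for seq in (1, 2, 5, 6):              # DSL delivers 1 and 2; LTE delivers 5 and 6 early
    print(seq, "->", buf.receive(seq))   # 1 and 2 are output; 5 and 6 wait in the cache
print(3, "->", buf.receive(3))        # 3 arrives late over DSL: outputs [3]
print(4, "->", buf.receive(4))        # 4 arrives: outputs [4, 5, 6]
print("still buffered:", len(buf.waiting))  # 0
```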
It should be noted that the network architecture example shown in FIG. 2 uses, merely as an example, a binding tunnel formed by bonding a first tunnel and a second tunnel between the first network device and the second network device; the first network device and the second network device may also communicate over a binding tunnel formed by bonding more than two tunnels.
The application scenario of the hybrid access network shown in FIG. 2 is only an example; an actual hybrid access network may also include structures in other forms, which are not limited in this application.
Based on the hybrid access network architecture shown in FIG. 2, the load sharing method 100 provided in the embodiments of this application is described in detail below with reference to FIG. 3. The load sharing method is applied to a first network device and a second network device between which a first tunnel and a second tunnel are established; the first tunnel and the second tunnel form a binding tunnel through hybrid port bonding (Hybrid bonding). The second network device includes a binding tunnel reordering cache, and the binding tunnel reordering cache is used to sort the packets entering the binding tunnel reordering cache.
In a specific implementation, the first network device may be the HG device shown in FIG. 2 and the second network device the HAAP device shown in FIG. 2, or the first network device may be the HAAP device shown in FIG. 2 and the second network device the HG device shown in FIG. 2. The first tunnel may be the DSL tunnel shown in FIG. 2 and the second tunnel the LTE tunnel shown in FIG. 2, or the first tunnel may be the LTE tunnel and, correspondingly, the second tunnel the DSL tunnel. The load sharing method 100 provided in this embodiment of this application includes the following parts.
S101. The first network device sends multiple data packets to the second network device.
Specifically, the first network device receives the multiple data packets. Taking the first network device being the HG as an example, the HG receives the multiple data packets from a mobile phone or another terminal device, which is connected to the HG through a network cable or WiFi and sends the multiple data packets to the HG.
After receiving the multiple data packets, the first network device sends them to the second network device according to a configured load sharing policy, for example token-bucket-based load sharing. In a specific implementation, the first network device sends the multiple data packets to the second network device through the first tunnel. In another specific implementation, the first network device sends the multiple data packets through the second tunnel. In another specific implementation, the first network device sends a first part of the multiple data packets through the first tunnel and a second part of the multiple data packets through the second tunnel.
S102. The second network device receives the multiple data packets sent by the first network device.
In a specific implementation, the second network device receives the multiple data packets through the first tunnel. In another specific implementation, the second network device receives the multiple data packets through the second tunnel. In another specific implementation, the second network device receives, through the first tunnel, the first part of the multiple data packets sent by the first network device, and receives, through the second tunnel, the second part of the multiple data packets sent by the first network device.
S103. The second network device obtains information about the usage of the cache space of the binding tunnel reordering cache.
Specifically, the usage of the cache space of the binding tunnel reordering cache may include, for example, the size of the used cache space in the binding tunnel reordering cache or the size of the available cache space in the binding tunnel reordering cache.
In a specific implementation, when the usage of the cache space of the binding tunnel reordering cache includes the size of the used cache space, the information about the usage includes the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, or the number of used cache slices in the binding tunnel reordering cache. When the usage includes the size of the available cache space, the information about the usage includes the available packet-queue length in the binding tunnel reordering cache or the number of available cache slices in the binding tunnel reordering cache.
In another specific implementation, the information about the usage of the cache space of the binding tunnel reordering cache includes a single tunnel acknowledgment number and a binding tunnel acknowledgment sequence number. A single tunnel in this application refers to each tunnel that makes up the binding tunnel, such as the first tunnel or the second tunnel in this embodiment. For the specific meanings of the single tunnel acknowledgment number and the binding tunnel acknowledgment number, and how they are determined, see the detailed description below.
S104. The second network device sends an acknowledgment response to the first network device.
Specifically, the acknowledgment response includes the information about the usage of the cache space of the binding tunnel reordering cache. The information is used by the first network device to determine the usage of the cache space of the binding tunnel reordering cache and to perform, according to that usage and a configured load sharing policy, load sharing between the first tunnel and the second tunnel for the packets transmitted by the first network device to the second network device. How the acknowledgment response is sent is described in detail below.
The configured load sharing policy includes, but is not limited to: after the first network device determines that the size of the used cache space in the binding tunnel reordering cache is greater than or equal to a first threshold, or that the size of the available cache space in the binding tunnel reordering cache is less than or equal to a second threshold, it selects, from the first tunnel and the second tunnel, the tunnel with the smaller round-trip delay RTT, or the tunnel with fewer packets not yet correctly sorted, to transmit the packets sent by the first network device to the second network device.
In a specific implementation, the second network device returns an acknowledgment response each time it receives a packet. In another specific implementation, the second network device may be configured to return the acknowledgment response periodically at a certain time interval. In another specific implementation, the second network device may also send the acknowledgment response when it receives a request sent by the first network device or when a configured alert state is reached; the configured alert state includes, but is not limited to, the size of the used cache space of the binding tunnel reordering cache being greater than or equal to a set threshold, or the size of the available cache space of the binding tunnel reordering cache being less than or equal to a set threshold. This is not specifically limited in this application.
S105. The first network device receives the acknowledgment response sent by the second network device.
S106. The first network device determines, according to the acknowledgment response, the usage of the cache space of the binding tunnel reordering cache.
How the usage of the cache space of the binding tunnel reordering cache is determined according to the acknowledgment response is described in detail below.
S107. The first network device performs load sharing between the first tunnel and the second tunnel, for the packets transmitted by the first network device to the second network device, according to the usage of the cache space of the binding tunnel reordering cache and the configured load sharing policy.
How load sharing is performed between the first tunnel and the second tunnel for the packets transmitted by the first network device to the second network device according to the usage of the cache space of the binding tunnel reordering cache and the configured load sharing policy is described in detail below.
In the foregoing solution, the usage of the cache space of the binding tunnel reordering cache is determined from the acknowledgment response returned by the second network device, and the packets transmitted by the first network device to the second network device are dynamically load-shared between the first tunnel and the second tunnel according to that usage and the configured load sharing policy, which can effectively improve the transmission efficiency of the bonded tunnel.
The following describes in detail how the acknowledgment response involved in S104 and S105 is sent and how the usage of the cache space of the binding tunnel reordering cache is determined according to the acknowledgment response in S106.
In a specific implementation, the first network device receiving the acknowledgment response sent by the second network device specifically includes: the second network device sends the acknowledgment response to the first network device through the first tunnel or the second tunnel, and the first network device correspondingly receives the acknowledgment response returned by the second network device through the first tunnel or the second tunnel. The acknowledgment response is a generic routing encapsulation GRE data packet that includes a Bonding Reorder Buffer Size field, and the first network device determines the usage of the cache space of the binding tunnel reordering cache according to the content carried in the Bonding Reorder Buffer Size field. For example, the Bonding Reorder Buffer Size field may be carried in the GRE packet header and may be, for example, 32 bits.
Alternatively, the acknowledgment response is a GRE control message that includes an attribute type-length-value Attribute TLV field consisting of a type T field, a length L field, and a value V field, and the first network device determines the usage of the cache space of the binding tunnel reordering cache according to the content carried in the V field.
When the acknowledgment response is a GRE data packet, the content carried in the Bonding Reorder Buffer Size field includes the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the available packet-queue length in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
When the acknowledgment response is the GRE control message, the content carried in the V field includes the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the available packet-queue length in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache.
Specifically, the format of the Attribute TLV field in the GRE control message is as follows:
[Table of the original publication: Attribute TLV field format, consisting of an Attribute Type field, an Attribute Length field, and an Attribute Value field.]
In a specific implementation, the Attribute TLV field may be carried in, for example, a GRE Tunnel Notify message. The value of the Attribute Type may be, for example, 36, indicating that the space usage of the binding tunnel reordering cache is returned; the content carried in the Attribute Value is as described above and is not repeated here.
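For illustration, the following sketch encodes and decodes a single Attribute TLV carrying a reordering-cache usage value. The 2-byte type, 2-byte length, and 4-byte big-endian value widths are assumptions chosen only to demonstrate the type-length-value idea; the normative field widths are those of the GRE control message format referenced above.

```python
import struct

REORDER_BUFFER_USAGE = 36  # Attribute Type value given in the text above

def encode_attribute_tlv(attr_type: int, value: int) -> bytes:
    """Encode one Attribute TLV. The field widths used here are illustrative
    assumptions, not the normative layout of the GRE control message."""
    payload = struct.pack("!I", value)                      # 4-byte Attribute Value
    return struct.pack("!HH", attr_type, len(payload)) + payload

def decode_attribute_tlv(data: bytes):
    attr_type, length = struct.unpack("!HH", data[:4])      # Attribute Type, Attribute Length
    (value,) = struct.unpack("!I", data[4:4 + length])      # Attribute Value
    return attr_type, value

tlv = encode_attribute_tlv(REORDER_BUFFER_USAGE, 1518)  # e.g. packet-queue length in bytes
print(decode_attribute_tlv(tlv))  # (36, 1518)
```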
It should be noted that the GRE data packets or control messages described in this application, and the fields and formats therein, are merely examples and do not limit the invention. On the basis of this application, a person skilled in the art may conceive of using other packets, or other fields or formats of GRE data packets or control messages, to carry the number of packets in the binding tunnel reordering cache, the length of the packet queue in the binding tunnel reordering cache, the available packet-queue length in the binding tunnel reordering cache, the number of used cache slices in the binding tunnel reordering cache, or the number of available cache slices in the binding tunnel reordering cache; all of these fall within the intended scope of this application and are not enumerated here.
In a specific implementation, the first network device determines the usage of the cache space of the binding tunnel reordering cache according to the length of the packet queue in the binding tunnel reordering cache. For example, if the smallest unit of cache space is one byte and the packet queue in the binding tunnel reordering cache is 1518 bytes long, the used cache space of the binding tunnel reordering cache is 1518 bytes; subtracting the existing packet-queue length from the maximum queue length configured for the binding tunnel reordering cache gives the size of the available cache space. In this case, a set number of bytes can be used as the threshold on which further load sharing is based. Optionally, the acknowledgment response may directly return the length of the packet queue in the binding tunnel reordering cache, that is, the size of the used cache space; after receiving it, the first network device may use the used size directly as the basis for determining the further load sharing policy, or may derive the available size from the used size for that purpose. Optionally, the acknowledgment response may instead directly return the size of the available cache space, that is, the available packet-queue length in the binding tunnel reordering cache; the first network device may then use the available size directly, or derive the used size from it, as the basis for determining the further load sharing policy.
In another specific implementation, the first network device may determine the usage of the cache space of the binding tunnel reordering cache according to the number of used or available cache slices in the binding tunnel reordering cache. For example, the smallest unit of cache space is one cache slice: the cache resources of the binding tunnel reordering cache are divided into multiple slices, each of which may have a fixed size, for example 256 bytes. When the binding tunnel reordering cache stores packets, one or more slices are allocated to each packet according to its length. With a slice size of 256 bytes, when the packet queue in the binding tunnel reordering cache is 1518 bytes long, the used cache space is 6 cache slices; assuming the cache space of the binding tunnel reordering cache is divided into 20 cache slices, the available cache space is 14 cache slices. In this case, a set number of cache slices can be used as the threshold. Optionally, the acknowledgment response may directly return the number of used cache slices (that is, the size of the used cache space) or the number of available cache slices (that is, the size of the available cache space); after receiving it, the first network device may use the returned size directly, or derive the other size from it, as the basis for determining the further load sharing policy.
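The byte-to-slice arithmetic in the example above (a 1518-byte packet queue occupying 6 of 20 slices of 256 bytes each) can be written as a short helper; the constant and function names are illustrative only.

```python
import math

SLICE_SIZE = 256        # bytes per cache slice, as in the example above
TOTAL_SLICES = 20       # total slices configured for the reordering cache

def used_slices(queue_length_bytes: int) -> int:
    """Number of cache slices occupied by a packet queue of the given length."""
    return math.ceil(queue_length_bytes / SLICE_SIZE)

used = used_slices(1518)              # 1518-byte queue -> 6 slices
available = TOTAL_SLICES - used       # 20 - 6 = 14 slices still available
print(used, available)                # 6 14
```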
在另一个具体的实施方式中,所述第一网络设备还可以根据在绑定隧道重排序缓存中的报文数量,来确定绑定隧道重排序缓存的缓存空间的使用情况。需要说明的是,举例来说,缓存空间的最小单位为一个报文。绑定隧道重排序缓存配置最大可存储报文的数量为S个,在绑定隧道重排序缓存中排队的报文的数量为T个,此时,该绑定隧道重排序缓存已用的缓存空间的大小为T个报文。这种情况下,可以使用设定的报文数量作为设定门限值。S大于1,T大于0。可选的,在所述确认响应中,可以直接返回所述绑定隧道重排序缓存中的报文数量,即返回绑定隧道重排序缓存已用的缓存空间的大小。第一网络设备接收到所述确认响应后,可以直接基于绑定隧道重排序缓存已用的缓存空间的大小作为确定进一步负载分担策略的依据。第一网络设备也可以在接收到所述确认响应后,基于绑定隧道重排序缓存已用的缓存空间的大小获得所述绑定隧道重排序缓存可用的缓存空间的大小,以此作为确定进一步负载分担策略的依据。可选的,在所述确认响应中,可以直接返回绑定隧道重排序缓存可用的缓存空间的大小,即返回绑定隧道重排序缓存中可用的能够存储的报文数量。第一网络设备接收到所述确认响应后,可以直接基于绑定隧道重排序缓存可用的缓存空间的大小,或者通过绑定隧道重排序缓存可用的缓存空间的大小获得绑定隧道重排序缓存已用的缓存空间的大小,来作为确定进一步负载分担策略的依据。在本申请各实施方式中,所述“绑定隧道重排序缓存中的报文”指代绑定隧道重排序缓存中未完成正确排序的报文,已经完成排序的报文不算入绑定隧道重排序缓存中的报文。
上述实施方式中,通过根据所述绑定隧道重排序缓存中已用的缓存空间的大小或所述绑定隧道重排序缓存中可用的缓存空间的大小对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行动态的负载分担,可以有效地降低所述绑定隧道的网络时延,和/或能较为明显的抑制由于网络拥塞所可能导致的绑定隧道重排序缓存流量溢出,进而导致的报文丢包,触发应用层重传的问题。
在上述根据在绑定隧道重排序缓存中的报文数量,来确定绑定隧道重排序缓存的缓存空间的使用情况的实施方式中,在一种具体的实施方式中,还可以根据所述绑定隧道重排序缓存中未完成正确排序的报文数量,所述第一隧道中未完成正确排序的报文数量以及第二隧道中未完成正确排序的报文数量来确定绑定隧道重排序缓存中的报文数量。以下对根据各隧道中未完成正确排序的报文数量确定绑定隧道重排序缓存的使用情况进行具体说明。
需要说明的是,在本申请各实施例中,一个报文在通过第一网络设备进行负载分担并发送之后,在第二网络设备绑定隧道重排序缓存完成正确排序之前,报文可能存 在于以下几个位置中:所述绑定隧道中的各隧道(为了便于说明,这里用隧道i表示)的传输路径中,隧道i的重排序缓存中,以及绑定隧道重排序缓存中。i为大于或等于1的正整数,隧道1即指代本申请所述的第一隧道,隧道2即指代本申请所述的第二隧道,以此类推,不再赘述。在任意一个时刻下,存在于上述任一位置中的报文称之为绑定隧道中的“未完成正确排序的报文”。在隧道i中的正在传输的报文以及在隧道i的重排序缓存中的报文称之为隧道i中的“未完成正确排序的报文”。隧道i的重排序缓存中的报文指的是那些已进入隧道i的重排序缓存中但未完成正确排序的报文。举例来说,对于第一隧道而言,第一隧道中正在传输的报文以及在第一隧道重排序缓存中的报文的总数记为第一隧道中未完成正确排序的报文数量。第二隧道中正在传输的报文以及在第二隧道重排序缓存中的报文的总数记为第二隧道中未完成正确排序的报文数量。第一隧道的重排序缓存,在本申请中称之为第一隧道重排序缓存,第二隧道的重排序缓存,在本申请中称之为第二隧道重排序缓存。所述第一隧道重排序缓存用于对通过所述第一隧道传输的报文进行正确排序,经过所述第一隧道重排序缓存正确排序后的报文进入所述绑定隧道重排序缓存。同理,所述第二隧道重排序缓存用于对通过所述第二隧道传输的报文进行正确排序,经过所述第二隧道重排序缓存正确排序后的报文进入所述绑定隧道重排序缓存。所述绑定隧道重排序缓存用于对通过所述第一隧道和所述第二隧道传输的所有报文进行正确排序。
本实施方式中,根据所述绑定隧道重排序缓存中未完成正确排序的报文数量,所述第一隧道中未完成正确排序的报文数量以及第二隧道中未完成正确排序的报文数量确定绑定隧道重排序缓存中的报文数量,具体包括:所述第一网络设备根据所述确认响应确定所述第一隧道中未完成正确排序的报文数量F1,所述第二隧道中未完成正确排序的报文数量F2以及所述绑定隧道中未完成正确排序的报文数量FB,进而确定所述绑定隧道重排序缓存中的报文数量B,B=FB-F1-F2。在一个具体的实施方式中,所述第一网络设备可以直接基于确定的所述绑定隧道重排序缓存中的报文数量B,确定绑定隧道重排序缓存已用的缓存空间的大小,从而作为确定进一步负载分担策略的依据。第一网络设备也可以基于绑定隧道重排序缓存已用的缓存空间的大小获得所述绑定隧道重排序缓存可用的缓存空间的大小,以此作为确定进一步负载分担策略的依据。
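上述计算关系可以用如下一小段示意性的Python代码表示，仅用于说明B=FB-F1-F2这一关系，函数名为说明目的而假设。

```python
def bonding_buffer_packet_count(f_bonding: int, f_tunnel1: int, f_tunnel2: int) -> int:
    """绑定隧道重排序缓存中的报文数量 B = FB - F1 - F2。

    f_bonding: 绑定隧道中未完成正确排序的报文数量 FB
    f_tunnel1: 第一隧道中未完成正确排序的报文数量 F1
    f_tunnel2: 第二隧道中未完成正确排序的报文数量 F2
    """
    return f_bonding - f_tunnel1 - f_tunnel2

print(bonding_buffer_packet_count(5, 2, 2))  # 输出 1
```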
具体来说,所述第二网络设备还包括上述第一隧道重排序缓存和第二隧道重排序缓存。所述第一网络设备向所述第二网络设备发送所述多个数据报文,具体包括:所述第一网络设备通过所述第一隧道向所述第二网络设备发送所述多个数据报文中的第一部分数据报文;所述第一网络设备通过所述第二隧道向所述第二网络设备发送所述多个数据报文中的第二部分数据报文。
所述第二网络设备向所述第一网络设备发送针对所述第一部分数据报文的第一确认响应,所述第一网络设备根据所述第一确认响应确定已经进入上述第一隧道重排序缓存并完成正确排序的报文数量以及已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量。在一个具体实施方式中,所述第一网络设备可以根据所述第一确认响应携带的已经进入所述第一隧道重排序缓存并完成正确排序的报文的报文序号确定已经进入第一隧道重排序缓存并完成正确排序的报文数量,以及根据所述第一确认响应携带的已经进入所述绑定隧道重排序缓存并完成正确排序的报文的报文序号确定已经 进入所述绑定隧道重排序缓存并完成正确排序的报文数量M。一个具体的实施方式中,已经进入所述第一隧道重排序缓存并完成正确排序的报文的报文序号可以为发送所述第一确认响应时或之前获取的最新的已经进入所述第一隧道重排序缓存并完成正确排序的报文的报文序号。同理,其中已经进入所述绑定隧道重排序缓存并完成正确排序的报文的报文序号可以为发送所述第一确认响应之前获取的最新的已经进入所述绑定隧道重排序缓存并完成正确排序的报文的报文序号。本领域技术人员在阅读本申请实施例的基础上可以容易想到,也可以通过在所述第一确认响应中直接携带所述已经进入所述第一隧道重排序缓存并完成正确排序的报文的报文数量。此外,已经进入所述第一隧道重排序缓存并完成正确排序的报文的报文序号或数量也可以以设定时间或期限内获取的最新的已经进入所述第一隧道重排序缓存并完成正确排序的报文的报文序号或数量。同理,上述这些实施方式也可以适用于确定已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量M。在确定所述已经进入所述第一隧道重排序缓存并完成正确排序的报文数量基础上,结合所述第一网络设备根据通过所述第一隧道向所述第二网络设备发送的第一部分数据报文的数量即可获得所述第一隧道中未完成正确排序的报文数量F1。具体而言,可以将所述第一网络设备向所述第二网络设备发送的第一部分数据报文的数量减去所述已经进入所述第一隧道重排序缓存并完成正确排序的报文数量得到所述第一隧道中未完成正确排序的报文数量F1
同理，所述第二网络设备向所述第一网络设备发送针对所述第二部分数据报文的第二确认响应，所述第一网络设备根据所述第二确认响应确定已经进入第二隧道重排序缓存并完成正确排序的报文数量以及所述已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量N。具体的实施方式与上述第一网络设备根据所述第一确认响应确定已经进入第一隧道重排序缓存并完成正确排序的报文数量以及所述已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量M相似，这里不再赘述。在确定所述已经进入所述第二隧道重排序缓存并完成正确排序的报文数量基础上，结合所述第一网络设备根据通过所述第二隧道向所述第二网络设备发送的第二部分数据报文的数量即可获得所述第二隧道中未完成正确排序的报文数量F2，具体的实施方式，参考上述确定所述第一隧道中未完成正确排序的报文数量F1的方式，这里不再赘述。
所述第一网络设备根据所述第一确认响应中携带的已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量M和所述第二确认响应中携带的已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量N中的较大值以及所述第一网络设备所发送的所述多个数据报文的数量确定所述绑定隧道重排序缓存中未完成正确排序的报文数量FB。具体而言,可以将所述第一网络设备所发送的所述多个数据报文的数量减去所述第一确认响应携带的已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量M和所述第二确认响应中携带的已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量N中的较大值得到所述绑定隧道中未完成正确排序的报文数量FB
可以理解，所述第一网络设备也可以根据所述第一确认响应中的已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量M以及所述第一网络设备所发送的所述多个数据报文的数量得到所述绑定隧道重排序缓存中未完成正确排序的报文数量（为了便于说明，这里称之为绑定隧道重排序缓存中未完成正确排序的报文数量第一值）以及所述第二确认响应中的已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量N以及所述第一网络设备所发送的所述多个数据报文的数量得到所述绑定隧道重排序缓存中未完成正确排序的报文数量（为了便于说明，这里称之为绑定隧道重排序缓存中未完成正确排序的报文数量第二值），然后比较所述绑定隧道重排序缓存中未完成正确排序的报文数量第一值和所述绑定隧道重排序缓存中未完成正确排序的报文数量第二值，以其中的较小值确定为所述绑定隧道中未完成正确排序的报文数量FB。
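上述两种等价的计算方式可参考如下示意性的Python代码草图；其中total_sent表示所述第一网络设备所发送的所述多个数据报文的数量，m、n分别表示两个确认响应所指示的已进入绑定隧道重排序缓存并完成正确排序的报文数量，函数名为说明目的而假设。

```python
def bonding_unsorted_count(total_sent: int, m: int, n: int) -> int:
    """方式一：用发送的报文总数减去 M 与 N 中的较大值，得到绑定隧道中未完成正确排序的报文数量 FB。"""
    return total_sent - max(m, n)

def bonding_unsorted_count_v2(total_sent: int, m: int, n: int) -> int:
    """方式二：分别得到两个候选值 total_sent-M 与 total_sent-N，取其中较小者，结果与方式一等价。"""
    return min(total_sent - m, total_sent - n)

print(bonding_unsorted_count(6, 1, 1), bonding_unsorted_count_v2(6, 1, 1))  # 输出 5 5
```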
以下具体说明所述第一网络设备如何根据所述第一确认响应携带的已经进入所述第一隧道重排序缓存并完成正确排序的报文的报文序号确定已经进入第一隧道重排序缓存并完成正确排序的报文数量,以及如何根据所述第一确认响应携带的已经进入所述绑定隧道重排序缓存并完成正确排序的报文的报文序号确定已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量M。
在一个具体的实施方式中,所述第一部分数据报文中的每个报文包括该报文的绑定隧道序号以及第一隧道序号。所述第一隧道序号用于表示所述第一部分数据报文中的每个报文在所述第一隧道中的序号,即所述第一部分数据报文中的每个报文在所述第一隧道中的传输顺序。第一隧道中传输的报文可能没有按照所述第一隧道序号所指示的传输顺序到达所述第一隧道重排序缓存,所述第一隧道重排序缓存需要根据所述每个报文的第一隧道序号进行排序,完成正确排序后的报文从所述第一隧道重排序缓存进入到所述绑定隧道重排序缓存。所述第一部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的序号,即该报文在所述绑定隧道中传输顺序。报文可能没有按照所述绑定隧道序号所指示的传输顺序到达所述绑定隧道重排序缓存,所述绑定隧道重排序缓存需要根据所述每个报文的绑定隧道序号进行排序,完成正确排序后的报文从所述绑定隧道重排序缓存发送到网络中的其他设备。
所述第二部分数据报文中的每个报文包括该报文的绑定隧道序号以及第二隧道序号。同理,所述第二隧道序号用于表示所述第二部分数据报文中的每个报文在所述第二隧道中的序号,即所述第二部分数据报文中的每个报文在所述第二隧道中的传输顺序。所述第二部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的序号,即该报文在所述绑定隧道中的传输顺序。
所述第二网络设备确定已进入所述第一隧道重排序缓存并完成正确排序的报文中所携带的第一隧道序号,具体的可以按照发送所述确认响应时或之前或在设定时间或期限内获得的最新的进入所述第一隧道重排序缓存并完成正确排序的报文中所携带的第一隧道序号,并将其作为第一隧道确认序号携带在所述第一确认响应中发给所述第一网络设备。需要说明的是,这里的第一隧道确认序号可以是发送所述确认响应时或之前或在设定时间或期限内获得的最新的进入所述第一隧道重排序缓存并完成正确排序的报文中所携带的第一隧道序号,也可以是与该第一隧道序号有映射关系的序号,具体参考下面的例子说明。
同理,所述第二网络设备确定已进入所述绑定隧道重排序缓存并完成正确排序的报文中所携带的绑定隧道序号,具体的可以按照发送所述确认响应时或之前或在设定时间或期限内获得的最新的进入所述绑定隧道重排序缓存并完成正确排序的报文中所携带的绑定隧道序号,并将其携带在所述第一确认响应中发给所述第一网络设备。为 了便于描述,本申请实施例将携带在所述第一确认响应中的最新进入所述绑定隧道重排序缓存并完成正确排序的报文中所携带的绑定隧道序号称之为绑定隧道确认序号。
所述第一网络设备根据所述第一确认响应中的所述第一隧道确认序号即可确定已经进入第一隧道重排序缓存并完成正确排序的报文数量。一个具体的实施方式中，在第一隧道序号从1开始连续编号的情况下，所述第一隧道确认序号即指示已经进入第一隧道重排序缓存并完成正确排序的报文数量；相应地，将所述第一网络设备向所述第二网络设备发送的第一部分数据报文中的最大的第一隧道序号减去所述第一确认响应中携带的所述第一隧道确认序号，即可得到所述第一隧道中未完成正确排序的报文数量。
同理，所述第一网络设备根据所述第一确认响应中的所述绑定隧道确认序号即可确定已经进入绑定隧道重排序缓存并完成正确排序的报文数量。一个具体的实施方式中，在绑定隧道序号从1开始连续编号的情况下，所述绑定隧道确认序号即指示已经进入绑定隧道重排序缓存并完成正确排序的报文数量；相应地，将所述第一网络设备向所述第二网络设备发送的所述多个数据报文中的最大的绑定隧道序号减去所述第一确认响应中的所述绑定隧道确认序号，即可得到所述绑定隧道中未完成正确排序的报文数量。
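结合上文的说明，“由确认序号推算未完成正确排序的报文数量”的计算方式可用如下示意性的Python代码概括；代码假设各类序号均从1开始连续编号（与下文图4示例一致），函数名为说明目的而假设。

```python
def unsorted_in_tunnel(max_seq_sent: int, ack_seq: int) -> int:
    """单隧道中未完成正确排序的报文数量 = 已发送报文的最大隧道序号 - 该隧道的确认序号。"""
    return max_seq_sent - ack_seq

def unsorted_in_bonding(max_bonding_seq_sent: int, bonding_ack_seq: int) -> int:
    """绑定隧道中未完成正确排序的报文数量 = 已发送报文的最大绑定隧道序号 - 绑定隧道确认序号。"""
    return max_bonding_seq_sent - bonding_ack_seq

# 与下文图4示例一致：第一隧道 3-1=2，绑定隧道 6-1=5
print(unsorted_in_tunnel(3, 1), unsorted_in_bonding(6, 1))
```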
所述第二网络设备确定已进入所述第二隧道重排序缓存并完成正确排序的报文中所携带的第二隧道序号，具体的可以按照发送所述确认响应时或之前或在设定时间或期限内获得的最新的进入所述第二隧道重排序缓存并完成正确排序的报文中所携带的第二隧道序号，并将其携带在所述第二确认响应中发给所述第一网络设备。为了便于描述，本申请实施例将携带在所述第二确认响应中的最新进入所述第二隧道重排序缓存并完成正确排序的报文中所携带的第二隧道序号称之为第二隧道确认序号。
同理,所述第二网络设备确定已进入所述绑定隧道重排序缓存并完成正确排序的报文中所携带的绑定隧道序号,具体的可以按照发送所述确认响应时或之前或在设定时间或期限内获得的最新的进入所述绑定隧道重排序缓存并完成正确排序的报文中所携带的绑定隧道序号,并将其携带在所述第二确认响应中发给所述第一网络设备。为了便于描述,本申请实施例将携带在所述第二确认响应中的最新进入所述绑定隧道重排序缓存并完成正确排序的报文中所携带的绑定隧道序号称之为绑定隧道确认序号。
所述第一网络设备根据所述第二确认响应中的所述第二隧道确认序号即可确定已经进入第二隧道重排序缓存并完成正确排序的报文数量。一个具体的实施方式中，在第二隧道序号从1开始连续编号的情况下，所述第二隧道确认序号即指示已经进入第二隧道重排序缓存并完成正确排序的报文数量；相应地，将所述第一网络设备向所述第二网络设备发送的第二部分数据报文中的最大的第二隧道序号减去所述第二确认响应中携带的所述第二隧道确认序号，即可得到所述第二隧道中未完成正确排序的报文数量。
同理，所述第一网络设备根据所述第二确认响应中的所述绑定隧道确认序号即可确定已经进入绑定隧道重排序缓存并完成正确排序的报文数量。一个具体的实施方式中，在绑定隧道序号从1开始连续编号的情况下，所述绑定隧道确认序号即指示已经进入绑定隧道重排序缓存并完成正确排序的报文数量；相应地，将所述第一网络设备向所述第二网络设备发送的所述多个数据报文中的最大的绑定隧道序号减去所述第二确认响应中的所述绑定隧道确认序号，即可得到所述绑定隧道中未完成正确排序的报文数量。
下面对单隧道序号与单隧道确认序号的映射关系，绑定隧道序号与绑定隧道确认序号的映射关系，进行具体说明。在本申请实施例中，单隧道是第一隧道时，单隧道序号指代第一隧道序号。单隧道是第二隧道时，单隧道序号指代第二隧道序号。所述第二网络设备中可以保存一个映射关系表，所述映射关系表用于保存第一隧道确认序号与第一隧道序号之间的映射关系，第二隧道确认序号与第二隧道序号之间的映射关系以及绑定隧道确认序号与绑定隧道序号之间的映射关系。相应的，所述第一网络设备中也保存有所述映射关系。所述映射关系可以通过如下方式建立：例如，第一隧道序号为阿拉伯数字1，第一隧道确认序号为与数字1相映射的字母A。需要说明的是，上述映射关系的建立方式仅是例举，具体可以通过多种不同的方式实现，本领域技术人员可以想到的任何用来建立这种对应关系的手段都覆盖在本申请实施例中的映射规则中。映射关系表的具体形式可以以多种不同的方式实现，可以以表格的形式，也可以以其他的方式表达该对应关系，本申请对此不做限定。
需要说明的是,以上实施方式中,所述第一网络设备向所述第二网络设备发送的所述多个数据报文中的每个报文包括了两种序号,即该报文的绑定隧道序号以及第一隧道序号,在另一个具体实施方式中,也可以在每个报文只包括绑定隧道序号而不包括各单隧道的隧道序号。在这种情况下,所述第二网络设备可以按照发送所述确认响应时或之前或在设定时间或期限内获得的最新的进入所述第一隧道重排序缓存并完成正确排序的报文中所携带的所述绑定隧道序号,并将其携带在所述第一确认响应中发给所述第一网络设备。第一网络设备根据所述第一确认响应中的所述绑定隧道序号以及所述第一网络设备向所述第二网络设备发送的第一部分数据报文中各数据报文的绑定隧道序号确定所述已经进入第一隧道重排序缓存并完成正确排序的报文数量。同理,第一网络设备根据所述第二确认响应中的所述绑定隧道序号以及所述第一网络设备向所述第二网络设备发送的第二部分数据报文中各数据报文的绑定隧道序号确定所述已经进入第二隧道重排序缓存并完成正确排序的报文数量。
在一个具体的实施方式中,所述第一部分数据报文中的每个报文包括用于承载所述第一隧道序号的序号Sequence Number字段和用于承载所述绑定隧道序号的绑定序号Bonding Sequence Number字段。所述第二部分数据报文中的每个报文包括用于承载所述第二隧道序号的序号Sequence Number字段和用于承载所述绑定隧道序号的绑定序号Bonding Sequence Number字段。在一个具体的实施方式中,所述Sequence Number字段和所述Bonding Sequence Number字段通过GRE数据报文承载。例如,可以在GRE报文头携带Sequence Number字段和Bonding Sequence Number字段。所述Sequence Number字段和Bonding Sequence Number字段例如可以是32比特。
所述第一确认响应中包括确认号Acknowledgment Number字段和绑定确认号Bonding Acknowledgment Number字段,所述Acknowledgment Number字段用于承载所述第一隧道确认序号,所述Bonding Acknowledgment Number字段用于承载隧道绑定确认序号。
在一个具体的实施方式中,所述第一确认响应为GRE数据报文,所述Acknowledgment Number字段和所述Bonding Acknowledgment Number字段通过GRE数据报文承载。例如,可以在GRE数据报文的报文头携带所述Acknowledgment Number字段和所述Bonding Acknowledgment Number字段。所述Acknowledgment Number字段和所述Bonding Acknowledgment Number字段例如可以是32比特。
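为直观说明上述各32比特字段的携带方式，下面给出一段示意性的Python封装草图；其中字段的排列顺序以及报文头的其余部分均为本文为说明目的而假设的简化形式，并非实际GRE报文头的完整定义。

```python
import struct

def pack_data_fields(seq: int, bonding_seq: int) -> bytes:
    """数据报文中携带的 Sequence Number 与 Bonding Sequence Number（各按32比特封装，顺序为假设）。"""
    return struct.pack("!II", seq, bonding_seq)

def pack_ack_fields(ack: int, bonding_ack: int) -> bytes:
    """确认响应中携带的 Acknowledgment Number 与 Bonding Acknowledgment Number（各按32比特封装，顺序为假设）。"""
    return struct.pack("!II", ack, bonding_ack)

def unpack_two_u32(data: bytes):
    """按网络字节序解出两个32比特无符号整数。"""
    return struct.unpack("!II", data[:8])

print(unpack_two_u32(pack_data_fields(3, 6)))  # (3, 6)：第一隧道序号3，绑定隧道序号6
print(unpack_two_u32(pack_ack_fields(1, 1)))   # (1, 1)：第一隧道确认序号1，绑定隧道确认序号1
```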
在另一个具体的实施方式中，所述第一确认响应为GRE控制报文，采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第一隧道确认序号和所述绑定隧道确认序号。该GRE控制报文中的Attribute TLV字段的封装格式如下所示：
（此处原文为Attribute TLV字段封装格式示意图，示出属性类型、属性长度与属性值各字段的排布）
在一个具体的实施方式中，Attribute TLV字段可以承载于GRE Tunnel Notify报文中。属性类型Attribute Type的取值例如可以是37，属性值Attribute Value包括所述第一隧道确认序号以及所述绑定隧道确认序号。
在一个具体的实施方式中,该GRE控制报文中的Attribute TLV字段的格式,可以如下所示:
（此处原文为该GRE控制报文中Attribute TLV字段格式示意图）
所述第二确认响应中包括确认号Acknowledgment Number字段和绑定确认号Bonding Acknowledgment Number字段,所述Acknowledgment Number字段用于承载所述第二隧道确认序号,所述Bonding Acknowledgment Number字段用于承载隧道绑定确认序号。在一个具体的实施方式中,所述第二确认响应为GRE数据报文,所述Acknowledgment Number字段和所述Bonding Acknowledgment Number字段通过GRE数据报文承载。例如,可以在GRE数据报文的报文头携带所述Acknowledgment Number字段和所述Bonding Acknowledgment Number字段。所述Acknowledgment Number字段和所述Bonding Acknowledgment Number字段例如可以是32比特。
在另一个具体的实施方式中，所述第二确认响应为GRE控制报文，采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第二隧道确认序号和所述绑定隧道确认序号。第二确认响应中Attribute TLV字段的封装格式，参见上文对于第一确认响应的描述，此处不再赘述。
通过本申请提供的上述方案,利用GRE数据报文所携带的Sequence Number字段,Bonding Sequence Number字段,Acknowledgment Number字段以及Bonding Acknowledgment Number字段,或者GRE控制报文所携带的Attribute TLV字段,无 需对现有协议进行扩展,即可根据上述字段携带的内容,确定绑定隧道重排序缓存中的报文数量。
需要说明的是，上述的GRE数据报文或控制报文，及其中的字段或格式仅是示例性说明，不构成对本发明的限定。本领域技术人员在阅读本申请文件的基础上可以想到采用其他报文或者GRE数据报文或控制报文的其他字段或格式以携带上述实施方式中的第一隧道序号，第一隧道确认序号，绑定隧道序号，第二隧道序号，第二隧道确认序号以及绑定隧道确认序号。
下面结合图4,对本申请中,根据所述绑定隧道重排序缓存中未完成正确排序的报文数量,所述第一隧道中未完成正确排序的报文数量以及第二隧道中未完成正确排序的报文数量来确定绑定隧道重排序缓存已用的缓存空间的大小的方法进行举例说明。
在时刻1,第一网络设备接收到6个报文。第一网络设备向第二网络设备发送6个数据报文,每个数据报文携带一个基于绑定隧道的绑定隧道序号。如图4中所示,第一个数据报文的绑定隧道序号为1,第二数据报文的绑定隧道序号为2,以此类推,第6个数据报文的绑定序号为6。将该6个报文分别简称为报文1,报文2,报文3,报文4,报文5以及报文6。
在时刻2,按照设定的负载分担策略,例如,基于令牌桶机制决定将报文1至报文3分配至第一隧道进行传输,将报文4至报文6通过第二隧道传输。随后,第一网络设备为报文1至报文3分别分配一个基于第一隧道的第一隧道序号,该3个报文的第一隧道序号分别为1,2,3。第一网络设备为报文4至报文6分别分配一个基于第二隧道的第二隧道序号,该3个报文的第二隧道序号分别为1,2,3。
第一网络设备通过第一隧道和第二隧道发送上述报文时,会分别记录第一隧道序号,第二隧道序号以及绑定隧道序号。所述第一网络设备可以以表格的形式保存上述信息,也可以以其他形式保存上述信息,保存的形式不限。
结合图4,首先,第一网络设备通过第一隧道发送报文1,会记录报文1的第一隧道序号以及绑定隧道序号,即发送报文1后,记录的第一隧道序号为1,绑定隧道序号为1。随后,第一网络设备通过第一隧道发送报文2,则第一网络设备会记录报文2的第一隧道序号以及绑定隧道序号,即发送报文2后,记录的第一隧道序号为2,绑定隧道序号为2。以此类推,直到第一网络设备通过第二隧道发送报文6时,第一网络设备的记录结果为:第一隧道序号为3,第二隧道序号为3,绑定隧道序号为6。
在本示例中,第一网络设备向第二网络设备发送一个报文,会记录该报文所对应的单隧道序号以及绑定隧道序号。在一个具体的实施方式中,第一网络设备中可以保存一个记录报文所对应的单隧道序号以及绑定隧道序号的列表,每条表项用于记录第一隧道序号,第二隧道序号以及绑定隧道序号。在一个具体的实施方式中,第一网络设备也可以保存两个列表,其中一个列表用于记录第一隧道序号和绑定隧道序号的对应关系,另外一个列表用于记录第二隧道序号和绑定隧道序号的对应关系。本申请对于第一网络设备如何保存报文所对应的单隧道序号以及绑定隧道序号的形式不做具体限制。通过第一隧道发送的报文的单隧道序号是第一隧道序号,通过第二隧道发送的报文的单隧道序号是第二隧道序号。
在一个具体的实施方式中,第一网络设备在发送当前报文时,可以选择在上一条 表项记录的基础上直接更新得到当前发送的报文所对应的表项记录。结合图4,第一网络设备发送报文1时,表项记录为第一隧道序号为1,绑定隧道序号为1,当发送报文2时,发送报文1时的表项记录即为上一条表项记录,第一网络设备将第一隧道序号直接更新为2,绑定隧道序号直接更新为2。可选的,第一网络设备可以通过新建一条表项,用于记录当前报文所对应的单隧道序号以及绑定隧道序号,同时删除上一条表项记录。可选的,第一网络设备还可以通过新建一条表项,记录当前报文所对应的单隧道序号以及绑定隧道序号,并保留上一条表项记录或者待一定老化时间到达后自动老化上一条表项记录,本申请对此不作限制。
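结合上文对记录方式的描述，下面给出一段示意性的Python代码草图，示意第一网络设备在发送报文时如何按“在上一条表项记录基础上直接更新”的方式维护单隧道序号与绑定隧道序号；其中的数据结构与函数名均为说明目的而假设。

```python
class SenderRecord:
    """记录第一网络设备最近一次在各隧道上发送的序号（对应“直接更新上一条表项”的方式）。"""

    def __init__(self):
        self.bonding_seq = 0              # 最近发送的绑定隧道序号
        self.tunnel_seq = {1: 0, 2: 0}    # 各单隧道最近发送的隧道序号（1：第一隧道，2：第二隧道）

    def on_send(self, tunnel_id: int):
        """发送一个报文：绑定隧道序号与所选隧道的单隧道序号各自加1，并返回本次使用的两个序号。"""
        self.bonding_seq += 1
        self.tunnel_seq[tunnel_id] += 1
        return self.tunnel_seq[tunnel_id], self.bonding_seq

record = SenderRecord()
for tunnel in (1, 1, 1, 2, 2, 2):   # 对应图4示例：报文1~3走第一隧道，报文4~6走第二隧道
    record.on_send(tunnel)
print(record.tunnel_seq, record.bonding_seq)  # {1: 3, 2: 3} 6
```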
时刻3，第二网络设备接收所述第一网络设备发送的所述6个报文。其中，由于传输过程中，报文可能需要经过多次路由转发，当其中任一路由器发生拥塞、丢包时，会造成报文乱序，或者在传输过程中，报文的物理信号受到干扰，也可能造成丢包、乱序。基于上述可能的原因，第一隧道中，报文1和报文3先到达第二网络设备，而报文2还在第一隧道的传输路径中。同理，第二隧道中，报文4和报文6先到达第二网络设备，而报文5还在第二隧道的传输路径中。
由此,结合图4,在时刻3中,第二网络设备通过所述第一隧道接收到报文1和报文3,由于报文1携带的第一隧道序号为1,因此,认为报文1在第一隧道重排序缓存中完成正确排序,从第一隧道重排序缓存进入绑定隧道重排序缓存,从绑定隧道重排序缓存输出到其它的网络设备。报文3比报文2先到达第一隧道重排序缓存,因此,需要等待报文2到达第一隧道重排序缓存,完成正确排序,才可以进入绑定隧道重排序缓存。第二网络设备通过所述第二隧道接收到报文4和报文6,报文4携带的第二隧道序号为1,因此,认为报文4在第二隧道重排序缓存中完成正确排序,进入绑定隧道重排序缓存中。报文4比报文2和报文3先到达绑定隧道重排序缓存,因此,需要等待报文2和报文3到达绑定隧道重排序缓存中,才可以完成正确排序。
由此,在本示例中,第二网络设备在回复给第一网络设备的第一确认响应中,将该完成正确排序的报文1的第一隧道序号1作为第一隧道确认序号回复给第一网络设备,将当前时刻进入绑定隧道重排序缓存且已经完成正确排序的报文的所携带的绑定隧道序号作为绑定隧道确认序号回复给第一网络设备。第二网络设备在回复给第一网络设备的第二确认响应中,将完成正确排序的报文4的第二隧道序号作为第二隧道确认序号回复给第一网络设备,将当前时刻进入绑定隧道重排序缓存且已经完成正确排序的报文的所携带的绑定隧道序号作为绑定隧道确认序号回复给第一网络设备。由于时刻3中,绑定隧道重排序缓存中仅有报文1完成正确排序,因此,在第二网络设备回复给第一网络设备的第一确认响应和第二确认响应中,都是将报文1的绑定隧道序号作为绑定隧道确认序号回复给第一网络设备。
对应于图4的场景，所述第一网络设备通过第一隧道发送的报文的数量是3，第一网络设备记录离接收所述第一确认响应最近发送的报文的第一隧道序号为3；根据接收到的所述第一确认响应，确定第一隧道确认序号为1。所述第一网络设备通过第二隧道发送的报文数量为3，第一网络设备记录离接收所述第二确认响应最近发送的报文的第二隧道序号为3；根据接收到的所述第二确认响应，确定所述第二隧道确认序号为1。第一网络设备通过绑定隧道发送的总的报文数量为6，记录的绑定隧道序号为6；根据第一确认响应和第二确认响应确定返回的绑定隧道确认序号的最大值为1。由此，可以计算出：第一隧道中未完成正确排序的报文数量F1=3-1=2，第二隧道中未完成正确排序的报文数量F2=3-1=2，绑定隧道中未完成正确排序的报文数量FB=6-1=5，因此，绑定隧道重排序缓存中报文的数量B=FB-F1-F2=5-2-2=1。由图4可知，目前绑定隧道重排序缓存中确实只有报文4。本示例中所述的时刻1，时刻2，时刻3可以指代一个时间范围，也可以指代某一个具体的时刻。
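图4示例中的上述计算过程可以概括为如下几行示意性的Python代码，数值与上文示例一致，变量名为说明目的而假设。

```python
# 图4示例：各序号从1开始连续编号
sent_tunnel1_max_seq = 3      # 第一隧道已发送报文的最大第一隧道序号
sent_tunnel2_max_seq = 3      # 第二隧道已发送报文的最大第二隧道序号
sent_bonding_max_seq = 6      # 绑定隧道已发送报文的最大绑定隧道序号
ack_tunnel1 = 1               # 第一确认响应中的第一隧道确认序号
ack_tunnel2 = 1               # 第二确认响应中的第二隧道确认序号
ack_bonding = max(1, 1)       # 两个确认响应中绑定隧道确认序号的最大值

f1 = sent_tunnel1_max_seq - ack_tunnel1   # 第一隧道中未完成正确排序的报文数量
f2 = sent_tunnel2_max_seq - ack_tunnel2   # 第二隧道中未完成正确排序的报文数量
fb = sent_bonding_max_seq - ack_bonding   # 绑定隧道中未完成正确排序的报文数量
b = fb - f1 - f2                          # 绑定隧道重排序缓存中的报文数量
print(f1, f2, fb, b)                      # 输出 2 2 5 1
```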
以下对于S107中具体如何根据所述绑定隧道重排序缓存的缓存空间的使用情况以及设定的负载分担策略对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行负载分担进行详细说明。所述设定的负载分担策略具体可以包括根据各单隧道中未完成正确排序的报文数量,也可以包括各单隧道的时延情况,也可以是各单隧道重排序缓存的空间使用情况以及本领域技术人员在阅读本申请实施例后所想到的其他负载分担策略。
根据各单隧道中未完成正确排序的报文数量进行负载分担的实施方式具体说明如下。当第一网络设备确定所述绑定隧道重排序缓存的已用空间大小大于或者等于一个设定的第一阈值时，或者所述绑定隧道重排序缓存的可用空间大小小于或者等于一个设定的第二阈值时，所述第一网络设备选择单隧道中未完成正确排序的报文数量少的隧道来对第一网络设备向第二网络设备传输的报文进行负载分担。此处的单隧道指的是组成所述绑定隧道中的各个隧道，如本实施例中的第一隧道或第二隧道。
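上述“当绑定隧道重排序缓存的使用量达到阈值时，选择未完成正确排序的报文数量较少的单隧道”的策略，可用如下示意性的Python代码概括；其中的阈值、未达到阈值时的默认选择以及函数名均为说明目的而假设。

```python
def choose_tunnel(bonding_used: int, threshold: int,
                  unsorted_t1: int, unsorted_t2: int) -> int:
    """当绑定隧道重排序缓存已用空间达到设定阈值时，选择未完成正确排序报文数量较少的单隧道；
    未达到阈值时沿用默认策略（此处以第一隧道作为示意）。返回值1表示第一隧道，2表示第二隧道。"""
    if bonding_used >= threshold:
        return 1 if unsorted_t1 <= unsorted_t2 else 2
    return 1  # 未达阈值时的默认选择，仅为示意

print(choose_tunnel(bonding_used=18, threshold=16, unsorted_t1=2, unsorted_t2=7))  # 输出 1
```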
在一个具体的实施方式中,可以参考上文中描述的确定第一隧道中未完成正确排序的报文数量以及第二隧道中未完成正确排序的报文数量的具体实施方式来确定各单隧道中未完成正确排序的报文数量。下面以确定所述第一隧道和第二隧道中未完成正确排序的报文数量为例进行简要说明。所述第一网络设备通过所述第一隧道向所述第二网络设备发送多个数据报文中的第一部分数据报文;所述第一网络设备通过所述第二隧道向所述第二网络设备发送所述多个数据报文中的第二部分数据报文。所述第一部分数据报文中的每个报文包括该报文的第一隧道序号,所述第一隧道序号用于表示所述第一部分数据报文中的每个报文在所述第一隧道中的序号。所述第二部分数据报文中的每个报文包括该报文的第二隧道序号,所述第二隧道序号用于表示所述第二部分数据报文中的每个报文在所述第二隧道中的序号。所述第一网络设备接收所述第二网络设备发送的针对所述第一部分数据报文中的第一数据报文的第一确认响应。所述第一网络设备接收所述第二网络设备发送的针对所述第二部分数据报文中的第二数据报文的第二确认响应。所述第一确认响应包括第一隧道确认序号,所述第二确认响应中包括第二隧道确认序号。所述第一网络设备根据所述第一隧道确认序号确定已进入所述第一隧道重排序缓存并完成正确排序的报文数量。所述第一网络设备根据所述第二隧道确认序号确定已进入第二隧道重排序缓存并完成正确排序的报文数量。所述第一网络设备根据所述第一网络设备发送的所述第一部分数据报文的数量以及根据所述第一确认响应确定的已进入所述第一隧道重排序缓存并完成正确排序的报文数量得到所述第一隧道中未完成正确排序的报文数量。所述第一网络设备根据所述第一网络设备发送的所述第二部分数据报文的数量以及根据所述第二确认响应确定的已进入所述第二隧道重排序缓存并完成正确排序的报文数量得到所述第二隧道中未完成正确排序 的报文数量。
根据各单隧道重排序缓存的缓存空间的使用情况进行负载分担的实施方式具体说明如下。当第一网络设备确定所述绑定隧道重排序缓存的已用空间大小大于或者等于一个设定的第一阈值时，或者所述绑定隧道重排序缓存的可用空间大小小于或者等于一个设定的第二阈值时，所述第一网络设备根据单隧道重排序缓存的缓存空间的使用情况在多个单隧道之间对第一网络设备向第二网络设备传输的报文进行负载分担。此处的单隧道指的是组成所述绑定隧道中的各个隧道，如本实施例中的第一隧道或第二隧道；单隧道重排序缓存指的是组成所述绑定隧道中的各个隧道的重排序缓存，如本实施例中的第一隧道重排序缓存或第二隧道重排序缓存；单隧道重排序缓存的缓存空间的使用情况，例如可以包括：单隧道重排序缓存中已用的缓存空间的大小或单隧道重排序缓存中可用的缓存空间的大小。具体地，在本申请实施例中，当第一网络设备确定所述绑定隧道重排序缓存的已用空间大小大于或者等于一个设定的第一阈值时，或者所述绑定隧道重排序缓存的可用空间大小小于或者等于一个设定的第二阈值时，所述第一网络设备选择单隧道重排序缓存中已用的缓存空间少的隧道或单隧道重排序缓存中可用的缓存空间多的隧道来对第一网络设备后续接收到的多个连续的报文进行负载分担。
在本实施方式中，第一网络设备根据报文队列在单隧道重排序缓存中的长度，来确定单隧道重排序缓存的缓存空间的使用情况；或者第一网络设备根据单隧道重排序缓存中已用或者可用的缓存切片的数量，来确定单隧道重排序缓存的缓存空间的使用情况；或者第一网络设备根据单隧道重排序缓存中的报文数量，来确定单隧道重排序缓存的缓存空间的使用情况。关于具体的说明可以参见前文中对于如何确定绑定隧道重排序缓存的缓存空间的使用情况的说明，此处不再赘述。
需要说明的是，在根据上述方式确定单隧道重排序缓存的缓存空间的使用情况时，可以通过第二网络设备向第一网络设备返回的确认响应来实现。例如，在本申请实施例中，通过第一隧道返回的第一确认响应，用于确定第一隧道重排序缓存的缓存空间的使用情况，通过第二隧道返回的第二确认响应，用于确定第二隧道重排序缓存的缓存空间的使用情况。所述确认响应为GRE数据报文，所述GRE数据报文中包括重排序缓存大小Reorder Buffer Size字段，根据Reorder Buffer Size字段所承载的内容来确定单隧道重排序缓存的缓存空间的使用情况。例如，可以在GRE报文头携带Reorder Buffer Size字段。所述Reorder Buffer Size字段例如可以是32比特。其所承载的内容包括所述单隧道重排序缓存中的报文数量、所述单隧道重排序缓存中报文队列的长度，所述单隧道重排序缓存中可用的用于缓存报文队列的长度，所述单隧道重排序缓存中已用的缓存切片的数量或所述单隧道重排序缓存中可用的缓存切片的数量。
或者所述确认响应为GRE控制报文，所述GRE控制报文包括属性类型长度值Attribute TLV字段，所述Attribute TLV字段包括类型T字段，长度L字段以及值V字段，所述第一网络设备根据所述V字段所承载的内容确定所述单隧道重排序缓存的缓存空间的使用情况。GRE控制报文中的Attribute TLV字段如上文所示，此处不再赘述。属性类型Attribute Type的取值例如可以是38，用于表示返回单隧道重排序缓存的空间使用情况，属性值Attribute Value所承载的内容如上文所述，不再赘述。
需要说明的是，上述的GRE数据报文或控制报文，及其中的字段或格式仅是示例性说明，不构成对本发明的限定。本领域技术人员在阅读本申请文件的基础上可以想到采用其他报文或者GRE数据报文或控制报文的其他字段或格式以携带上述实施方式中的单隧道重排序缓存中的报文数量、所述单隧道重排序缓存中报文队列的长度，所述单隧道重排序缓存中可用的用于缓存报文队列的长度，所述单隧道重排序缓存中已用的缓存切片的数量或所述单隧道重排序缓存中可用的缓存切片的数量，这些都属于本申请应有之义，在此不一一赘述。
根据各单隧道的时延情况进行负载分担的实施方式具体说明如下。当第一网络设备确定所述绑定隧道重排序缓存的已用空间大小大于或者等于一个设定的第一阈值时，或者所述绑定隧道重排序缓存的可用空间大小小于或者等于一个设定的第二阈值时，所述第一网络设备选择单隧道中RTT小的隧道来对第一网络设备向第二网络设备传输的报文进行负载分担。在一个具体的实施方式中，所述第一网络设备根据发送所述第一部分数据报文中的第三数据报文与收到所述第二网络设备发送的针对所述第三数据报文的确认响应的时间间隔，确定所述第一隧道的往返时延RTT，其中，所述第三数据报文包括的第一隧道序号与针对所述第三数据报文的确认响应中包括的第一隧道确认序号为对应关系。所述第一网络设备根据发送所述第二部分数据报文中的第四数据报文与收到所述第二网络设备发送的针对所述第四数据报文的确认响应的时间间隔，确定所述第二隧道的往返时延RTT，其中，所述第四数据报文包括的第二隧道序号与针对所述第四数据报文的确认响应中包括的第二隧道确认序号为对应关系。
具体来说,第一网络设备通过所述第一隧道发送所述第三数据报文时,在第三数据报文中携带与所述第三数据报文对应的第一隧道序号。第二网络设备接收到该第三数据报文,并且该第三数据报文进入所述绑定隧道重排序缓存,则向所述第一网络设备返回一个确认响应,在该确认响应中携带第一隧道确认序号。第一网络设备接收到第二网络设备发送的针对该第三数据报文的确认响应后,统计从发送所述第三数据报文到接收到该确认响应的时间间隔,从而确定第一隧道的往返时延。相似的,第一网络设备基于同样的原理确定所述第二隧道的往返时延。
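上述基于“发送-确认”时间间隔确定单隧道往返时延RTT的过程，可用如下示意性的Python代码表示；其中以报文序号匹配对应的确认响应，数据结构与函数名均为说明目的而假设。

```python
import time

class RttProbe:
    """记录报文发送时刻，并在收到对应确认响应时计算该隧道的往返时延RTT。"""

    def __init__(self):
        self.sent_at = {}   # {(隧道编号, 隧道序号): 发送时刻}
        self.rtt = {}       # {隧道编号: 最近一次测得的RTT（秒）}

    def on_send(self, tunnel_id: int, seq: int):
        self.sent_at[(tunnel_id, seq)] = time.monotonic()

    def on_ack(self, tunnel_id: int, ack_seq: int):
        sent = self.sent_at.pop((tunnel_id, ack_seq), None)
        if sent is not None:
            self.rtt[tunnel_id] = time.monotonic() - sent
        return self.rtt.get(tunnel_id)

probe = RttProbe()
probe.on_send(1, seq=3)            # 通过第一隧道发送第三数据报文
time.sleep(0.01)                   # 模拟传输与排序时延
print(probe.on_ack(1, ack_seq=3))  # 约0.01秒，即第一隧道的RTT估计值
```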
通过本申请提供的确定单隧道RTT的方法,在基于动态负载分担的方案中,同时可以确定单隧道RTT。无需单独发送探测报文来统计单隧道的RTT,有效节省了网络开销。需要说明的是,除了本申请实施例提供的上述确定RTT的方法,还可以采用现有技术中任意已有的方法来确定隧道的RTT,比如单独发送用于检测RTT的探测报文,本申请不再赘述。
基于本申请提供的技术方案，当第一网络设备确定绑定隧道重排序缓存中已用的缓存空间的大小大于等于一个设定的门限之后，或者当第一网络设备确定所述绑定隧道重排序缓存中可用的缓存空间的大小小于等于一个设定的门限之后，可以使得接下来到达第二网络设备的多个连续报文序列，根据设定的负载分担策略，例如，选择RTT小的隧道传输所述报文序列，或者选择单隧道中未完成正确排序的报文数量少的隧道传输所述报文序列，而不再被分配到时延大或者单隧道中报文数量更多的隧道。由此，可以有效地防止更多的报文被滞留在发生拥塞的隧道，从而避免通过非拥塞隧道传输的报文在重排序缓存中产生额外的等待时延，和/或可能引发的绑定隧道重排序缓存流量溢出的情况，可以有效地减少丢包和系统重传。
在本申请中,通过第二网络设备返回的确认响应,确定绑定隧道重排序缓存的缓存空间的使用情况,并根据所述缓存空间的使用情况和设定的负载分担策略,在所述第一隧道和所述第二隧道之间对所述第一网络设备向所述第二网络设备传输的报文进行动态的负载分担。能够有效的提高绑定隧道的传输效率。
以上，结合图2至图4详细说明了根据本申请实施例提供的负载分担的方法。以下结合图5至图8详细说明根据本申请实施例提供的用于负载分担的网络设备和系统。
图5是根据本申请一实施例提供的第一网络设备500的示意图。该第一网络设备500可以是图2中的HAAP设备或HG设备,可以用于执行图3所示的方法。所述第一网络设备500和第二网络设备之间建立有第一隧道和第二隧道,所述第一隧道和所述第二隧道通过混合端口绑定Hybrid bonding形成绑定隧道,所述第二网络设备包括绑定隧道重排序缓存,所述绑定隧道重排序缓存用于对进入所述绑定隧道重排序缓存中的报文进行排序。如图5所示,该第一网络设备500包括:发送模块501,接收模块502和处理模块503。
该发送模块501用于向所述第二网络设备发送多个数据报文。
在一个具体的实施方式中,所述发送模块501通过所述第一隧道向所述第二网络设备发送所述多个数据报文。在另一个具体的实施方式中,所述发送模块501通过所述第二隧道向所述第二网络设备发送所述多个数据报文。在另一个具体的实施方式中,所述发送模块501通过所述第一隧道向所述第二网络设备发送所述多个数据报文中的第一部分数据报文,所述发送模块501通过所述第二隧道向所述第二网络设备发送所述多个数据报文中的第二部分数据报文。
该接收模块502用于接收所述第二网络设备发送的确认响应。
具体地,所述确认响应中包括所述绑定隧道重排序缓存的缓存空间的使用情况的信息,第一网络设备根据所述绑定隧道重排序缓存的缓存空间的使用情况的信息,确定所述绑定隧道重排序缓存的缓存空间的使用情况,并根据所述绑定隧道重排序缓存的缓存空间的使用情况以及设定的负载分担策略,对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行负载分担。
该处理模块503用于根据所述确认响应确定所述绑定隧道重排序缓存的缓存空间的使用情况,并根据所述绑定隧道重排序缓存的缓存空间的使用情况以及设定的负载分担策略,对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行负载分担。
具体地,所述绑定隧道重排序缓存的缓存空间的使用情况,例如可以包括:所述绑定隧道重排序缓存中已用的缓存空间的大小或所述绑定隧道重排序缓存中可用的缓存空间的大小。所述设定的负载分担策略例如可以包括根据各单隧道中未完成正确排序的报文数量,也可以包括各单隧道的时延情况,也可以是各单隧道重排序缓存的空间使用情况以及本领域技术人员在阅读本申请实施例后所想到的其他负载分担策略。
对于具体如何根据确认响应，确定所述绑定隧道重排序缓存的缓存空间的使用情况，以及如何根据绑定隧道重排序缓存的缓存空间的使用情况和设定的负载分担策略，对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行负载分担，可以参考下面的详细说明。
在本申请中,通过第二网络设备返回的确认响应,确定绑定隧道重排序缓存的缓存空间的使用情况,并根据所述缓存空间的使用情况和设定的负载分担策略,在所述第一隧道和所述第二隧道之间对所述第一网络设备向所述第二网络设备传输的报文进行动态的负载分担,能够有效的提高绑定隧道的传输效率。例如,可以有效地降低所述绑定隧道的网络时延,和/或能较为明显的抑制由于网络拥塞所可能导致的绑定隧道重排序缓存流量溢出,进而导致的报文丢包,触发应用层重传的问题。
下面对于具体如何根据确认响应,确定所述绑定隧道重排序缓存的缓存空间的使用情况,进行具体说明。
在本申请提供的一个具体的实施方式中,当所述绑定隧道重排序缓存的缓存空间的使用情况包括所述绑定隧道重排序缓存中已用的缓存空间的大小时,所述绑定隧道重排序缓存中已用的缓存空间的大小包括所述绑定隧道重排序缓存中的报文数量、所述绑定隧道重排序缓存中报文队列的长度或所述绑定隧道重排序缓存中已用的缓存切片的数量。当所述绑定隧道重排序缓存的缓存空间的使用情况包括所述绑定隧道重排序缓存中可用的缓存空间的大小时,所述绑定隧道重排序缓存中可用的缓存空间的大小包括所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度或所述绑定隧道重排序缓存中可用的缓存切片的数量。
在本实施方式中,所述接收模块502接收所述第二网络设备发送的确认响应,具体包括:所述第一网络设备接收所述第二网络设备通过所述第一隧道或所述第二隧道返回的所述确认响应。所述确认响应为GRE数据报文,所述GRE数据报文包括绑定重排序缓存大小Bonding Reorder Buffer Size字段,所述Bonding Reorder Buffer Size字段承载所述绑定隧道重排序缓存中的报文数量、所述绑定隧道重排序缓存中报文队列的长度、所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度、所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用的缓存切片的数量。或者所述确认响应为GRE控制报文,所述GRE控制报文包括属性类型长度值Attribute TLV字段,所述Attribute TLV字段包括类型T字段,长度L字段以及值V字段,所述V字段承载所述绑定隧道重排序缓存中报文的数量、所述绑定隧道重排序缓存中报文队列的长度,所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度,所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用缓存切片的数量。根据上述所述绑定隧道重排序缓存中报文的数量、所述绑定隧道重排序缓存中报文队列的长度,所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度,所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用缓存切片的数量,来确定所述绑定隧道重排序缓存的缓存空间的使用情况。具体采用何种报文格式或字段格式(如采用哪些字段或扩展字段),各字段的具体内容或取值,以及具体如何根据报文队列在绑定隧道重排序缓存中长度,绑定隧道重排序缓存中已用或可用的缓存切片的数量,或者绑定隧道重排序缓存中的报文数量来确定所述绑定隧道重排序缓存的缓存空间的使用情况的详细说明,可以参考上述方法实施例中对应部分的描述,此处不再赘述。
上述实施方式中,通过根据所述绑定隧道重排序缓存中已用的缓存空间的大小或所述绑定隧道重排序缓存中可用的缓存空间的大小对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行动态的负载分担,可以有效地降低所述绑定隧道的网络时延,和/或能较为明显的抑制由于网络拥塞所可能导致的绑定隧道重排序缓存流量溢出,进而导致的报文丢包,触发应用层重传的问题。
在上述根据在绑定隧道重排序缓存中的报文数量,来确定绑定隧道重排序缓存的缓存空间的使用情况的实施方式中,在一种具体的实施方式中,还可以根据所述绑定隧道重排序缓存中未完成正确排序的报文数量,所述第一隧道中未完成正确排序的报文数量以及第二隧道中未完成正确排序的报文数量来确定绑定隧道重排序缓存中的报文数量。本实施方式中,所述处理模块503根据所述确认响应确定所述第一隧道中未完成正确排序的报文数量F1,所述第二隧道中未完成正确排序的报文数量F2以及所述绑定隧道中未完成正确排序的报文数量FB,进而确定所述绑定隧道重排序缓存中的报文数量B,B=FB-F1-F2
具体来说，所述第二网络设备还包括第一隧道重排序缓存和第二隧道重排序缓存，所述第一隧道重排序缓存用于对通过所述第一隧道传输的报文进行排序，所述第二隧道重排序缓存用于对通过所述第二隧道传输的报文进行排序。
所述发送模块501通过所述第一隧道向所述第二网络设备发送所述多个数据报文中的第一部分数据报文,所述第一部分数据报文中的每个报文包括该报文的绑定隧道序号以及第一隧道序号,所述第一隧道序号用于表示所述第一部分数据报文中的每个报文在所述第一隧道中的传输顺序,所述第一部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的传输顺序。所述发送模块501通过所述第二隧道向所述第二网络设备发送所述多个数据报文中的第二部分数据报文,所述第二部分数据报文中的每个报文包括该报文的绑定隧道序号以及第二隧道序号,所述第二隧道序号用于表示所述第二部分数据报文中的每个报文在所述第二隧道中的传输顺序,所述第二部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的传输顺序。
所述接收模块502接收所述第二网络设备发送的针对所述第一部分数据报文中的第一数据报文的第一确认响应,并接收所述第二网络设备发送的针对所述第二部分数据报文中的第二数据报文的第二确认响应。所述第一确认响应包括上述第一隧道确认序号以及绑定隧道确认序号,所述第二确认响应中包括上述第二隧道确认序号以及绑定隧道确认序号。
所述处理模块503根据所述第一确认响应确定已进入所述第一隧道重排序缓存并完成正确排序的报文数量以及已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量M,并根据所述第二确认响应确定已进入所述第二隧道重排序缓存并完成正确排序的报文数量以及已进入所述绑定隧道重排序缓存并完成正确排序的报文数量N;根据M和N中的较大值以及所述第一网络设备所发送的所述多个数据报文的数量得到所述绑定隧道中未完成正确排序的报文数量FB;根据向所述第二网络设备发送的所述第一部分数据报文的数量以及根据所述第一确认响应确定的已进入所述第一隧道重排序缓存并完成正确排序的报文数量得到所述第一隧道中未完成正确排序的报文数量F1; 以及根据向所述第二网络设备发送的所述第二部分数据报文的数量以及根据所述第二确认响应确定的已进入所述第二隧道重排序缓存并完成正确排序的报文数量得到所述第二隧道中未完成正确排序的报文数量F2
在一个具体地实施方式中,所述处理模块503可以根据所述第一隧道确认序号确定所述已进入所述第一隧道重排序缓存并完成正确排序的报文数量,并根据所述第一确认响应包括的绑定隧道确认序号确定已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量M;以及根据所述第二隧道确认序号确定所述已进入所述第二隧道重排序缓存并完成正确排序的报文数量,并根据所述第二确认响应包括的绑定隧道确认序号确定已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量N。
在一个具体的实施方式中,所述第一部分数据报文中的每个报文包括用于承载所述第一隧道序号的序号Sequence Number字段和用于承载所述绑定隧道序号的绑定序号Bonding Sequence Number字段;所述第二部分数据报文中的每个报文包括用于承载所述第二隧道序号的序号Sequence Number字段和用于承载所述绑定隧道序号的绑定序号Bonding Sequence Number字段。
在一个具体的方式中,所述第一确认响应为通用路由封装GRE数据报文,采用所述GRE数据报文中包括的确认号Acknowledgment Number字段来承载所述第一隧道确认序号,采用所述GRE数据报文中包括的绑定确认号Bonding Acknowledgment Number字段来承载所述绑定隧道确认序号。或所述第一确认响应为GRE控制报文,采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第一隧道确认序号和所述绑定隧道确认序号。
在一个具体的实施方式中,所述第二确认响应为GRE数据报文,采用所述GRE数据报文中包括的确认号Acknowledgment Number字段来承载所述第二隧道确认序号,采用所述GRE数据报文中包括的绑定确认号Bonding Acknowledgment Number字段来承载所述隧道绑定确认序号。或所述第二确认响应为GRE控制报文,采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第二隧道确认序号和所述绑定隧道确认序号。
通过本申请提供的上述方案,利用GRE数据报文所携带的Sequence Number字段,Bonding Sequence Number字段,Acknowledgment Number字段以及Bonding Acknowledgment Number字段,或者GRE控制报文所携带的Attribute TLV字段,无需对现有协议进行扩展,即可根据上述字段携带的内容,确定绑定隧道重排序缓存中的报文数量。
需要说明的是,在第一网络设备根据所述绑定隧道重排序缓存中的报文数量确定所述绑定隧道重排序缓存的缓存空间的使用情况的实施方式中,对于具体采用何种报文格式或字段格式(如采用哪些字段或扩展字段),各字段的具体内容或取值,以及对于所述第一网络设备根据所述绑定隧道重排序缓存中的报文数量确定所述绑定隧道重排序缓存的缓存空间的使用情况的更为具体的说明,可以参见上述方法实施例对应部分的详细说明,此处不再具体赘述。
下面对于如何根据绑定隧道重排序缓存的缓存空间的使用情况和设定的负载分担策略，对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行负载分担，进行具体说明。
在一个具体的实施方式中,所述设定的负载分担策略包括:所述处理模块确定所述绑定隧道重排序缓存中已用的缓存空间的大小大于等于第一门限值或所述绑定隧道重排序缓存中可用的缓存空间的大小小于等于第二门限值后,则选择所述第一隧道和所述第二隧道中往返时延RTT小的隧道或选择所述第一隧道和所述第二隧道中未完成正确排序的报文数量少的隧道传输所述第一网络设备向所述第二网络设备发送的报文。本申请中所述设定的负载分担策略除了可以包括根据各单隧道中未完成正确排序的报文数量以及各单隧道的时延情况以外,也可以是各单隧道重排序缓存的空间使用情况以及本领域技术人员在阅读本申请实施例后所想到的其他负载分担策略。具体根据各单隧道中未完成正确排序的报文数量进行负载分担的实施方式,根据各单隧道重排序缓存的缓存空间的使用情况进行负载分担的实施方式以及根据各单隧道的时延情况进行负载分担的实施方式的具体说明,参见上述方法实施例对应部分的详细说明,此处不再具体赘述。
在根据各单隧道的时延情况进行负载分担的实施方式中,本申请提供了一种确定单隧道往返时延RTT的实施方式,具体包括:所述处理模块503根据发送所述第一部分数据报文中的第三数据报文与收到所述第二网络设备发送的针对所述第三数据报文的确认响应的时间间隔,确定所述第一隧道的往返时延RTT。所述处理模块503根据发送所述第二部分数据报文中的第四数据报文与收到所述第二网络设备发送的针对所述第四数据报文的确认响应的时间间隔,确定所述第二隧道的往返时延RTT。确定单隧道往返时延RTT的实施方式的具体说明,参见上述方法实施例对应部分的详细说明,此处不再具体赘述。
根据本申请实施例的第一网络设备500可对应于根据本申请实施例中用于负载分担的方法中的第一网络设备，并且，该网络设备中的各模块和上述其他操作和/或功能分别为了实现图3中的方法100的相应流程，为了简洁，此处不再赘述。
图6是根据本申请一实施例提供的第二网络设备600的示意图。该第二网络设备600可以是图2中的HG设备或HAAP设备，可以用于执行图3所示的方法。所述第二网络设备600和第一网络设备之间建立有第一隧道和第二隧道，所述第一隧道和所述第二隧道通过混合端口绑定Hybrid bonding形成绑定隧道。所述第二网络设备600包括：接收模块601，处理模块602，发送模块603和绑定隧道重排序缓存模块604。
所述绑定隧道重排序缓存模块604用于对进入绑定隧道重排序缓存中的报文进行排序。
所述接收模块601用于接收所述第一网络设备发送的多个数据报文。
在一个具体的实施方式中,所述第二网络设备通过所述第一隧道接收所述多个数据报文。在另一个具体的实施方式中,所述第二网络设备通过所述第二隧道接收所述多个数据报文。在另一个具体的实施方式中,所述第二网络设备接收所述第一网络设备发送的所述多个数据报文,具体包括:所述第二网络设备接收所述第一网络设备通过所述第一隧道发送的所述多个数据报文中的第一部分数据报文;所述第二网络设备接收所述第一网络设备通过所述第二隧道发送的所述多个数据报文中的第二部分数据报文。
所述处理模块602用于获取所述第二网络设备包括的绑定隧道重排序缓存的缓存空间的使用情况的信息,所述绑定隧道重排序缓存用于对进入所述绑定隧道重排序缓存中的报文进行排序。
具体地,所述绑定隧道重排序缓存的缓存空间的使用情况,例如可以包括:所述绑定隧道重排序缓存中已用的缓存空间的大小或者所述绑定隧道重排序缓存中可用的缓存空间的大小。
关于如何获取所述绑定隧道重排序缓存的缓存空间的使用情况的信息,请参见下面的具体说明。
所述发送模块603用于向所述第一网络设备发送确认响应,所述确认响应中包括所述绑定隧道重排序缓存的缓存空间的使用情况的信息。所述信息被所述第一网络设备用于确定所述绑定隧道重排序缓存的缓存空间的使用情况,并根据所述绑定隧道重排序缓存的缓存空间的使用情况以及设定的负载分担策略,对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行负载分担。
在一个具体的实施方式中,所述第二网络设备600每收到一个报文,就会向所述第一网络设备返回一个确认响应。在另一个具体的实施方式中,所述第二网络设备600可以设定,在一定的时间间隔下,周期性的向第一网络设备返回所述确认响应。在另一个具体的实施方式中,所述第二网络设备还可以在接收到所述第一网络设备发送的请求或者达到设定的预警状态时发送所述确认响应。该设定的预警状态包括但不限于绑定隧道重排序缓存的已用缓存空间的大小大于等于一个设定的阈值,或者绑定隧道重排序缓存的可用缓存空间的大小小于等于一个设定的阈值。本申请对此不作具体限定。
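对于第二网络设备发送确认响应的上述几种触发方式（逐包确认、周期性发送、按请求发送或达到预警状态时发送），可参考如下示意性的Python判断逻辑；其中各参数均为说明目的而假设。

```python
def should_send_ack(per_packet: bool, now: float, last_ack_time: float, period: float,
                    request_received: bool, used: int, used_warn: int) -> bool:
    """第二网络设备是否需要返回确认响应：
    逐包确认、到达周期、收到第一网络设备的请求，或缓存已用空间达到预警门限，任一条件满足即发送。"""
    if per_packet or request_received:
        return True
    if now - last_ack_time >= period:
        return True
    return used >= used_warn   # 达到设定的预警状态

print(should_send_ack(False, now=10.0, last_ack_time=9.5, period=1.0,
                      request_received=False, used=18, used_warn=16))  # True，已达预警门限
```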
具体地,所述设定的负载分担策略例如可以包括根据各单隧道中未完成正确排序的报文数量,也可以包括各单隧道的时延情况,也可以是各单隧道重排序缓存的空间使用情况以及本领域技术人员在阅读本申请实施例后所想到的其他负载分担策略。例如,所述第一网络设备确定所述绑定隧道重排序缓存中已用的缓存空间的大小大于等于第一门限值或所述绑定隧道重排序缓存中可用的缓存空间的大小小于等于第二门限值后,则选择所述第一隧道和所述第二隧道中往返时延RTT小的隧道或选择所述第一隧道和所述第二隧道中未完成正确排序的报文数量少的隧道传输所述第一网络设备向所述第二网络设备发送的报文。
在上述方案中,通过第二网络设备返回的确认响应,确定绑定隧道重排序缓存的缓存空间的使用情况,并根据所述缓存空间的使用情况和设定的负载分担策略,在所述第一隧道和所述第二隧道之间对所述第一网络设备向所述第二网络设备传输的报文进行动态的负载分担。能够有效的提高绑定隧道的传输效率。
以下对如何获取所述绑定隧道重排序缓存的缓存空间的使用情况的信息,进行具体说明。
在一个具体的实施方式中，当所述绑定隧道重排序缓存的缓存空间的使用情况包括所述绑定隧道重排序缓存中已用的缓存空间的大小时，所述绑定隧道重排序缓存的缓存空间的使用情况的信息包括所述绑定隧道重排序缓存中的报文数量、所述绑定隧道重排序缓存中报文队列的长度或所述绑定隧道重排序缓存中已用的缓存切片的数量。当所述绑定隧道重排序缓存的缓存空间的使用情况包括所述绑定隧道重排序缓存中可用的缓存空间的大小时，所述绑定隧道重排序缓存的缓存空间的使用情况的信息包括所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度或所述绑定隧道重排序缓存中可用的缓存切片的数量。
在本实施方式中,所述发送模块603向所述第一网络设备发送所述确认响应,具体包括:所述第二网络设备通过所述第一隧道或第二隧道向所述第一网络设备发送所述确认响应。在一个具体的实施方式中,所述确认响应为GRE数据报文,所述GRE数据报文包括绑定重排序缓存大小Bonding Reorder Buffer Size字段,所述Bonding Reorder Buffer Size字段承载所述绑定隧道重排序缓存中的报文数量、所述绑定隧道重排序缓存中报文队列的长度、所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度、所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用的缓存切片的数量。在另一个具体的实施方式中,所述确认响应为GRE控制报文,所述GRE控制报文包括属性类型长度值Attribute TLV字段,所述Attribute TLV字段包括类型T字段,长度L字段以及值V字段。所述V字段承载所述绑定隧道重排序缓存中报文的数量、所述绑定隧道重排序缓存中报文队列的长度,所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度,所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用缓存切片的数量。具体采用何种报文格式或字段格式(如采用哪些字段或扩展字段),各字段的具体内容或取值的详细说明,可以参考上述方法实施例中对应部分的描述,此处不再赘述。
所述第一网络设备根据确认响应中包括的上述所述绑定隧道重排序缓存中报文的数量、所述绑定隧道重排序缓存中报文队列的长度,所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度,所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用缓存切片的数量,来确定所述绑定隧道重排序缓存的缓存空间的使用情况。
在另一个具体的实施方式中,所述绑定隧道重排序缓存的缓存空间的使用情况的信息包括单隧道确认号以及绑定隧道确认序号。本申请所述的单隧道指的是组成所述绑定隧道中的各个隧道,如本实施例中的第一隧道或第二隧道。
在上述根据在绑定隧道重排序缓存中的报文数量,来确定绑定隧道重排序缓存的缓存空间的使用情况的实施方式中,在一种具体的实施方式中,还可以根据所述绑定隧道重排序缓存中未完成正确排序的报文数量,所述第一隧道中未完成正确排序的报文数量以及第二隧道中未完成正确排序的报文数量来确定绑定隧道重排序缓存中的报文数量。在本实施方式中,利用上述的单隧道确认号以及绑定隧道确认序号来确定绑定隧道重排序缓存中的报文数量。
具体来说,所述第二网络设备进一步包括:
第一隧道重排序缓存模块,用于对通过所述第一隧道传输进入所述第一隧道重排序缓存中的报文进行排序;
第二隧道重排序缓存模块,用于对通过所述第二隧道传输进入所述第二隧道重排序缓存中的报文进行排序;
所述接收模块601接收所述第一网络设备通过所述第一隧道发送的所述多个数据 报文中的第一部分数据报文,所述第一部分数据报文中的每个报文包括该报文的绑定隧道序号以及第一隧道序号,所述第一隧道序号用于表示所述第一部分数据报文中的每个报文在所述第一隧道中的传输顺序,所述第一部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的传输顺序。所述接收模块601接收所述第一网络设备通过所述第二隧道发送的所述多个数据报文中的第二部分数据报文,所述第二部分数据报文中的每个报文包括该报文的绑定隧道序号以及第二隧道序号,所述第二隧道序号用于表示所述第二部分数据报文中的每个报文在所述第二隧道中的传输顺序,所述第二部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的传输顺序。
所述确认响应包括所述第二网络设备600针对所述第一部分数据报文的第一确认响应和所述第二网络设备600针对所述第二部分数据报文的第二确认响应。
所述处理模块602获得所述第二网络设备中的第一隧道重排序缓存中离发送所述第一确认响应最近的已经完成正确排序的报文中的第一隧道序号以及所述绑定隧道重排序缓存中离发送所述第一确认响应最近的已经完成正确排序的报文中的绑定隧道序号,根据所述第一隧道序号确定第一隧道确认序号。所述处理模块602根据所述绑定隧道重排序缓存中离发送所述第一确认响应最近的已经完成正确排序的报文中的绑定隧道序号确定所述第一确认响应中包括的绑定隧道确认序号。所述第一确认响应中包括的所述绑定隧道重排序缓存的缓存空间的使用情况的信息包括所述第一隧道确认序号和所述第一确认响应中包括的绑定隧道确认序号。
所述处理模块602获得所述第二网络设备中的第二隧道重排序缓存中离发送所述第二确认响应最近的已经完成正确排序的报文中的第二隧道序号以及所述绑定隧道重排序缓存中离发送所述第二确认响应最近的已经完成正确排序的报文中的绑定隧道序号,根据所述第二隧道序号确定第二隧道确认序号。所述处理模块602根据所述绑定隧道重排序缓存中离发送所述第二确认响应最近的已经完成正确排序的报文中的绑定隧道序号确定所述第二确认响应中包括的绑定隧道确认序号。所述第二确认响应中包括的所述绑定隧道重排序缓存的缓存空间的使用情况的信息包括所述第二隧道确认序号和所述第二确认响应中包括的绑定隧道确认序号。所述第一隧道确认序号、所述第二隧道确认序号、所述第一确认响应中包括的绑定隧道确认序号以及所述第二确认响应中包括的绑定隧道确认序号被所述第一网络设备用于确定所述绑定隧道重排序缓存的报文数量,并根据所述绑定隧道重排序缓存的报文数量确定所述绑定隧道重排序缓存的缓存空间的使用情况。
在一个具体的实施方式中,所述第一确认响应为GRE数据报文,采用所述GRE数据报文中包括的确认号Acknowledgment Number字段来承载所述第一隧道确认序号,采用所述GRE数据报文中包括的绑定确认号Bonding Acknowledgment Number字段来承载所述隧道绑定确认序号。
在另一个具体的实施方式中,所述第一确认响应为GRE控制报文,采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第一隧道确认序号和所述绑定隧道确认序号。
在一个具体的实施方式中,所述第二确认响应为GRE数据报文,采用所述GRE 数据报文中包括的确认号Acknowledgment Number字段来承载所述第二隧道确认序号,采用所述GRE数据报文中包括的绑定确认号Bonding Acknowledgment Number字段来承载所述绑定隧道确认序号。
在另一个具体的实施方式中,所述第二确认响应为GRE控制报文,采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第二隧道确认序号和所述绑定隧道确认序号。
具体采用何种报文格式或字段格式(如采用哪些字段或扩展字段),各字段的具体内容或取值的详细说明,可以参考上述方法实施例中对应部分的描述,此处不再赘述。
上述实施方式中,通过根据所述绑定隧道重排序缓存中已用的缓存空间的大小或所述绑定隧道重排序缓存中可用的缓存空间的大小对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行动态的负载分担,可以有效地降低所述绑定隧道的网络时延,和/或能较为明显的抑制由于网络拥塞所可能导致的绑定隧道重排序缓存流量溢出,进而导致的报文丢包,触发应用层重传的问题。
根据本申请实施例的第二网络设备600可对应于根据本申请实施例中用于负载分担的方法中的第二网络设备，并且，该网络设备中的各模块和上述其他操作和/或功能分别为了实现图3中的方法100的相应流程，为了简洁，此处不再赘述。
本申请上述实施例中提供的第一网络设备500和第二网络设备600,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或部分功能。
图7是根据本申请一实施例提供的第一网络设备700的另一示意图。该第一网络设备700可以是图2中的HAAP设备或HG设备，可以用于执行图3所示的方法。所述第一网络设备700和第二网络设备之间建立有第一隧道和第二隧道，所述第一隧道和所述第二隧道通过混合端口绑定Hybrid bonding形成绑定隧道，所述第二网络设备包括绑定隧道重排序缓存，所述绑定隧道重排序缓存用于对进入所述绑定隧道重排序缓存中的报文进行排序。如图7所示，该第一网络设备700包括：输入接口701、输出接口702、处理器703和存储器704。该输入接口701、输出接口702、处理器703和存储器704可以通过总线系统705相连。
所述存储器704用于存储包括程序、指令或代码。所述处理器703,用于执行所述存储器704中的程序、指令或代码,以控制输入接口701接收信号、控制输出接口702发送信号以及实施上述图3所对应的实施方式中的第一网络设备所实施的各步骤及功能,此处不再赘述。
一个具体的实施方式中，所述输出接口702用于向所述第二网络设备发送多个数据报文，所述输入接口701用于接收所述第二网络设备发送的确认响应，所述处理器703用于根据所述确认响应确定所述绑定隧道重排序缓存的缓存空间的使用情况，并根据所述绑定隧道重排序缓存的缓存空间的使用情况以及设定的负载分担策略，对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行负载分担。上述输出接口702、输入接口701以及处理器703的具体实施方式可以相应参考上述图5实施方式中的发送模块501、接收模块502以及处理模块503中的具体说明，这里不再赘述。
图8是根据本申请一实施例的第二网络设备800的另一示意图。该第二网络设备800可以是图2中的HG设备或HAAP设备，可以用于执行图3所示的方法100。所述第二网络设备800和第一网络设备之间建立有第一隧道和第二隧道，所述第一隧道和所述第二隧道通过混合端口绑定Hybrid bonding形成绑定隧道。如图8所示，该第二网络设备800包括：输入接口801、输出接口802、处理器803和存储器804。该输入接口801、输出接口802、处理器803和存储器804可以通过总线系统805相连。
所述存储器804用于存储包括程序、指令或代码。所述处理器803,用于执行所述存储器804中的程序、指令或代码,以控制输入接口801接收信号、控制输出接口802发送信号以及实施上述图3所对应的实施方式中的第二网络设备所实施的各步骤及功能,此处不再赘述。
一个具体的实施方式中，存储器804包括绑定隧道重排序缓存，用于对进入所述绑定隧道重排序缓存中的报文进行排序，所述输入接口801用于接收所述第一网络设备发送的多个数据报文，所述处理器803用于获取所述绑定隧道重排序缓存的缓存空间的使用情况的信息，所述输出接口802用于向所述第一网络设备发送确认响应，所述确认响应中包括所述绑定隧道重排序缓存的缓存空间的使用情况的信息。所述信息被所述第一网络设备用于确定所述绑定隧道重排序缓存的缓存空间的使用情况，并根据所述绑定隧道重排序缓存的缓存空间的使用情况以及设定的负载分担策略，对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行负载分担。在另一个具体的实施方式中，存储器804还可以包括第一隧道重排序缓存，用于对进入所述第一隧道重排序缓存中的报文进行排序，所述存储器804还可以包括第二隧道重排序缓存，用于对进入所述第二隧道重排序缓存中的报文进行排序。上述存储器804、输出接口802、输入接口801以及处理器803的具体实施方式可以相应参考上述图6实施方式中的绑定隧道重排序缓存模块604、发送模块603、接收模块601以及处理模块602中的具体说明，这里不再赘述。
应理解,在本申请实施例中,该处理器703和处理器803可以是中央处理单元(Central Processing Unit,简称为“CPU”),还可以是其他通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
该存储器704和存储器804可以包括只读存储器和随机存取存储器,并分别向处理器703和处理器803提供指令和数据。存储器704或存储器804的一部分还可以包括非易失性随机存取存储器。例如,存储器704或存储器804还可以存储设备类型的信息。
该总线系统705和总线系统805除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都标为总线系统。
在实现过程中,方法100的各步骤可以通过处理器703和处理器803中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的定位方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质分别位于存储器704和存储器804中,处理器703读取存储器704中的信息,处理器803读取存储器804中的信息,结合其硬件完成上述方法100的步骤。为避免重复,这里不再详细描述。
需要说明的是，一个具体的实施方式中，图5中的处理模块503可以用图7的处理器703实现，发送模块501可以由图7的输出接口702实现，接收模块502可以由图7的输入接口701实现。同理，图6中的处理模块602可以用图8的处理器803实现，发送模块603可以由图8的输出接口802实现，接收模块601可以由图8的输入接口801实现。
本申请还提供了一种通信系统,包括第一网络设备和第二网络设备,所述第一网络设备可以是图5、图7对应的实施例所提供的第一网络设备。所述第二网络设备可以是图6、图8对应的实施例所提供的第二网络设备。所述通信系统用于执行图2-图4对应的实施例的方法100。
应理解,在本申请的各种实施例中,各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的模块及方法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk (SSD))等。本说明书的各个部分均采用递进的方式进行描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点介绍的都是与其他实施例不同之处。尤其,对于装置和系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例部分的说明即可。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。

Claims (39)

  1. 一种负载分担的方法,其特征在于,应用于第一网络设备,所述第一网络设备和第二网络设备之间建立有第一隧道和第二隧道,所述第一隧道和所述第二隧道通过混合端口绑定Hybrid bonding形成绑定隧道,所述第二网络设备包括绑定隧道重排序缓存,所述绑定隧道重排序缓存用于对进入所述绑定隧道重排序缓存中的报文进行排序,所述方法包括:
    所述第一网络设备向所述第二网络设备发送多个数据报文;
    所述第一网络设备接收所述第二网络设备发送的确认响应;
    所述第一网络设备根据所述确认响应确定所述绑定隧道重排序缓存的缓存空间的使用情况;
    所述第一网络设备根据所述绑定隧道重排序缓存的缓存空间的使用情况以及设定的负载分担策略,对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行负载分担。
  2. 根据权利要求1所述的方法,其特征在于,所述绑定隧道重排序缓存的缓存空间的使用情况,具体包括:
    所述绑定隧道重排序缓存中已用的缓存空间的大小或所述绑定隧道重排序缓存中可用的缓存空间的大小。
  3. 根据权利要求1或2所述的方法,其特征在于:所述第一网络设备根据所述确认响应确定所述绑定隧道重排序缓存的缓存空间的使用情况,具体包括:
    所述第一网络设备根据所述确认响应确定所述第一隧道中未完成正确排序的报文数量F1,所述第二隧道中未完成正确排序的报文数量F2以及所述绑定隧道中未完成正确排序的报文数量FB,进而确定所述绑定隧道重排序缓存中的报文数量B,B=FB-F1-F2,根据所述绑定隧道重排序缓存中的报文数量确定所述绑定隧道重排序缓存的缓存空间的使用情况。
  4. 根据权利要求3所述的方法,其特征在于,所述第二网络设备还包括第一隧道重排序缓存和第二隧道重排序缓存,所述第一隧道重排序缓存用于对通过所述第一隧道传输的报文进行排序,所述第二隧道重排序缓存用于对通过所述第二隧道传输的报文进行排序;
    所述第一网络设备向所述第二网络设备发送所述多个数据报文,具体包括:
    所述第一网络设备通过所述第一隧道向所述第二网络设备发送所述多个数据报文中的第一部分数据报文,所述第一网络设备通过所述第二隧道向所述第二网络设备发送所述多个数据报文中的第二部分数据报文;
    所述第一网络设备接收所述第二网络设备发送的确认响应,具体包括:
    所述第一网络设备接收所述第二网络设备发送的针对所述第一部分数据报文中的第一数据报文的第一确认响应,根据所述第一确认响应确定已进入所述第一隧道重排序缓存并完成正确排序的报文数量以及已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量M;
    所述第一网络设备接收所述第二网络设备发送的针对所述第二部分数据报文中的第二数据报文的第二确认响应,根据所述第二确认响应确定已进入所述第二隧道重排 序缓存并完成正确排序的报文数量以及已进入所述绑定隧道重排序缓存并完成正确排序的报文数量N;
    所述第一网络设备根据M和N中的较大值以及所述第一网络设备所发送的所述多个数据报文的数量得到所述绑定隧道中未完成正确排序的报文数量FB
    所述第一网络设备根据向所述第二网络设备发送的所述第一部分数据报文的数量以及根据所述第一确认响应确定的已进入所述第一隧道重排序缓存并完成正确排序的报文数量得到所述第一隧道中未完成正确排序的报文数量F1
    所述第一网络设备根据向所述第二网络设备发送的所述第二部分数据报文的数量以及根据所述第二确认响应确定的已进入所述第二隧道重排序缓存并完成正确排序的报文数量得到所述第二隧道中未完成正确排序的报文数量F2
  5. 根据权利要求4所述的方法,其特征在于,所述第一部分数据报文中的每个报文包括该报文的绑定隧道序号以及第一隧道序号,所述第一隧道序号用于表示所述第一部分数据报文中的每个报文在所述第一隧道中的传输顺序,所述第一部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的传输顺序;所述第二部分数据报文中的每个报文包括该报文的绑定隧道序号以及第二隧道序号,所述第二隧道序号用于表示所述第二部分数据报文中的每个报文在所述第二隧道中的传输顺序,所述第二部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的传输顺序;所述第一确认响应包括第一隧道确认序号以及绑定隧道确认序号,所述第二确认响应中包括第二隧道确认序号以及绑定隧道确认序号;
    所述第一网络设备根据所述第一隧道确认序号确定所述已进入所述第一隧道重排序缓存并完成正确排序的报文数量,并根据所述第一确认响应中包括的绑定隧道确认序号确定已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量M;
    所述第一网络设备根据所述第二隧道确认序号确定所述已进入所述第二隧道重排序缓存并完成正确排序的报文数量,并根据所述第二确认响应中包括的绑定隧道确认序号确定已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量N。
  6. 根据权利要求5所述的方法,其特征在于,所述第一部分数据报文中的每个报文包括用于承载所述第一隧道序号的序号Sequence Number字段和用于承载所述绑定隧道序号的绑定序号Bonding Sequence Number字段;所述第二部分数据报文中的每个报文包括用于承载所述第二隧道序号的序号Sequence Number字段和用于承载所述绑定隧道序号的绑定序号Bonding Sequence Number字段。
  7. 根据权利要求5或6所述的方法，其特征在于，所述第一确认响应为通用路由封装GRE数据报文，采用所述GRE数据报文中包括的确认号Acknowledgment Number字段来承载所述第一隧道确认序号，采用所述GRE数据报文中包括的绑定确认号Bonding Acknowledgment Number字段来承载所述绑定隧道确认序号；或所述第一确认响应为GRE控制报文，采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第一隧道确认序号和所述绑定隧道确认序号。
  8. 根据权利要求5-7任一项所述的方法,其特征在于,所述第二确认响应为GRE数据报文,采用所述GRE数据报文中包括的确认号Acknowledgment Number字段来承 载所述第二隧道确认序号,采用所述GRE数据报文中包括的绑定确认号Bonding Acknowledgment Number字段来承载所述隧道绑定确认序号;或所述第二确认响应为GRE控制报文,采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第二隧道确认序号和所述绑定隧道确认序号。
  9. 根据权利要求4-8任一项所述的方法,其特征在于:所述第一网络设备根据发送所述第一部分数据报文中的第三数据报文与收到所述第二网络设备发送的针对所述第三数据报文的确认响应的时间间隔,确定所述第一隧道的往返时延RTT;
    所述第一网络设备根据发送所述第二部分数据报文中的第四数据报文与收到所述第二网络设备发送的针对所述第四数据报文的确认响应的时间间隔,确定所述第二隧道的往返时延RTT。
  10. 根据权利要求2所述的方法,其特征在于:
    当所述绑定隧道重排序缓存的缓存空间的使用情况包括所述绑定隧道重排序缓存中已用的缓存空间的大小时,所述绑定隧道重排序缓存中已用的缓存空间的大小包括所述绑定隧道重排序缓存中的报文数量、所述绑定隧道重排序缓存中报文队列的长度或所述绑定隧道重排序缓存中已用的缓存切片的数量;
    当所述绑定隧道重排序缓存的缓存空间的使用情况包括所述绑定隧道重排序缓存中可用的缓存空间的大小时,所述绑定隧道重排序缓存中可用的缓存空间的大小包括所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度或所述绑定隧道重排序缓存中可用的缓存切片的数量。
  11. 根据权利要求10所述的方法,其特征在于:
    所述确认响应为GRE数据报文,所述GRE数据报文包括绑定重排序缓存大小Bonding Reorder Buffer Size字段,所述Bonding Reorder Buffer Size字段承载所述绑定隧道重排序缓存中的报文数量、所述绑定隧道重排序缓存中报文队列的长度、所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度、所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用的缓存切片的数量;
    或者
    所述确认响应为GRE控制报文,所述GRE控制报文包括属性类型长度值Attribute TLV字段,所述Attribute TLV字段包括类型T字段,长度L字段以及值V字段,所述V字段承载所述绑定隧道重排序缓存中报文的数量、所述绑定隧道重排序缓存中报文队列的长度、所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度、所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用缓存切片的数量。
  12. 根据权利要求2-11任一项所述的方法,其特征在于:所述设定的负载分担策略包括:
    所述第一网络设备确定所述绑定隧道重排序缓存中已用的缓存空间的大小大于等于第一门限值或所述绑定隧道重排序缓存中可用的缓存空间的大小小于等于第二门限值后,则选择所述第一隧道和所述第二隧道中往返时延RTT小的隧道或选择所述第一隧道和所述第二隧道中未完成正确排序的报文数量少的隧道传输所述第一网络设备向所述第二网络设备发送的报文。
  13. 一种负载分担的方法,其特征在于,应用于第二网络设备,所述第二网络设备和第一网络设备之间建立有第一隧道和第二隧道,所述第一隧道和所述第二隧道通过混合端口绑定Hybrid bonding形成绑定隧道,所述第二网络设备包括绑定隧道重排序缓存,所述绑定隧道重排序缓存用于对进入所述绑定隧道重排序缓存中的报文进行排序,所述方法包括:
    所述第二网络设备接收所述第一网络设备发送的多个数据报文;
    所述第二网络设备获取所述绑定隧道重排序缓存的缓存空间的使用情况的信息,
    所述第二网络设备向所述第一网络设备发送确认响应,所述确认响应中包括所述绑定隧道重排序缓存的缓存空间的使用情况的信息,所述信息被所述第一网络设备用于确定所述绑定隧道重排序缓存的缓存空间的使用情况,并根据所述绑定隧道重排序缓存的缓存空间的使用情况以及设定的负载分担策略,对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行负载分担。
  14. 根据权利要求13所述的方法,其特征在于,所述绑定隧道重排序缓存的缓存空间的使用情况包括:所述绑定隧道重排序缓存中已用的缓存空间的大小或所述绑定隧道重排序缓存中可用的缓存空间的大小。
  15. 根据权利要求13或14所述的方法,其特征在于,所述第二网络设备还包括第一隧道重排序缓存和第二隧道重排序缓存,所述第一隧道重排序缓存用于对通过所述第一隧道传输的报文进行排序,所述第二隧道重排序缓存用于对通过所述第二隧道传输的报文进行排序;
    所述第二网络设备接收所述第一网络设备发送的所述多个数据报文,具体包括:
    所述第二网络设备接收所述第一网络设备通过所述第一隧道发送的所述多个数据报文中的第一部分数据报文,所述第一部分数据报文中的每个报文包括该报文的绑定隧道序号以及第一隧道序号,所述第一隧道序号用于表示所述第一部分数据报文中的每个报文在所述第一隧道中的传输顺序,所述第一部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的传输顺序;
    所述第二网络设备接收所述第一网络设备通过所述第二隧道发送的所述多个数据报文中的第二部分数据报文,所述第二部分数据报文中的每个报文包括该报文的绑定隧道序号以及第二隧道序号,所述第二隧道序号用于表示所述第二部分数据报文中的每个报文在所述第二隧道中的传输顺序,所述第二部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的传输顺序;
    所述确认响应包括所述第二网络设备针对所述第一部分数据报文的第一确认响应和所述第二网络设备针对所述第二部分数据报文的第二确认响应;
    所述第二网络设备获得所述第一隧道重排序缓存中离发送所述第一确认响应最近的已经完成正确排序的报文中的第一隧道序号以及所述绑定隧道重排序缓存中离发送所述第一确认响应最近的已经完成正确排序的报文中的绑定隧道序号,根据所述第一隧道序号确定第一隧道确认序号,根据所述绑定隧道重排序缓存中离发送所述第一确认响应最近的已经完成正确排序的报文中的绑定隧道序号确定所述第一确认响应中包括的绑定隧道确认序号,所述第一确认响应中包括的所述绑定隧道重排序缓存的缓存空间的使用情况的信息包括所述第一隧道确认序号和所述第一确认响应中包括的绑定 隧道确认序号;
    所述第二网络设备获得所述第二隧道重排序缓存中离发送所述第二确认响应最近的已经完成正确排序的报文中的第二隧道序号以及所述绑定隧道重排序缓存中离发送所述第二确认响应最近的已经完成正确排序的报文中的绑定隧道序号,根据所述第二隧道序号确定第二隧道确认序号,根据所述绑定隧道重排序缓存中离发送所述第二确认响应最近的已经完成正确排序的报文中的绑定隧道序号确定所述第二确认响应中包括的绑定隧道确认序号,所述第二确认响应中包括的所述绑定隧道重排序缓存的缓存空间的使用情况的信息包括所述第二隧道确认序号和所述第二确认响应中包括的绑定隧道确认序号;
    所述第一隧道确认序号、所述第二隧道确认序号、所述第一确认响应中包括的绑定隧道确认序号以及所述第二确认响应中包括的绑定隧道确认序号被所述第一网络设备用于确定所述绑定隧道重排序缓存的报文数量,并根据所述绑定隧道重排序缓存的报文数量确定所述绑定隧道重排序缓存的缓存空间的使用情况。
  16. 根据权利要求15所述的方法,其特征在于:
    所述第一确认响应为GRE数据报文,采用所述GRE数据报文中包括的确认号Acknowledgment Number字段来承载所述第一隧道确认序号,采用所述GRE数据报文中包括的绑定确认号Bonding Acknowledgment Number字段来承载所述隧道绑定确认序号;或者
    所述第一确认响应为GRE控制报文,采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第一隧道确认序号和所述绑定隧道确认序号。
  17. 根据权利要求15或16所述的方法,其特征在于:
    所述第二确认响应为GRE数据报文,采用所述GRE数据报文中包括的确认号Acknowledgment Number字段来承载所述第二隧道确认序号,采用所述GRE数据报文中包括的绑定确认号Bonding Acknowledgment Number字段来承载所述绑定隧道确认序号;或者
    所述第二确认响应为GRE控制报文,采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第二隧道确认序号和所述绑定隧道确认序号。
  18. 根据权利要求14所述方法,其特征在于:
    当所述绑定隧道重排序缓存的缓存空间的使用情况包括所述绑定隧道重排序缓存中已用的缓存空间的大小时,所述绑定隧道重排序缓存的缓存空间的使用情况的信息包括所述绑定隧道重排序缓存中的报文数量、所述绑定隧道重排序缓存中报文队列的长度或所述绑定隧道重排序缓存中已用的缓存切片的数量;
    当所述绑定隧道重排序缓存的缓存空间的使用情况包括所述绑定隧道重排序缓存中可用的缓存空间的大小时,所述绑定隧道重排序缓存的缓存空间的使用情况的信息包括所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度或所述绑定隧道重排序缓存中可用的缓存切片的数量。
  19. 根据权利要求18所述的方法,其特征在于,
    所述确认响应为通用路由封装GRE数据报文,所述GRE数据报文中包括绑定重排序缓存大小Bonding Reorder Buffer Size字段,所述Bonding Reorder Buffer Size字 段承载所述绑定隧道重排序缓存中的报文数量、所述绑定隧道重排序缓存中报文队列的长度、所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度、所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用的缓存切片的数量;或者
    所述确认响应为GRE控制报文,所述GRE控制报文包括属性类型长度值Attribute TLV字段,所述Attribute TLV字段包括类型T字段,长度L字段以及值V字段,所述V字段承载所述绑定隧道重排序缓存中报文的数量、所述绑定隧道重排序缓存中报文队列的长度、所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度、所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用缓存切片的数量。
  20. 一种第一网络设备,其特征在于,所述第一网络设备和第二网络设备之间建立有第一隧道和第二隧道,所述第一隧道和所述第二隧道通过混合端口绑定Hybrid bonding形成绑定隧道,所述第二网络设备包括绑定隧道重排序缓存,所述绑定隧道重排序缓存用于对进入所述绑定隧道重排序缓存中的报文进行排序,所述第一网络设备包括:
    发送模块,用于向所述第二网络设备发送多个数据报文;
    接收模块,用于接收所述第二网络设备发送的确认响应;
    处理模块,用于根据所述确认响应确定所述绑定隧道重排序缓存的缓存空间的使用情况,并根据所述绑定隧道重排序缓存的缓存空间的使用情况以及设定的负载分担策略,对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行负载分担。
  21. 根据权利要求20所述的第一网络设备,其特征在于,所述绑定隧道重排序缓存的缓存空间的使用情况,具体包括:
    所述绑定隧道重排序缓存中已用的缓存空间的大小或所述绑定隧道重排序缓存中可用的缓存空间的大小。
  22. 根据权利要求20或21所述的第一网络设备,其特征在于,
    所述处理模块,具体用于根据所述确认响应确定所述第一隧道中未完成正确排序的报文数量F1,所述第二隧道中未完成正确排序的报文数量F2以及所述绑定隧道中未完成正确排序的报文数量FB,进而确定所述绑定隧道重排序缓存中的报文数量B,B=FB-F1-F2,根据所述绑定隧道重排序缓存中的报文数量确定所述绑定隧道重排序缓存的缓存空间的使用情况。
  23. 根据权利要求22所述的第一网络设备，其特征在于，所述第二网络设备还包括第一隧道重排序缓存和第二隧道重排序缓存，所述第一隧道重排序缓存用于对通过所述第一隧道传输的报文进行排序，所述第二隧道重排序缓存用于对通过所述第二隧道传输的报文进行排序；
    所述发送模块,具体用于通过所述第一隧道向所述第二网络设备发送所述多个数据报文中的第一部分数据报文,并通过所述第二隧道向所述第二网络设备发送所述多个数据报文中的第二部分数据报文;
    所述接收模块,具体用于接收所述第二网络设备发送的针对所述第一部分数据报 文中的第一数据报文的第一确认响应,并接收所述第二网络设备发送的针对所述第二部分数据报文中的第二数据报文的第二确认响应;
    所述处理模块,具体用于:
    根据所述第一确认响应确定已进入所述第一隧道重排序缓存并完成正确排序的报文数量以及已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量M,并根据所述第二确认响应确定已进入所述第二隧道重排序缓存并完成正确排序的报文数量以及已进入所述绑定隧道重排序缓存并完成正确排序的报文数量N,根据M和N中的较大值以及所述第一网络设备所发送的所述多个数据报文的数量得到所述绑定隧道中未完成正确排序的报文数量FB
    根据向所述第二网络设备发送的所述第一部分数据报文的数量以及根据所述第一确认响应确定的已进入所述第一隧道重排序缓存并完成正确排序的报文数量得到所述第一隧道中未完成正确排序的报文数量F1;以及
    根据向所述第二网络设备发送的所述第二部分数据报文的数量以及根据所述第二确认响应确定的已进入所述第二隧道重排序缓存并完成正确排序的报文数量得到所述第二隧道中未完成正确排序的报文数量F2
  24. 根据权利要求23所述的第一网络设备,其特征在于,所述第一部分数据报文中的每个报文包括该报文的绑定隧道序号以及第一隧道序号,所述第一隧道序号用于表示所述第一部分数据报文中的每个报文在所述第一隧道中的传输顺序,所述第一部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的传输顺序;所述第二部分数据报文中的每个报文包括该报文的绑定隧道序号以及第二隧道序号,所述第二隧道序号用于表示所述第二部分数据报文中的每个报文在所述第二隧道中的传输顺序,所述第二部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的传输顺序;所述第一确认响应包括第一隧道确认序号以及绑定隧道确认序号,所述第二确认响应中包括第二隧道确认序号以及绑定隧道确认序号;
    所述处理模块,具体用于:
    根据所述第一隧道确认序号确定所述已进入所述第一隧道重排序缓存并完成正确排序的报文数量,并根据所述第一确认响应中包括的绑定隧道确认序号确定已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量M;以及
    根据所述第二隧道确认序号确定所述已进入所述第二隧道重排序缓存并完成正确排序的报文数量,并根据所述第二确认响应中包括的绑定隧道确认序号确定已经进入所述绑定隧道重排序缓存并完成正确排序的报文数量N。
  25. 根据权利要求24所述的第一网络设备,其特征在于,所述第一部分数据报文中的每个报文包括用于承载所述第一隧道序号的序号Sequence Number字段和用于承载所述绑定隧道序号的绑定序号Bonding Sequence Number字段;所述第二部分数据报文中的每个报文包括用于承载所述第二隧道序号的序号Sequence Number字段和用于承载所述绑定隧道序号的绑定序号Bonding Sequence Number字段。
  26. 根据权利要求24或25所述的第一网络设备,其特征在于,所述第一确认响应为通用路由封装GRE数据报文,采用所述GRE数据报文中包括的确认号 Acknowledgment Number字段来承载所述第一隧道确认序号,采用所述GRE数据报文中包括的绑定确认号Bonding Acknowledgment Number字段来承载所述绑定隧道确认序号;或所述第一确认响应为GRE控制报文,采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第一隧道确认序号和所述绑定隧道确认序号。
  27. 根据权利要求24-26任一项所述的第一网络设备,其特征在于,所述第二确认响应为GRE数据报文,采用所述GRE数据报文中包括的确认号Acknowledgment Number字段来承载所述第二隧道确认序号,采用所述GRE数据报文中包括的绑定确认号Bonding Acknowledgment Number字段来承载所述隧道绑定确认序号;或所述第二确认响应为GRE控制报文,采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第二隧道确认序号和所述绑定隧道确认序号。
  28. 根据权利要求23-27任一项所述的第一网络设备,所述处理模块,还用于:
    根据发送所述第一部分数据报文中的第三数据报文与收到所述第二网络设备发送的针对所述第三数据报文的确认响应的时间间隔,确定所述第一隧道的往返时延RTT;以及
    根据发送所述第二部分数据报文中的第四数据报文与收到所述第二网络设备发送的针对所述第四数据报文的确认响应的时间间隔,确定所述第二隧道的往返时延RTT。
  29. 根据权利要求21所述的第一网络设备,其特征在于,
    当所述绑定隧道重排序缓存的缓存空间的使用情况包括所述绑定隧道重排序缓存中已用的缓存空间的大小时,所述绑定隧道重排序缓存中已用的缓存空间的大小包括所述绑定隧道重排序缓存中的报文数量、所述绑定隧道重排序缓存中报文队列的长度或所述绑定隧道重排序缓存中已用的缓存切片的数量;
    当所述绑定隧道重排序缓存的缓存空间的使用情况包括所述绑定隧道重排序缓存中可用的缓存空间的大小时,所述绑定隧道重排序缓存中可用的缓存空间的大小包括所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度或所述绑定隧道重排序缓存中可用的缓存切片的数量。
  30. 根据权利要求29所述的第一网络设备,其特征在于,
    所述确认响应为GRE数据报文,所述GRE数据报文包括绑定重排序缓存大小Bonding Reorder Buffer Size字段,所述Bonding Reorder Buffer Size字段承载所述绑定隧道重排序缓存中的报文数量、所述绑定隧道重排序缓存中报文队列的长度、所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度、所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用的缓存切片的数量;或者
    所述确认响应为GRE控制报文,所述GRE控制报文包括属性类型长度值Attribute TLV字段,所述Attribute TLV字段包括类型T字段,长度L字段以及值V字段,所述V字段承载所述绑定隧道重排序缓存中报文的数量、所述绑定隧道重排序缓存中报文队列的长度、所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度、所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用缓存切片的数量。
  31. 根据权利要求21-30任一项所述的第一网络设备,其特征在于,所述设定的负载分担策略包括:
    所述第一网络设备确定所述绑定隧道重排序缓存中已用的缓存空间的大小大于等于第一门限值或所述绑定隧道重排序缓存中可用的缓存空间的大小小于等于第二门限值后,则选择所述第一隧道和所述第二隧道中往返时延RTT小的隧道或选择所述第一隧道和所述第二隧道中未完成正确排序的报文数量少的隧道传输所述第一网络设备向所述第二网络设备发送的报文。
  32. 一种第二网络设备,其特征在于,所述第二网络设备和第一网络设备之间建立有第一隧道和第二隧道,所述第一隧道和所述第二隧道通过混合端口绑定Hybrid bonding形成绑定隧道,所述第二网络设备包括:
    绑定隧道重排序缓存模块,用于对进入所述第二网络设备中包括的绑定隧道重排序缓存中的报文进行排序;
    接收模块,用于接收所述第一网络设备发送的多个数据报文;
    处理模块,用于获取所述绑定隧道重排序缓存的缓存空间的使用情况的信息;
    发送模块,用于向所述第一网络设备发送确认响应,所述确认响应中包括所述绑定隧道重排序缓存的缓存空间的使用情况的信息,所述信息被所述第一网络设备用于确定所述绑定隧道重排序缓存的缓存空间的使用情况,并根据所述绑定隧道重排序缓存的缓存空间的使用情况以及设定的负载分担策略,对所述第一网络设备向所述第二网络设备传输的报文在所述第一隧道和所述第二隧道之间进行负载分担。
  33. 根据权利要求32所述的第二网络设备,其特征在于,所述绑定隧道重排序缓存的缓存空间的使用情况包括:所述绑定隧道重排序缓存中已用的缓存空间的大小或所述绑定隧道重排序缓存中可用的缓存空间的大小。
  34. 根据权利要求32或33所述的第二网络设备,其特征在于,所述第二网络设备进一步包括:
    第一隧道重排序缓存模块,用于对通过所述第一隧道传输进入第一隧道重排序缓存中的报文进行排序;
    第二隧道重排序缓存模块，用于对通过所述第二隧道传输进入第二隧道重排序缓存中的报文进行排序；
    所述接收模块,具体用于:
    接收所述第一网络设备通过所述第一隧道发送的所述多个数据报文中的第一部分数据报文,所述第一部分数据报文中的每个报文包括该报文的绑定隧道序号以及第一隧道序号,所述第一隧道序号用于表示所述第一部分数据报文中的每个报文在所述第一隧道中的传输顺序,所述第一部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的传输顺序;以及
    接收所述第一网络设备通过所述第二隧道发送的所述多个数据报文中的第二部分数据报文,所述第二部分数据报文中的每个报文包括该报文的绑定隧道序号以及第二隧道序号,所述第二隧道序号用于表示所述第二部分数据报文中的每个报文在所述第二隧道中的传输顺序,所述第二部分数据报文中的每个报文包括的该报文的绑定隧道序号用于表示该报文在所述绑定隧道中的传输顺序;
    所述确认响应包括所述第二网络设备针对所述第一部分数据报文的第一确认响应和所述第二网络设备针对所述第二部分数据报文的第二确认响应;
    所述处理模块,具体用于:
    获得所述第二网络设备中的第一隧道重排序缓存中离发送所述第一确认响应最近的已经完成正确排序的报文中的第一隧道序号以及所述绑定隧道重排序缓存中离发送所述第一确认响应最近的已经完成正确排序的报文中的绑定隧道序号,根据所述第一隧道序号确定第一隧道确认序号,根据所述绑定隧道重排序缓存中离发送所述第一确认响应最近的已经完成正确排序的报文中的绑定隧道序号确定所述第一确认响应中包括的绑定隧道确认序号,所述第一确认响应中包括的所述绑定隧道重排序缓存的缓存空间的使用情况的信息包括所述第一隧道确认序号和所述第一确认响应中包括的绑定隧道确认序号,所述第一隧道重排序缓存用于对通过所述第一隧道传输的报文进行排序;以及
    获得所述第二网络设备中的第二隧道重排序缓存中离发送所述第二确认响应最近的已经完成正确排序的报文中的第二隧道序号以及所述绑定隧道重排序缓存中离发送所述第二确认响应最近的已经完成正确排序的报文中的绑定隧道序号,根据所述第二隧道序号确定第二隧道确认序号,根据所述绑定隧道重排序缓存中离发送所述第二确认响应最近的已经完成正确排序的报文中的绑定隧道序号确定所述第二确认响应中包括的绑定隧道确认序号,所述第二确认响应中包括的所述绑定隧道重排序缓存的缓存空间的使用情况的信息包括所述第二隧道确认序号和所述第二确认响应中包括的绑定隧道确认序号,所述第二隧道重排序缓存用于对通过所述第二隧道传输的报文进行排序;
    所述第一隧道确认序号、所述第二隧道确认序号、所述第一确认响应中包括的绑定隧道确认序号以及所述第二确认响应中包括的绑定隧道确认序号被所述第一网络设备用于确定所述绑定隧道重排序缓存的报文数量,并根据所述绑定隧道重排序缓存的报文数量确定所述绑定隧道重排序缓存的缓存空间的使用情况。
  35. 根据权利要求34所述的第二网络设备,其特征在于,
    所述第一确认响应为GRE数据报文,采用所述GRE数据报文中包括的确认号Acknowledgment Number字段来承载所述第一隧道确认序号,采用所述GRE数据报文中包括的绑定确认号Bonding Acknowledgment Number字段来承载所述隧道绑定确认序号;或者
    所述第一确认响应为GRE控制报文,采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第一隧道确认序号和所述绑定隧道确认序号。
  36. 根据权利要求34或35所述的第二网络设备,其特征在于,
    所述第二确认响应为GRE数据报文,采用所述GRE数据报文中包括的确认号Acknowledgment Number字段来承载所述第二隧道确认序号,采用所述GRE数据报文中包括的绑定确认号Bonding Acknowledgment Number字段来承载所述绑定隧道确认序号;或者
    所述第二确认响应为GRE控制报文,采用所述GRE控制报文中包括的属性类型长度值Attribute TLV字段来承载所述第二隧道确认序号和所述绑定隧道确认序号。
  37. 根据权利要求33所述的第二网络设备,其特征在于,
    当所述绑定隧道重排序缓存的缓存空间的使用情况包括所述绑定隧道重排序缓存 中已用的缓存空间的大小时,所述绑定隧道重排序缓存的缓存空间的使用情况的信息包括所述绑定隧道重排序缓存中的报文数量、所述绑定隧道重排序缓存中报文队列的长度或所述绑定隧道重排序缓存中已用的缓存切片的数量;
    当所述绑定隧道重排序缓存的缓存空间的使用情况包括所述绑定隧道重排序缓存中可用的缓存空间的大小时,所述绑定隧道重排序缓存的缓存空间的使用情况的信息包括所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度或所述绑定隧道重排序缓存中可用的缓存切片的数量。
  38. 根据权利要求37所述的第二网络设备,其特征在于,
    所述确认响应为通用路由封装GRE数据报文,所述GRE数据报文中包括绑定重排序缓存大小Bonding Reorder Buffer Size字段,所述Bonding Reorder Buffer Size字段承载所述绑定隧道重排序缓存中的报文数量、所述绑定隧道重排序缓存中报文队列的长度、所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度、所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用的缓存切片的数量;或者
    所述确认响应为GRE控制报文,所述GRE控制报文包括属性类型长度值Attribute TLV字段,所述Attribute TLV字段包括类型T字段,长度L字段以及值V字段,所述V字段承载所述绑定隧道重排序缓存中报文的数量、所述绑定隧道重排序缓存中报文队列的长度、所述绑定隧道重排序缓存中可用的用于缓存报文队列的长度、所述绑定隧道重排序缓存中已用的缓存切片的数量或所述绑定隧道重排序缓存中可用缓存切片的数量。
  39. 一种通信系统,包括权利要求20-31任一项所述的第一网络设备以及权利要求32-38任一项所述的第二网络设备。
PCT/CN2017/109132 2017-01-20 2017-11-02 一种报负载分担方法及网络设备 WO2018133496A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP17892794.3A EP3562108B1 (en) 2017-01-20 2017-11-02 Load sharing between hybrid tunnels
US16/517,224 US10999210B2 (en) 2017-01-20 2019-07-19 Load sharing method and network device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710048343.9 2017-01-20
CN201710048343.9A CN108337182B (zh) 2017-01-20 2017-01-20 一种报负载分担方法及网络设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/517,224 Continuation US10999210B2 (en) 2017-01-20 2019-07-19 Load sharing method and network device

Publications (1)

Publication Number Publication Date
WO2018133496A1 true WO2018133496A1 (zh) 2018-07-26

Family

ID=62907707

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/109132 WO2018133496A1 (zh) 2017-01-20 2017-11-02 一种报负载分担方法及网络设备

Country Status (4)

Country Link
US (1) US10999210B2 (zh)
EP (1) EP3562108B1 (zh)
CN (2) CN108337182B (zh)
WO (1) WO2018133496A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11936566B2 (en) * 2020-12-18 2024-03-19 Dish Wireless L.L.C. Intelligent router bonding 5G telephony and digital subscriber line services
WO2023096540A1 (en) * 2021-11-26 2023-06-01 Telefonaktiebolaget Lm Ericsson (Publ) Ue, radio network node, and methods performed in a wireless communication network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102868629A (zh) * 2012-08-30 2013-01-09 汉柏科技有限公司 利用ipsec实现负载分担的方法及系统
WO2014067711A1 (en) * 2012-11-02 2014-05-08 Deutsche Telekom Ag Method and system for network and service controlled hybrid access
CN104158761A (zh) * 2014-08-05 2014-11-19 华为技术有限公司 一种分流流量的方法和装置
CN104158752A (zh) * 2014-09-01 2014-11-19 华为技术有限公司 业务流量的处理方法和装置
CN105743760A (zh) * 2014-12-12 2016-07-06 华为技术有限公司 一种流量切换方法和装置
CN105933232A (zh) * 2016-03-29 2016-09-07 东北大学 支持多业务数据传输需求的多径传输控制终端及方法

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7139276B1 (en) * 2001-02-27 2006-11-21 Cisco Technology, Inc. Load sharing between L2TP tunnels
IL151796A0 (en) * 2002-09-18 2003-04-10 Lightscape Networks Ltd Method for protection of ethernet traffic in optical ring networks
US7673048B1 (en) * 2003-02-24 2010-03-02 Cisco Technology, Inc. Methods and apparatus for establishing a computerized device tunnel connection
CN100420220C (zh) * 2006-01-09 2008-09-17 华为技术有限公司 二层隧道协议网络服务器及其隧道建立方法
CN101383755B (zh) * 2007-09-04 2012-10-17 华为技术有限公司 代理移动IPv6切换方法及相关网络实体
FR2939994B1 (fr) * 2008-12-12 2010-12-17 Canon Kk Procede de transmission d'un flux de donnees multi-canal sur un tunnel multi-transport, produit programme d'ordinateur, moyen de stockage et tetes de tunnel correspondantes
CN101741740B (zh) * 2009-12-15 2012-02-08 杭州华三通信技术有限公司 一种负载平衡的方法、系统和设备
US8831658B2 (en) * 2010-11-05 2014-09-09 Qualcomm Incorporated Controlling application access to a network
CN103297358B (zh) * 2012-02-27 2016-12-14 北京东土科技股份有限公司 一种智能电网跨广域网goose报文传输系统及方法
CN102833161B (zh) * 2012-08-21 2018-01-30 中兴通讯股份有限公司 隧道负荷分担方法及装置
CN103229466B (zh) * 2012-12-27 2016-03-09 华为技术有限公司 一种数据包传输的方法及装置
US20140321376A1 (en) * 2013-04-29 2014-10-30 Qualcomm Incorporated Lte-wlan centralized downlink scheduler
CN104883687B (zh) * 2014-02-28 2019-02-26 华为技术有限公司 无线局域网隧道建立方法、装置及接入网系统
US9093088B1 (en) * 2014-05-14 2015-07-28 International Business Machines Corporation Load balancing and space efficient big data tape management
US9699800B2 (en) * 2014-10-23 2017-07-04 Intel IP Corporation Systems, methods, and appartatuses for bearer splitting in multi-radio HetNet
CN106304401B (zh) * 2015-05-22 2020-06-02 华为技术有限公司 一种公共wlan架构下的数据隧道建立方法和ap
US9692709B2 (en) * 2015-06-04 2017-06-27 Oracle International Corporation Playout buffering of encapsulated media
US20160380884A1 (en) * 2015-06-26 2016-12-29 Futurewei Technologies, Inc. Flow-Based Distribution in Hybrid Access Networks
CN105657748B (zh) * 2016-03-16 2020-06-26 华为技术有限公司 基于隧道绑定的通信方法和网络设备
CN105703999B (zh) * 2016-03-29 2019-06-11 华为技术有限公司 建立gre隧道的方法和设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102868629A (zh) * 2012-08-30 2013-01-09 汉柏科技有限公司 利用ipsec实现负载分担的方法及系统
WO2014067711A1 (en) * 2012-11-02 2014-05-08 Deutsche Telekom Ag Method and system for network and service controlled hybrid access
CN104158761A (zh) * 2014-08-05 2014-11-19 华为技术有限公司 一种分流流量的方法和装置
CN104158752A (zh) * 2014-09-01 2014-11-19 华为技术有限公司 业务流量的处理方法和装置
CN105743760A (zh) * 2014-12-12 2016-07-06 华为技术有限公司 一种流量切换方法和装置
CN105933232A (zh) * 2016-03-29 2016-09-07 东北大学 支持多业务数据传输需求的多径传输控制终端及方法

Also Published As

Publication number Publication date
EP3562108A1 (en) 2019-10-30
CN108337182B (zh) 2020-06-02
EP3562108B1 (en) 2023-01-04
US10999210B2 (en) 2021-05-04
US20190342227A1 (en) 2019-11-07
EP3562108A4 (en) 2019-11-13
CN111740919B (zh) 2023-08-22
CN108337182A (zh) 2018-07-27
CN111740919A (zh) 2020-10-02

Similar Documents

Publication Publication Date Title
WO2019140556A1 (zh) 一种报文传输的方法及装置
CN109412964B (zh) 报文控制方法及网络装置
WO2017215392A1 (zh) 一种网络拥塞控制方法、设备及系统
WO2017050216A1 (zh) 一种报文传输方法及用户设备
WO2018082382A1 (zh) 一种混合接入网络中处理报文的方法及网络设备
WO2021036962A1 (zh) 一种业务报文传输的方法及设备
US20140226663A1 (en) Method, device, and system to prioritize encapsulating packets in a plurality of logical network connections
WO2019001240A1 (zh) 一种报文传输的方法和网络设备
WO2016062106A1 (zh) 报文处理方法、装置及系统
WO2020063298A1 (zh) 处理tcp报文的方法、toe组件以及网络设备
CN110944358B (zh) 数据传输方法和设备
WO2019179157A1 (zh) 一种数据流量处理方法及相关网络设备
CN107770085B (zh) 一种网络负载均衡方法、设备及系统
KR20190112804A (ko) 패킷 처리 방법 및 장치
CN113076280B (zh) 一种数据传输方法及相关设备
US20220052951A1 (en) Congestion Information Collection Method and System, Related Device, and Computer Storage Medium
WO2022143902A1 (zh) 数据包传输方法及相关设备
WO2018133496A1 (zh) 一种报负载分担方法及网络设备
WO2020169039A1 (zh) 一种策略管理的方法及装置
CN113938431B (zh) 突发数据包传输方法、装置和电子设备
WO2022068633A1 (zh) 一种切片帧的发送方法及装置
CN110858794B (zh) 多频段传输方法及装置
CN110611548B (zh) 数据传输方法、设备、发送设备、接收设备及存储介质
CN104022961A (zh) 一种数据传输方法、装置及系统
CN105307207B (zh) 无线联网装置中的数据传输的方法和无线联网装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17892794

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017892794

Country of ref document: EP

Effective date: 20190726