WO2016138786A1 - Method, sending apparatus and system for sending Transmission Control Protocol (TCP) data packets - Google Patents

Method, sending apparatus and system for sending Transmission Control Protocol (TCP) data packets

Info

Publication number
WO2016138786A1
WO2016138786A1 PCT/CN2015/099278 CN2015099278W WO2016138786A1 WO 2016138786 A1 WO2016138786 A1 WO 2016138786A1 CN 2015099278 W CN2015099278 W CN 2015099278W WO 2016138786 A1 WO2016138786 A1 WO 2016138786A1
Authority
WO
WIPO (PCT)
Prior art keywords
congestion window
tcp
trip delay
throughput rate
packet
Prior art date
Application number
PCT/CN2015/099278
Other languages
English (en)
French (fr)
Inventor
朱夏
李峰
程剑
孔维庆
陈刚
郭跃栋
任广涛
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to JP2017546188A priority Critical patent/JP6526825B2/ja
Priority to EP15883836.7A priority patent/EP3255847B1/en
Priority to KR1020177027525A priority patent/KR102030574B1/ko
Publication of WO2016138786A1 publication Critical patent/WO2016138786A1/zh
Priority to US15/694,581 priority patent/US10367922B2/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0852 Delays
    • H04L 43/0864 Round trip delays
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/19 Flow control; Congestion control at layers above the network layer
    • H04L 47/193 Flow control; Congestion control at layers above the network layer at the transport layer, e.g. TCP related
    • H04L 47/27 Evaluation or update of window size, e.g. using information derived from acknowledged [ACK] packets
    • H04L 47/28 Flow control; Congestion control in relation to timing considerations
    • H04L 47/283 Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L 69/163 In-band adaptation of TCP data exchange; In-band control procedures
    • H04L 69/22 Parsing or analysis of headers

Definitions

  • the embodiments of the present invention relate to the field of network technologies, and in particular, to a method, a sending apparatus, and a system for sending Transmission Control Protocol (TCP) data packets.
  • Network congestion means that too many packets are transmitted in the network while the resources of the storage and forwarding nodes in the network are limited, resulting in a decrease in network transmission performance.
  • When network congestion occurs, data loss, increased delay, and decreased throughput are common.
  • When the network is severely congested, congestion collapse occurs.
  • With the popularity of high-throughput applications such as online video and audio, the amount of data that needs to be transmitted over the network has increased dramatically, requiring the network to maintain high throughput. If the congestion control measures that coordinate network resources are not reasonable, high-throughput applications will be seriously affected even when the network bandwidth is sufficient.
  • TCP: Transmission Control Protocol
  • congestion control includes congestion avoidance and congestion recovery.
  • congestion avoidance is a preventive mechanism to prevent the network from entering the congestion state and keep the network working under high throughput and low latency.
  • Congestion recovery is a recovery mechanism: once congestion has occurred, it restores the network from the congested state back to the high-throughput, low-latency state.
  • TCP congestion control is to adjust the size of the congestion window (CWND) to control the throughput of TCP packets.
  • the size of the congestion window is the maximum number of TCP packets that can be sent in a Round Trip Time (RTT).
  • RTT: Round Trip Time
  • the larger the congestion window, the faster the data transmission rate and the higher the throughput rate, but the more likely network congestion becomes.
  • the smaller the congestion window, the slower the data transmission rate and the lower the throughput rate.
  • MSS: Maximum Segment Size
  • the goal of TCP congestion control is to find the optimal congestion window value, so that the throughput rate is maximized without causing congestion.
  • there are many mature window adjustment algorithms, including the Reno algorithm and the CUBIC algorithm.
  • the Reno algorithm is the most widely used and mature TCP congestion control algorithm.
  • the slow start, congestion avoidance, fast retransmission, and fast recovery mechanisms included in the algorithm are the basis of many existing algorithms.
  • AIMD: Additive Increase Multiplicative Decrease
  • with AIMD, the shrinkage of the congestion window caused by the loss of a TCP packet takes a long time to recover from, so bandwidth utilization is not high; the drawback is more obvious when the congestion window is large.
  • in the Reno algorithm, the congestion window is reduced to half when a TCP packet is detected to be lost.
  • in each subsequent round of data transmission, the congestion window is increased by 1 MSS (that is, the growth step is 1 MSS).
  • the congestion window therefore takes a long time to recover from half its size.
  • for example, when the congestion window value is about 863 MSS, the Reno algorithm needs about 431 rounds of RTT to recover the congestion window from half its size after a TCP packet is lost, which takes about 43.1 seconds (with an RTT on the order of 100 ms).
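  • as a worked illustration of this example (not part of the original text), the sketch below assumes a 100 Mbit/s bottleneck bandwidth, a 100 ms RTT, and a 1448-byte MSS, which together yield roughly the 863 MSS window, 431 recovery rounds, and 43.1 s recovery time quoted above:

```c
#include <stdio.h>

/* Illustrative calculation of the Reno recovery example above.
 * Assumed values (not stated explicitly in the text): 100 Mbit/s
 * bandwidth, 100 ms round-trip delay, 1448-byte MSS. */
int main(void) {
    double bandwidth_bps = 100e6;   /* 100 Mbit/s */
    double rtt_s         = 0.1;     /* 100 ms round-trip delay */
    double mss_bytes     = 1448.0;  /* typical MSS */

    /* Window needed to fill the path: bandwidth-delay product / MSS */
    double cwnd_mss = bandwidth_bps / 8.0 * rtt_s / mss_bytes;   /* ~863 MSS */

    /* Reno halves the window on loss and regains 1 MSS per RTT */
    double rounds   = cwnd_mss / 2.0;   /* ~431 rounds of RTT */
    double recovery = rounds * rtt_s;   /* ~43.1 seconds */

    printf("cwnd ~= %.0f MSS, recovery ~= %.0f RTTs ~= %.1f s\n",
           cwnd_mss, rounds, recovery);
    return 0;
}
```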
  • the CUBIC algorithm is improved in terms of congestion window growth.
  • the CUBIC algorithm records the congestion window when the TCP packet is lost.
  • when the congestion window is well below the recorded value, the window is increased in an exponential manner similar to slow start;
  • as the congestion window approaches the recorded value, the growth step of the congestion window is greatly reduced;
  • after the congestion window exceeds the recorded value, the growth step of the congestion window is re-adjusted to approximate rapid, near-exponential growth. If the good network condition holds only temporarily, the rapid growth of the congestion window after that period will inevitably cause more TCP packets to be lost when the network becomes congested again, causing the network condition to deteriorate further.
  • the above two algorithms share the same disadvantage in performing TCP congestion control: the congestion window grows according to a preset, fixed rule, cannot effectively utilize the network bandwidth that is currently available, and when adjusting the congestion window may even apply an adjustment strategy that is diametrically opposed to the actual network conditions, failing to meet the application's throughput requirements.
  • an embodiment of the present invention provides a method, a sending apparatus, and a system for sending Transmission Control Protocol (TCP) data packets, in which a congestion window is adjusted according to the throughput rate expected by a service and the round-trip delay of sending the TCP packets, and the sending of the TCP packets is controlled with the adjusted congestion window, so that the throughput expected by the service can be satisfied as much as possible.
  • an embodiment of the present invention provides a method for transmitting a TCP packet of a transmission control protocol, where the method includes:
  • the second round-trip delay is the round-trip delay at which the congestion window determined according to the first algorithm and the congestion window determined according to the second algorithm are equal in size, where the first algorithm determines a growth stride of the congestion window according to the first round-trip delay, and the second algorithm determines a growth stride of the congestion window according to the first round-trip delay and a target throughput rate, the target throughput rate being the throughput rate expected by the service corresponding to the TCP packet;
  • if the first round-trip delay is greater than the second round-trip delay, the congestion window determined by the first algorithm is used as the first congestion window;
  • if the first round-trip delay is less than or equal to the second round-trip delay, the congestion window determined by the second algorithm is used as the first congestion window;
  • the TCP packet is transmitted in the first congestion window.
  • a growth stride of the congestion window determined in the second algorithm is positively correlated with the target throughput rate and negatively related to the first round-trip delay.
  • the growth step of the congestion window determined in the first algorithm is inversely related to the first round trip delay.
  • the target throughput rate is determined according to a bit rate of the service that is parsed from a packet of the TCP packet.
  • an optional algorithm is that the target throughput rate is equal to the bit rate of the service multiplied by an expansion factor, where the expansion factor is greater than one.
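  • as a minimal sketch of this relationship (the 1.2 expansion factor below is an illustrative assumption; the text only requires a factor greater than one):

```c
/* Target throughput derived from the service bit rate parsed out of the
 * TCP payload. The 1.2 expansion factor is an illustrative choice; the
 * text only requires a factor greater than 1. */
double target_throughput_bps(double service_bitrate_bps) {
    const double expansion_factor = 1.2;
    return service_bitrate_bps * expansion_factor;
}
```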
  • the method further includes:
  • if a packet loss occurs during the sending of the TCP packet, the congestion window of the TCP packet transmission is adjusted to a second congestion window according to a third algorithm, where the second congestion window is determined according to the third round-trip delay measured when the TCP packet is lost, and the step by which the congestion window of the TCP packet transmission is reduced to the second congestion window is negatively correlated with the third round-trip delay;
  • the TCP packet is transmitted in the second congestion window.
  • adjusting, by the third algorithm, the congestion window of the TCP packet transmission to a second congestion window specifically includes:
  • if the third round-trip delay is equal to the lower limit of a delay interval, the congestion window in use when the TCP packet was lost is used as the second congestion window; if the third round-trip delay is equal to the upper limit of the delay interval, a preset congestion window is used as the second congestion window, where the upper limit of the delay interval is the retransmission timeout (RTO) of the TCP connection in the network, and the lower limit of the delay interval is the round-trip delay when the network is lightly loaded.
  • the congestion window determined by the first algorithm is used as the first congestion window, and specifically includes:
  • if the first round-trip delay is equal to the round-trip delay when the network is lightly loaded, the growth step of the congestion window is taken as a fast recovery value to obtain the first congestion window, where the fast recovery value is of the same order of magnitude as the slow start value;
  • if the first round-trip delay is equal to the retransmission timeout (RTO) of the TCP connection in the network, the growth step of the congestion window is taken as one maximum segment size (MSS) to obtain the first congestion window;
  • if the first round-trip delay lies between the round-trip delay when the network is lightly loaded and the RTO, the growth step of the congestion window is taken as a value between 1 MSS and the fast recovery value that is negatively correlated with the first round-trip delay, to obtain the first congestion window.
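  • one way to realise the growth stride of the first algorithm is sketched below; the text fixes only the two endpoints (the fast recovery value at the light-load RTT, 1 MSS at the RTO) and the negative correlation in between, so the linear interpolation used here is an assumption:

```c
/* First algorithm: growth stride of the congestion window (in MSS),
 * negatively correlated with the first round-trip delay rtt1.
 *   rtt1 == light-load RTT -> stride = fast_recovery (same order as slow start)
 *   rtt1 == RTO            -> stride = 1 MSS
 * Linear interpolation between the endpoints is an assumption. */
double growth_stride_alg1(double rtt1, double rtt_light, double rto,
                          double fast_recovery_mss) {
    if (rtt1 <= rtt_light) return fast_recovery_mss;
    if (rtt1 >= rto)       return 1.0;
    double frac = (rto - rtt1) / (rto - rtt_light);  /* 1 at light load, 0 at RTO */
    return 1.0 + frac * (fast_recovery_mss - 1.0);
}
```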
  • the congestion window determined by the second algorithm being used as the first congestion window specifically includes: calculating a target window according to the target throughput rate and the first round-trip delay, and taking the growth step of the congestion window as the difference between the target window and the current congestion window to determine the first congestion window.
  • the method further includes:
  • if the ratio of the actual throughput rate to the target throughput rate is greater than a first threshold, and the difference between the fourth round-trip delay of sending the TCP packet in the network and the round-trip delay detected when the network is lightly loaded is less than a second threshold, the target throughput rate is increased;
  • if the ratio of the actual throughput rate to the target throughput rate is less than a third threshold, and the difference between the fourth round-trip delay and the round-trip delay detected when the network is lightly loaded is greater than a fourth threshold, the target throughput rate is decreased.
  • the target throughput rate is delivered to the TCP protocol stack by using a target throughput parameter.
  • an embodiment of the present invention provides a device for transmitting a TCP packet of a transmission control protocol, where the device includes:
  • a delay determining unit configured to acquire a first round-trip delay of sending a TCP packet in the network and to determine a second round-trip delay, where the second round-trip delay is the round-trip delay at which the congestion window determined according to the first algorithm and the congestion window determined according to the second algorithm are equal in size, where the first algorithm determines a growth step of the congestion window according to the first round-trip delay, and the second algorithm determines a growth step of the congestion window according to the first round-trip delay and a target throughput rate, the target throughput rate being the throughput rate expected by the service corresponding to the TCP data packet;
  • a window adjustment unit configured to: if the first round-trip delay is greater than the second round-trip delay, use the congestion window determined by the first algorithm as the first congestion window; if the first round-trip delay is less than or equal to the second round-trip delay, use the congestion window determined by the second algorithm as the first congestion window;
  • a data packet sending unit configured to send the TCP data packet in the first congestion window.
  • a growth stride of the congestion window determined in the second algorithm is positively correlated with the target throughput rate and negatively related to the first round-trip delay.
  • the growth step of the congestion window determined in the first algorithm is inversely related to the first round trip delay.
  • the target throughput rate is determined according to a bit rate of the service that is parsed from a packet of the TCP packet.
  • an optional algorithm is that the target throughput rate is equal to the bit rate of the service multiplied by an expansion factor, where the expansion factor is greater than one.
  • the window adjusting unit is further configured to: if a packet loss occurs during the sending of the TCP packet, adjust the congestion window of the TCP packet transmission to a second congestion window according to a third algorithm, where the second congestion window is determined according to the third round-trip delay when the TCP packet is lost, and the step by which the congestion window of the TCP packet transmission is reduced to the second congestion window is negatively correlated with the third round-trip delay;
  • the data packet sending unit is further configured to send the TCP data packet in the second congestion window.
  • the window adjusting unit being further configured to adjust the congestion window of the TCP packet transmission to a second congestion window according to a third algorithm specifically includes:
  • the window adjusting unit is configured to: if the third round-trip delay is equal to the lower limit of a delay interval, use the congestion window in use when the TCP packet was lost as the second congestion window; if the third round-trip delay is equal to the upper limit of the delay interval, use a preset congestion window as the second congestion window, where the upper limit of the delay interval is the retransmission timeout (RTO) of the TCP connection in the network, and the lower limit of the delay interval is the round-trip delay when the network is lightly loaded.
  • the window adjustment unit being configured to use the congestion window determined by the first algorithm as the first congestion window specifically includes:
  • the window adjusting unit is configured to: if the first round-trip delay is equal to the round-trip delay when the network is lightly loaded, take the growth step of the congestion window as a fast recovery value to obtain the first congestion window, where the fast recovery value is of the same order of magnitude as the slow start value;
  • the window adjusting unit is configured to: if the first round-trip delay is equal to the retransmission timeout (RTO) of the TCP connection in the network, take the growth step of the congestion window as one maximum segment size (MSS) to obtain the first congestion window;
  • the window adjusting unit is configured to: if the first round-trip delay lies in the interval between the round-trip delay when the network is lightly loaded and the RTO of the TCP connection in the network, take the growth step of the congestion window as a value between 1 MSS and the fast recovery value that is negatively correlated with the first round-trip delay, to obtain the first congestion window.
  • the window adjustment unit being configured to use the congestion window determined by the second algorithm as the first congestion window specifically includes:
  • the window adjusting unit is configured to calculate a target window according to the target throughput rate and the first round-trip delay, take the growth step of the congestion window as the difference between the target window and the current congestion window of the TCP connection in the network, and thereby determine the first congestion window.
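  • a minimal sketch of the second algorithm as described here; the conversion of the target throughput rate into a window measured in MSS is an assumption about units, and the function names are illustrative:

```c
/* Second algorithm: grow the congestion window straight to the window
 * implied by the target throughput rate and the first round-trip delay.
 *   target window (MSS) = target_throughput * rtt1 / MSS
 *   growth stride       = target window - current window */
double cwnd_alg2(double cur_cwnd_mss, double target_throughput_bps,
                 double rtt1_s, double mss_bytes) {
    double target_window = target_throughput_bps / 8.0 * rtt1_s / mss_bytes;
    double stride = target_window - cur_cwnd_mss;  /* may be many MSS at once */
    return cur_cwnd_mss + stride;                  /* the first congestion window */
}
```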
  • the device further includes:
  • a throughput detection unit configured to detect an actual throughput rate of sending the TCP packet in the network
  • a target throughput adjustment unit configured to: when the ratio of the actual throughput rate to the target throughput rate is greater than a first threshold and the difference between the fourth round-trip delay of sending the TCP packet in the network and the round-trip delay detected when the network is lightly loaded is less than a second threshold, increase the target throughput rate; and when the ratio of the actual throughput rate to the target throughput rate is less than a third threshold and the difference between the fourth round-trip delay and the round-trip delay detected when the network is lightly loaded is greater than a fourth threshold, decrease the target throughput rate.
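  • the adjustment rule of the target throughput adjustment unit can be sketched as below; the concrete threshold values and the 10% adjustment step are illustrative assumptions, since the text only defines the four thresholds abstractly:

```c
/* Adjust the target throughput rate from the measured behaviour.
 * ratio = actual_throughput / target_throughput
 * drtt  = rtt4 - rtt_light (extra delay over the light-load RTT)
 * Threshold values and the 10% step are illustrative assumptions. */
double adjust_target_throughput(double target, double actual,
                                double rtt4, double rtt_light) {
    const double t1 = 0.95, t3 = 0.80;    /* ratio thresholds */
    const double t2 = 0.010, t4 = 0.050;  /* RTT-difference thresholds, seconds */
    double ratio = actual / target;
    double drtt  = rtt4 - rtt_light;

    if (ratio > t1 && drtt < t2)
        return target * 1.10;   /* network keeps up: raise the target */
    if (ratio < t3 && drtt > t4)
        return target * 0.90;   /* network struggling: lower the target */
    return target;
}
```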
  • the target throughput rate is delivered to the TCP protocol stack by using a target throughput parameter.
  • an embodiment of the present invention provides a transmission control protocol TCP packet sending apparatus, where the sending apparatus includes a processor, a memory, and a network interface, and the processor is connected to the memory and the network interface respectively.
  • the memory is configured to store computer execution instructions; when the sending apparatus runs, the processor reads the computer execution instructions stored in the memory in order to execute the TCP packet sending method provided by the first aspect or any possible implementation manner of the first aspect.
  • an embodiment of the present invention provides a system, where the system includes a server and a terminal, and the server is connected to the terminal through a network; the server is the TCP packet sending apparatus provided by the second aspect, any possible implementation manner of the second aspect, or the third aspect, and sends the TCP data packets to the terminal through the network.
  • an embodiment of the present invention provides a system, where the system includes a server, a first proxy device, and a terminal, where the first proxy device is respectively connected to the server and the terminal;
  • the server is configured to send TCP data packets to the terminal through the first proxy device acting as its proxy;
  • the first proxy device is the TCP packet sending apparatus provided by the second aspect, any possible implementation manner of the second aspect, or the third aspect, and is configured to receive the TCP packets sent by the server to the terminal and to send the TCP packets to the terminal on behalf of the server;
  • the terminal is configured to pass the target throughput rate, by using a target throughput rate parameter, from the TCP protocol stack of the terminal to the TCP protocol stack of the first proxy device.
  • an embodiment of the present invention provides a system, where the system includes a server, a first proxy device, a second proxy device, and a terminal, where the first proxy device is respectively connected to a server and a second proxy device;
  • the server is configured to send TCP data packets to the terminal through the first proxy device acting as its proxy;
  • the first proxy device is the TCP packet sending apparatus provided by the second aspect, any possible implementation manner of the second aspect, or the third aspect, and is configured to receive the TCP packets sent by the server to the terminal and to send the TCP data packets to the second proxy device on behalf of the server;
  • the second proxy device is configured to receive the TCP data packets and forward them to the terminal;
  • the terminal is configured to pass the target throughput rate, by using a target throughput rate parameter, from the TCP protocol stack of the terminal to the TCP protocol stack of the second proxy device;
  • the second proxy device is further configured to pass the target throughput rate, by using the target throughput rate parameter, from the TCP protocol stack of the second proxy device to the TCP protocol stack of the first proxy device.
  • the first congestion window is determined according to the target throughput rate and the first round-trip delay reflecting the current network condition, the current congestion window is updated with the first congestion window, and the sending of the TCP packets is controlled by the first congestion window; under the current network condition, this can meet the throughput expected by the service as much as possible. Following the target throughput rate and the network condition, the congestion window is increased directly from the current congestion window to the first congestion window, which can better meet the throughput requirements of the service and more effectively utilize the network bandwidth.
  • FIG. 1A is a schematic diagram of a system logic structure of an application scenario of the TCP packet sending method;
  • FIG. 1B is a schematic diagram of still another system logic structure of an application scenario of the TCP packet sending method;
  • FIG. 1C is a schematic diagram of another system logic structure of an application scenario of the TCP packet sending method;
  • FIG. 2 is a flowchart of the TCP packet sending method;
  • FIG. 3 is a workflow diagram of the TCP packet sending method in the case of packet loss;
  • FIG. 4 is an optional optimization flowchart of the TCP packet sending method shown in FIG. 2;
  • FIG. 5 is a flowchart of updating the target throughput rate in the TCP packet sending method;
  • FIG. 6 is a schematic diagram showing the logical structure of a TCP packet sending apparatus 600;
  • FIG. 7 is a schematic diagram showing an optimized logical structure of the TCP packet sending apparatus 600 shown in FIG. 6;
  • FIG. 8 is a schematic diagram showing a hardware structure of a TCP packet sending apparatus 800 according to an embodiment of the present invention.
  • FIG. 1A is a schematic diagram of a system logic structure of an application scenario of a method for transmitting a TCP packet of a transmission control protocol according to an embodiment of the present invention. For convenience of description, only parts related to the embodiment of the present invention are provided.
  • the system 100 includes a server 101, a terminal 102, and a network 103; the server 101 is interconnected with the terminal 102 via the network 103, through which TCP packet interaction between the server 101 and the terminal 102 is carried out.
  • TCP/IP: Transmission Control Protocol/Internet Protocol
  • the network 103 may include forwarding devices that forward TCP data packets, such as switches and routers, by which the TCP data packets exchanged between the server 101 and the terminal 102 are forwarded.
  • the server according to the embodiment of the present invention is a device with data processing functions built from electronic components such as integrated circuits, transistors, and electron tubes; such a device can run software consisting of program instructions to implement data processing, control of other devices, and so on. If the device has an operating system installed, has a network card installed, and its network configuration is completed, it can access a network based on the TCP/IP protocol and exchange TCP data packets with other electronic devices (such as terminals) to achieve data interaction.
  • the terminal in the embodiment of the present invention is likewise an electronic device with data processing functions; the terminal can access a network based on the TCP/IP protocol and exchange TCP packets with other electronic devices (for example, the server) to implement data interaction.
  • optionally, the server 101 communicates directly with the terminal 102, and the TCP data packets exchanged between the server 101 and the terminal 102 via the network 103 do not need to be forwarded by a forwarding device (such as a router) in the network 103.
  • the server 101 can perform peer-to-peer TCP packet interaction with the terminal 102.
  • the server 101 can transmit a TCP packet to the terminal 102, and correspondingly, the terminal 102 can also transmit a TCP packet to the server 101.
  • the system 100 is provided in FIG. 1A.
  • the server 101 performs a master-slave TCP packet exchange with the terminal 102 through the network 103.
  • in this master-slave communication, the server 101 serves as the server side, and the terminal 102 functions as the client corresponding to the server.
  • the server 101 can send a TCP packet to the terminal 102.
  • for example, when the terminal 102 downloads an audio/video file from the server 101, the server 101 transmits the TCP packets carrying the audio/video file to the terminal 102;
  • correspondingly, the terminal 102 can also transmit TCP packets to the server 101; for example, when the terminal 102 uploads a text file to the server 101, the terminal 102 transmits the TCP packets carrying the text file to the server 101.
  • FIG. 1B is a schematic diagram of another system logical structure of an application scenario of a method for transmitting a TCP packet of a transmission control protocol according to an embodiment of the present invention.
  • the system 200 includes a server 201, a terminal 202, a network 203, and a first proxy device 204.
  • if the TCP protocol stack of the server 201 does not support being modified, the first proxy device 204 is added.
  • the first proxy device 204 supports modification of its TCP protocol stack, and the first proxy device 204 exchanges the TCP packets with the terminal 202 on behalf of the server 201.
  • the TCP packet interaction between the first proxy device 204 and the terminal 202 in FIG. 1B is similar to the TCP packet interaction between the server 101 and the terminal 102 in FIG. 1A; of course, the first proxy device 204 may also be added for other reasons, for example, to reduce the load imposed on the server 201 by sending TCP packets. Preferably, the first proxy device 204 is implemented using a proxy server; preferably, the first proxy device 204 is a service board on a router, and its function is implemented by logically programming the service board.
  • FIG. 1C is a schematic diagram of another system logical structure of an application scenario of a method for transmitting a TCP packet of a transmission control protocol according to an embodiment of the present invention. For convenience of description, only parts related to the embodiment of the present invention are provided.
  • the system 300 includes a server 301, a terminal 302, a network 303, and a first proxy device 304.
  • if the TCP protocol stack of the server 301 does not support being modified, a first proxy device 304 is added; the first proxy device 304 supports modification of its TCP protocol stack, and the first proxy device 304 performs the TCP packet interaction on behalf of the server 301.
  • a second proxy device 305 can also be added, which performs the TCP packet interaction on behalf of the terminal 302, so that the TCP packet interaction takes place between the first proxy device 304 and the second proxy device 305; the TCP packet interaction between the first proxy device 304 and the second proxy device 305 in FIG. 1C is similar to the TCP packet interaction between the server 101 and the terminal 102 in FIG. 1A. Of course, the first proxy device 304 and the second proxy device 305 may also be added simultaneously for other reasons, as described above, which are not repeated here.
  • one reason for adding the second proxy device 305 is that the TCP protocol stack of the terminal 302 does not support modification; the second proxy device 305 supports modification of its TCP protocol stack and performs the TCP packet interaction on behalf of the terminal 302, so that the TCP packet interaction takes place between the first proxy device 304 and the second proxy device 305.
  • the first proxy device 304 is implemented by using a proxy server; preferably, the first proxy device 304 is a service card on the router, and the function is implemented by logically programming the service card.
  • the second proxy device 305 is implemented by using a proxy server; preferably, the second proxy device 305 is a service card on the router, and the function is implemented by logically programming the service card.
  • in order to satisfy the throughput expected by the service while the server sends TCP data packets to the terminal over the network, an embodiment of the present invention provides a TCP packet sending method.
  • in FIG. 1A, the method provided by the embodiment of the present invention is applied to the server 101; in FIG. 1B, the method provided by the embodiment of the present invention is applied to the first proxy device 204; in FIG. 1C, the method provided by the embodiment of the present invention is applied to the first proxy device 304.
  • FIG. 2 shows the basic implementation flow of the method, but for convenience of description, FIG. 2 only shows the parts related to the embodiment of the present invention.
  • the method for sending the TCP packet shown in FIG. 2 includes step A201, step A202, step A203, step A204, and step A205.
  • Step A201: Acquire a first round-trip delay (RTT) of sending a TCP packet in the network.
  • RTT: Round Trip Time
  • step A201 is described in detail below.
  • when the server provides services to the terminal, it establishes a TCP stream for each service provided, and sends the TCP packets carrying the corresponding service in each TCP stream.
  • the RTT required to send the TCP packet is calculated, and the calculated RTT is used as the first round-trip delay.
  • the RTT of the TCP packet is optionally calculated using the algorithm provided by RFC 6298 (the Jacobson/Karels algorithm).
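  • for reference, a compact sketch of the Jacobson/Karels smoothed-RTT estimator with the standard constants from RFC 6298 (alpha = 1/8, beta = 1/4); this is the conventional form of that algorithm, not code taken from the patent:

```c
#include <math.h>

/* Jacobson/Karels smoothed RTT estimator (constants per RFC 6298). */
struct rtt_est { double srtt, rttvar, rto; int initialized; };

void rtt_update(struct rtt_est *e, double sample_s) {
    if (!e->initialized) {
        e->srtt = sample_s;
        e->rttvar = sample_s / 2.0;
        e->initialized = 1;
    } else {
        double err = sample_s - e->srtt;
        e->srtt   += 0.125 * err;                      /* alpha = 1/8 */
        e->rttvar += 0.25  * (fabs(err) - e->rttvar);  /* beta  = 1/4 */
    }
    e->rto = e->srtt + 4.0 * e->rttvar;  /* RTO = SRTT + 4 * RTTVAR */
}
```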
  • Step A202: determine a second round-trip delay, where the second round-trip delay is the round-trip delay at which the congestion window determined according to the first algorithm and the congestion window determined according to the second algorithm are equal in size, where the first algorithm determines the growth stride of the congestion window according to the first round-trip delay, and the second algorithm determines the growth stride of the congestion window according to the first round-trip delay and a target throughput rate, the target throughput rate being the throughput rate expected by the service corresponding to the TCP packet.
  • the embodiment of the present invention does not limit the specific form of the service, which includes: audio services, video services, audio and video services, online antivirus services, instant communication services, and online application services.
  • in the process of the server providing the service to the terminal, if the service is to be provided normally, the server needs to transmit the TCP packets carrying the service at a certain throughput rate, and this throughput rate is defined as the target throughput rate.
  • the server establishes a TCP flow for the service, establishes a congestion window for the TCP flow, and sends the TCP packets carrying the service through the TCP flow under the control of the congestion window; the throughput that the server can provide is determined by the congestion window. Therefore, in order to achieve the throughput required by the service under the current network conditions, it is necessary to adjust the congestion window.
  • the server determines the target throughput rate required by the server to provide each service to the terminal separately; establishes a TCP flow for each service separately, and sets a congestion window for each TCP stream. For each service, the corresponding congestion window is separately adjusted to provide the throughput rate that the service expects to obtain.
  • the embodiment of the present invention provides a first algorithm and a second algorithm.
  • the first algorithm or the second algorithm is used to adjust the size of the congestion window, and the congestion window obtained by the adjustment is adopted.
  • step A202 is to determine a growth step of the congestion window according to the first algorithm or the second algorithm to implement adjustment of the size of the congestion window.
  • whether the first algorithm or the second algorithm is used to adjust the congestion window is decided according to the RTT reflecting the network condition, and step A202 determines the second round-trip delay as the decision threshold. If the RTT reflecting the network condition is less than or equal to the second round-trip delay, that is, the RTT is small and the current network condition is good, the second algorithm is used to determine the growth step of the congestion window; the growth step is determined according to the RTT reflecting the network condition and the target throughput rate, that is, the second algorithm considers the network condition and the target throughput rate at the same time. Usually, when the congestion window determined by the second algorithm controls the sending of the TCP packets, the server can provide the target throughput rate required by the service.
  • if the RTT reflecting the network condition is greater than the second round-trip delay, the first algorithm is used to determine the growth step of the congestion window; in the first algorithm the growth step is determined according to the RTT reflecting the network condition, that is, the first algorithm gives more weight to the network condition. Usually, when the congestion window determined by the first algorithm controls the sending of the TCP packets, the throughput provided by the server does not reach the target throughput rate required by the service.
  • Step A203 If the first round trip delay is greater than the second round trip delay, the congestion window determined by the first algorithm is used as the first congestion window.
  • Step A204 If the first round trip delay is less than or equal to the second round trip delay, the congestion window determined by the second algorithm is used as the first congestion window.
  • Step A205 sending the TCP data packet in the first congestion window.
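  • putting steps A201 to A205 together, the selection between the two algorithms can be sketched as below; growth_stride_alg1() and cwnd_alg2() stand for the two growth rules sketched earlier in this document, rtt2 is the second round-trip delay (the crossover point), and all names are illustrative:

```c
/* Growth rules sketched earlier in this document (illustrative names). */
double growth_stride_alg1(double rtt1, double rtt_light, double rto,
                          double fast_recovery_mss);
double cwnd_alg2(double cur_cwnd_mss, double target_throughput_bps,
                 double rtt1_s, double mss_bytes);

/* Steps A201-A205 in outline: pick the growth rule by comparing the
 * measured first round-trip delay rtt1 with the crossover delay rtt2,
 * then send TCP packets within the resulting first congestion window. */
double update_cwnd(double cur_cwnd_mss, double rtt1, double rtt2,
                   double rtt_light, double rto, double fast_recovery_mss,
                   double target_throughput_bps, double mss_bytes) {
    double first_cwnd;
    if (rtt1 > rtt2) {
        /* Network condition is poor: first algorithm, driven by rtt1 only. */
        first_cwnd = cur_cwnd_mss +
                     growth_stride_alg1(rtt1, rtt_light, rto, fast_recovery_mss);
    } else {
        /* Network condition is good: second algorithm, driven by rtt1 and
         * the target throughput rate. */
        first_cwnd = cwnd_alg2(cur_cwnd_mss, target_throughput_bps,
                               rtt1, mss_bytes);
    }
    return first_cwnd;  /* TCP packets are then sent within this window */
}
```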
  • Step A201 calculates an RTT that reflects the current network condition, and uses the calculated RTT as the first round trip delay.
  • in step A203, a growth step of the congestion window is determined by using the first algorithm, that is, according to the first round-trip delay; the first congestion window is obtained by adding the growth step determined by the first algorithm to the congestion window that was in use when the first round-trip delay was detected.
  • in step A204, a growth step of the congestion window is determined by using the second algorithm, that is, according to the first round-trip delay and the target throughput rate; the first congestion window is obtained by adding the growth step determined by the second algorithm to the congestion window that was in use when the first round-trip delay was detected.
  • the first congestion window is re-determined each time these steps are executed, and the congestion window corresponding to the service is updated once with the first congestion window, that is, the pre-update congestion window is replaced by the first congestion window; in the TCP flow of the service, the sending of TCP packets is then controlled by the first congestion window.
  • the embodiment of the present invention does not limit the time or condition that triggers determining the first congestion window according to the first round-trip delay and the target throughput rate and updating the current congestion window with it. For example, the first congestion window may be determined in real time according to the first round-trip delay and the target throughput rate, and the current congestion window of the TCP stream of the service updated with it; for example, the first congestion window may be determined according to the first round-trip delay and the target throughput rate and the current congestion window updated with the first congestion window; for another example, after a TCP packet is lost and the congestion window is reduced to the second congestion window, if the acknowledgement responses (ACKs) of the first round of TCP packets sent under the control of the second congestion window are received, the first congestion window is determined according to the first round-trip delay and the target throughput rate, and the current congestion window is updated with the first congestion window.
  • ACK: acknowledgement response
  • the embodiment of the present invention can adjust the congestion window to a first congestion window that satisfies the target throughput rate expected by the service, and control the sending of the TCP packets by the first congestion window, providing as much as possible the target throughput that the service expects to achieve.
  • the method of the embodiment of the present invention is used to adjust the congestion window, and the method provided by the embodiment of the present invention is described in detail as follows:
  • the target throughput rate required to send the TCP packet of the service in the network is obtained, and the target throughput rate is added in the protocol stack of the TCP.
  • the server needs to transmit the TCP packet carrying the service, and the throughput required for the service is defined as the target throughput rate.
  • a TCP flow is established for the service, and the server sends the TCP packet carrying the service in the TCP stream.
  • the server also establishes a congestion window for the TCP stream; under the control of the congestion window, the sending of the TCP packets carrying the service in the TCP stream is controlled. The congestion window determines the maximum number of TCP packets that can be sent, so the size of the congestion window can be adjusted to adjust the throughput provided by the server for the service.
  • the server 101 of FIG. 1A supports modification of its TCP protocol stack.
  • the server directly sends the TCP data packets carrying the service to the terminal, instead of having a first proxy device send the TCP packets carrying the service to the terminal on its behalf as shown in FIG. 1B; thus, when the server provides a certain service to the terminal, the server adjusts the congestion window according to the target throughput rate in its TCP protocol stack and the first round-trip delay reflecting the current network condition.
  • the target throughput rate is added to the TCP protocol stack of the server 101 of FIG. 1A as follows.
  • an optional implementation manner is to modify the socket interface and add a target throughput rate parameter. After the target throughput rate required by the service that the server provides to the terminal is determined, the target throughput rate parameter is assigned that value, the target throughput rate parameter and its assignment (that is, the target throughput rate expected by the service) are passed to the TCP protocol stack of the server through the socket interface, and the target throughput rate parameter and its assignment are added to the TCP protocol stack.
  • for example, a target throughput rate parameter named "target_throughput" is added to the option-setting function "setsockopt()". After the target throughput rate that the service expects to obtain is determined from the throughput required by the server to provide the service to the terminal, the determined target throughput rate is assigned to "target_throughput", "target_throughput" and its assignment are passed to the TCP protocol stack of the server by "setsockopt()", and the "target_throughput" parameter and its assignment are added to the TCP protocol stack.
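  • a minimal sketch of how an application might pass such a target throughput rate to a modified protocol stack; the option constant TCP_TARGET_THROUGHPUT and the kbit/s unit are hypothetical, since the option only exists if the TCP protocol stack has been extended as described:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Hypothetical option number for the "target_throughput" parameter; it
 * would have to be defined by the modified TCP protocol stack. */
#ifndef TCP_TARGET_THROUGHPUT
#define TCP_TARGET_THROUGHPUT 99
#endif

/* Pass the target throughput rate (here assumed to be in kbit/s) for one
 * TCP connection down to the protocol stack via setsockopt(). */
int set_target_throughput(int sock_fd, unsigned int kbps) {
    if (setsockopt(sock_fd, IPPROTO_TCP, TCP_TARGET_THROUGHPUT,
                   &kbps, sizeof(kbps)) < 0) {
        perror("setsockopt(TCP_TARGET_THROUGHPUT)");
        return -1;
    }
    return 0;
}
```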
  • step B301, step B302, step B303, and step B304 are sequentially performed, as shown in FIG. 3.
  • Step B301: in the process of sending the TCP data packets over the network, if a TCP data packet is lost, adjust the congestion window that was in use when the TCP data packet was lost to a second congestion window, and control the sending of the TCP packets over the network with the second congestion window.
  • the server sends the TCP packets carrying the service in the TCP stream that provides the service to the terminal; when the terminal receives a TCP packet in the correct order, the terminal returns an acknowledgement response corresponding to that TCP packet to the server.
  • the server deletes the TCP data packet from the cache when receiving the acknowledgement response of the TCP data packet, and adds other TCP data packets to be sent to the cache; on the basis of the cache, the embodiment of the present invention further sets a congestion window through which congestion control is performed.
  • if a TCP packet is lost, the congestion window of the TCP stream is decreased, that is, the congestion window is adjusted to the second congestion window; the second congestion window is smaller than the congestion window in use when the TCP packet was lost, and the algorithm according to which the second congestion window is determined is not limited here, for example, the existing Reno algorithm or the CUBIC algorithm may be used.
  • the scenario in which the server determines that a TCP packet in the congestion window has been lost is not limited. For example, after the server sends the TCP packet, the acknowledgement response of the terminal to the TCP packet is still not received after a preset time is exceeded; for example, the TCP packet is lost while being transmitted in the network after the server sends it, for instance when the TCP packet is transmitted in a wireless network; for example, after the server sends the TCP packet, the terminal does not promptly feed back the acknowledgement response of the TCP packet.
  • for another example, the server sequentially sends a plurality of TCP data packets to the terminal, the TCP data packets arrive at the terminal out of order, and the terminal has received packets that come later in the sequence (for example, the three following TCP packets) without having received an earlier one; each time the terminal receives one of the later packets, it sends the server an acknowledgement response (ACK) requesting the earlier TCP packet. When the server receives this acknowledgement response multiple times (for example, three times), the server determines that the earlier TCP packet has been lost.
  • when the server determines that a TCP packet has been lost, step B301 is entered: the congestion window in use when the TCP packet was lost is adjusted to the second congestion window, the sending of the TCP packets is controlled by the second congestion window, and the maximum number of TCP packets that the server can send over the network is determined by the second congestion window.
  • Step B302 If an acknowledgment response of sending the TCP packet in the second congestion window is received, determining a first round trip delay corresponding to the second congestion window.
  • step B301 adjusts the TCP congestion window when the TCP packet is lost to the second congestion window
  • after step B301, the server controls the sending of the TCP packets of the service in the TCP stream of the service by using the second congestion window; if the server completes the first round of sending the TCP packets in the second congestion window to the terminal and receives the acknowledgement responses of all the TCP packets sent in that first round, the RTT of the last TCP packet in the first-round second congestion window is calculated, and the calculated RTT is defined as the first round-trip delay.
  • optionally, when the server completes the first round of sending the TCP packets in the second congestion window to the terminal and receives the acknowledgement responses of all the TCP packets sent in that first round, the algorithm provided by RFC 6298 (the Jacobson/Karels algorithm) is used to calculate the RTT corresponding to the last received acknowledgement response (the last one among the acknowledgement responses of all TCP packets sent in the first round), and the calculated RTT is used as the first round-trip delay.
  • Step B303 determining a first congestion window according to the first algorithm or the second algorithm, based on the target throughput rate and the first round trip delay.
  • the first round-trip delay strongly reflects the current network condition (such as the length of the network path over which the TCP packets are forwarded): the better the network condition, the smaller the first round-trip delay. In the extreme case, when the network is seriously overloaded and in serious congestion, the first round-trip delay is equal to the network's retransmission timeout (RTO).
  • RTO: Retransmission Time Out
  • if step B302 receives the acknowledgement responses of the TCP packets sent in the first-round second congestion window, this indicates that the network may allow more TCP packets to be transmitted; in this case, the second congestion window may be appropriately increased, that is, increased to the first congestion window.
  • the first congestion window is determined by using the first algorithm or the second algorithm; in particular, the second algorithm introduces both the target throughput rate and the first round-trip delay as parameters. When the current network condition determined according to the first round-trip delay is good, controlling the sending of the TCP packets with the first congestion window determined according to the second algorithm enables the terminal to obtain the desired target throughput rate while the server transmits the TCP packets carrying the service; in addition, controlling the sending of the TCP packets with the first congestion window determined according to the first algorithm increases, as far as possible under the current network condition, the throughput rate at which the server provides the service, so that the terminal can get the maximum throughput available under the current network conditions.
  • Step B304 adjusting the second congestion window to the first congestion window, and controlling, by the network, the TCP data packet to be sent in the first congestion window.
  • in step B304, the server controls the sending of the TCP packets carrying the service to the terminal by using the first congestion window, that is, at any given moment the server sends to the terminal at most the number of TCP data packets (carrying the service) allowed by the first congestion window.
  • the target throughput rate required for the server to transmit the TCP packet carrying the service is determined, and the target throughput rate is written to the protocol stack of the TCP.
  • when the server sends the TCP packets to the terminal in the TCP stream, if a TCP packet is lost, the TCP congestion window is adjusted to the second congestion window; if the first round of TCP packets in the second congestion window is successfully sent to the terminal (that is, the acknowledgement responses of the terminal to the TCP packets in the first-round second congestion window are received), the first round-trip delay corresponding to the first-round second congestion window is determined;
  • in step B303, the network condition (the condition of the network when the first round of TCP packets in the second congestion window was sent) is determined according to the first round-trip delay, and the first congestion window that satisfies the target throughput rate as far as possible under that network condition is determined; thus, after the network condition is determined based on the first round-trip delay, step B304 can adjust the second congestion window directly to the first congestion window.
  • existing algorithms greatly reduce the TCP congestion window when TCP packet loss occurs: the Reno algorithm halves the congestion window when a TCP packet is lost, and the CUBIC algorithm reduces the congestion window to 717/1024 of its value (a reduction of nearly one third).
  • moreover, after the congestion window has been reduced, a server using an existing window adjustment algorithm has to send TCP packets tentatively round after round: if the TCP packets sent in a round are lost, the congestion window is greatly reduced again; if the acknowledgement responses fed back by the terminal are successfully received, the congestion window is increased once. However, after each round in which the acknowledgement responses from the terminal are successfully received, the existing algorithms do not consider the target throughput rate expected by the service; they only increase the congestion window step by step according to the algorithm, slowly approaching the maximum congestion window that can be provided for the service under the current network condition (the first congestion window);
  • in the embodiment of the present invention, by contrast, the target throughput rate that the service expects to obtain is determined in advance. Even if a TCP packet is lost while the server sends the TCP packets to the terminal and the TCP congestion window is adjusted to the second congestion window, once the first round of TCP data packets in the second congestion window is successfully sent to the terminal, the first round-trip delay corresponding to the first-round second congestion window is determined, the network condition is determined according to the first round-trip delay, and the first congestion window that satisfies the target throughput rate under that network condition is determined; the congestion window is then increased from the second congestion window to the first congestion window in a single step to satisfy the target throughput rate corresponding to the service.
  • if the network condition when the first round of TCP packets in the second congestion window was sent is good, sending the TCP packets in the first congestion window can satisfy the target throughput rate; even if the network condition is not good, the first congestion window determined by combining the first round-trip delay and the target throughput rate is still the congestion window that best satisfies the bit rate of the service under the prevailing network conditions.
  • the method is particularly applicable to a wireless network with a large network bandwidth: compared with a wired network, random packet loss occurs with a certain probability in a wireless network, and each time a random packet loss occurs the existing algorithms reduce the congestion window, after which the congestion window can only be increased gradually back towards the first congestion window; the delay before full service to the terminal is restored is therefore long, which is not conducive to the server providing services to the terminal.
  • in the embodiment of the present invention, in order to support the service, the congestion window in use when the packet loss occurs is adjusted to the first congestion window in only two steps: adjusting from the congestion window at the time of the random packet loss to the second congestion window, and adjusting from the second congestion window to the first congestion window; this effectively utilizes the network bandwidth and supports the service as promptly as possible.
  • FIG. 4 is an optional workflow of the TCP packet sending method based on FIG. 2; for ease of description, FIG. 4 only shows the portions related to the embodiment.
  • an optional refinement is made to how the congestion window is adjusted when packet loss occurs in the process of sending the TCP data packets over the network, and the method further includes step C401 and step C402.
  • Step C401: If a packet loss occurs during the sending of the TCP packets, adjust the congestion window of the TCP packet transmission to a second congestion window according to a third algorithm, where the second congestion window is determined according to the third round-trip delay measured when the TCP packet is lost, and the step by which the congestion window is reduced to the second congestion window is negatively correlated with the third round-trip delay.
  • optionally, the server sequentially sends multiple TCP data packets to the terminal, and the packets may arrive at the terminal out of order. If the terminal receives TCP packets that are ordered later (for example, the last three TCP packets) but has not received an earlier-ordered TCP packet, then for each later packet received the terminal sends the server an acknowledgment response (ACK) requesting the earlier TCP packet; when the server receives this acknowledgment response multiple times (for example, three times), the server determines that the earlier-ordered TCP packet has been lost.
  • optionally, if after sending a TCP data packet the server does not receive the terminal's acknowledgment response to that packet within the RTO (a preset time), the server determines that the TCP data packet has been lost.
  • the RTT at the time the TCP packet is lost is detected, and this detected RTT is used as the third round-trip delay; the third round-trip delay therefore reflects the network condition when the TCP packet is lost. It is worth noting that if the TCP packet is lost randomly, the detected third round-trip delay is equal to the RTT when the network is lightly loaded.
  • in step C401, the second congestion window is determined according to the third algorithm based on the third round-trip delay measured when the data packet is lost.
  • the second congestion window determined by the third algorithm is negatively correlated with the third round-trip delay; specifically, the third round-trip delay is used as the input of the third algorithm, and as the third round-trip delay increases, the second congestion window determined according to the third algorithm becomes smaller.
  • the specific implementation form or steps of the third algorithm are not limited in this embodiment; for example, the third algorithm may be designed according to the service requirement on how much the congestion window should be reduced when a TCP packet is lost, or an existing algorithm (for example, the Reno algorithm or the CUBIC algorithm) may be used as the third algorithm.
  • optionally, in step C401, adjusting the congestion window of the TCP packet transmission to the second congestion window according to the third algorithm specifically includes: if the third round-trip delay is equal to the lower limit of a delay interval, using the congestion window at the time the TCP packet was lost as the second congestion window; if the third round-trip delay is equal to the upper limit of the delay interval, using a preset congestion window as the second congestion window, where the upper limit of the delay interval is the timeout retransmission time (RTO) of the TCP in the network, and the lower limit of the delay interval is the round-trip delay when the network is lightly loaded. The preset congestion window refers to a congestion window preset by the user.
  • the TCP protocol defines an RTO for a TCP stream; optionally, the RTO can be modified manually, or set according to experimental data of the current network. This embodiment takes the RTO as the upper bound of the delay interval.
  • if, after the server sends a TCP packet carrying the service to the terminal, the terminal's acknowledgment response to the TCP packet is not received within the RTO, the packet is likewise determined to be lost.
  • the round-trip delay when the network is lightly loaded refers to: when there is no network congestion (that is, when the network is lightly loaded), the RTT of a TCP packet, calculated from the time the server sends the TCP packet carrying the service to the terminal to the time the server receives the terminal's acknowledgment response to that TCP packet.
  • optionally, the server sends TCP packets carrying the service to the terminal when the network is lightly loaded, detects the RTT of each TCP packet, and selects the minimum RTT from the detected RTTs; this embodiment takes that minimum detected RTT as the lower bound of the delay interval.
  • when the server uses the TCP congestion window to control transmission of the TCP packets carrying the service to the terminal and a TCP packet is lost, if the third round-trip delay is equal to the lower bound of the delay interval, this indicates that the network condition is good and the loss of the TCP packet is due only to an accidental factor (such as random packet loss); in this case the congestion window is not reduced, that is, the congestion window at the time the TCP packet was lost is taken as the second congestion window, and the lost TCP packets are resent.
  • in this way, this embodiment can more effectively utilize the network bandwidth and support the service as much as possible.
  • when the server uses the TCP congestion window to control transmission of the TCP packets carrying the service to the terminal and a TCP packet is lost, if the detected third round-trip delay is equal to the upper bound of the delay interval, that is, the third round-trip delay is equal to the RTO, this indicates that the network is heavily congested and the congestion window needs to be greatly reduced: the congestion window at the time the TCP packet was lost is reduced to the preset congestion window. It should be noted that once the third round-trip delay reaches the RTO the TCP packet is determined to be lost and the server does not continue waiting to measure a larger delay, so the third round-trip delay can at most equal the RTO.
  • if the third round-trip delay lies between the lower bound and the upper bound of the delay interval, the second congestion window determined according to the third algorithm is greater than the preset congestion window and smaller than the congestion window at the time the TCP packet was lost, and as the third round-trip delay increases, the second congestion window determined according to the third algorithm becomes smaller. A minimal sketch of such a third algorithm is given below.
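  • It should be noted that the embodiments constrain the third algorithm only at the two ends of the delay interval and by the negative correlation in between, without prescribing a formula. The following is only a sketch under those constraints; the linear interpolation, the unit of segments, and all parameter names are illustrative assumptions rather than part of the described method.

```python
def second_congestion_window(cwnd_at_loss, preset_cwnd, third_rtt, rtt_min, rto):
    """Shrink the congestion window after a loss based on the third round-trip
    delay: unchanged at the light-load RTT, the preset window at the RTO,
    and negatively correlated with the delay in between (windows in MSS)."""
    if third_rtt <= rtt_min:    # lower limit of the delay interval: treat as random loss
        return cwnd_at_loss
    if third_rtt >= rto:        # upper limit of the delay interval: heavy congestion
        return preset_cwnd
    # Assumed linear interpolation: the larger the third RTT, the larger the
    # reduction stride and the smaller the resulting second congestion window.
    frac = (third_rtt - rtt_min) / (rto - rtt_min)
    return round(cwnd_at_loss - frac * (cwnd_at_loss - preset_cwnd))
```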
  • Step C402: Send the TCP data packets in the second congestion window.
  • step C401 determines the second congestion window, and step C402 replaces the congestion window at the time the TCP packet was lost with the second congestion window, thereby updating the congestion window; the transmission of the TCP packets of the service is then controlled by the second congestion window.
  • optionally, the foregoing embodiments are further refined as follows: the growth stride of the congestion window determined by the second algorithm is positively correlated with the target throughput rate and negatively correlated with the first round-trip delay, and the growth stride of the congestion window determined by the first algorithm is negatively correlated with the first round-trip delay.
  • if the first round-trip delay is less than or equal to the second round-trip delay, no network congestion has occurred and the available network bandwidth can meet the target throughput rate, so the congestion window is increased according to the second algorithm. Specifically, when the first congestion window is determined according to the second algorithm, the larger the target throughput rate, the larger the determined first congestion window; and the larger the first round-trip delay, the smaller the determined first congestion window. In this case the target throughput rate carries a larger weight than the first round-trip delay in determining the first congestion window.
  • in this way the target throughput rate can be achieved, so that the bit rate of the service parsed by the terminal from the TCP packets satisfies the bit rate required by the service, and the server can provide the service to the terminal normally.
  • if the first round-trip delay is greater than the second round-trip delay, the first congestion window is determined according to the first algorithm based on the first round-trip delay, and the first congestion window determined according to the first algorithm becomes smaller as the first round-trip delay increases.
  • in this case the server sends the TCP packets to the terminal according to the first congestion window determined by the first algorithm. The throughput rate provided by this first congestion window cannot reach the target throughput rate and can only reduce the gap to it, so the bit rate of the service parsed by the terminal from the TCP packets may not meet the bit rate required by the service; nevertheless, the service is supported to the greatest extent possible under the current network state, narrowing the gap to the target throughput rate and trying to keep the server providing the service to the terminal.
  • optionally, the foregoing embodiments are further refined with respect to using the congestion window determined by the first algorithm as the first congestion window, which specifically includes:
  • if the first round-trip delay is equal to the round-trip delay of the lightly loaded network, the growth stride of the congestion window is taken as a fast recovery value to obtain the first congestion window, where the fast recovery value is of the same order of magnitude as the slow start stride;
  • if the first round-trip delay is equal to the timeout retransmission time (RTO) of the TCP in the network, the growth stride of the congestion window is taken as one maximum segment size (MSS) to obtain the first congestion window;
  • if the first round-trip delay varies between the round-trip delay of the lightly loaded network and the RTO, the growth stride of the congestion window is taken as a value between 1 MSS and the fast recovery value that is negatively correlated with the first round-trip delay, to obtain the first congestion window.
  • optionally, a slow start threshold (ssthresh) is set in advance for the congestion window. If the first round-trip delay is equal to the round-trip delay of the lightly loaded network, indicating that the network condition is good, and the congestion window at the time the first round-trip delay is detected is smaller than the slow start threshold, then when the first algorithm determines the growth stride of the congestion window, the fast recovery value (the growth stride determined according to the first algorithm) is of the same order of magnitude as the growth stride determined by a slow start algorithm, and the congestion window at the time the first round-trip delay is detected is increased by the fast recovery value to obtain the first congestion window. It should be noted that slow start and the corresponding slow start algorithm are not limited in this embodiment and may be implemented using an existing slow start algorithm.
  • if the first round-trip delay is equal to the RTO, the first algorithm takes the growth stride of the congestion window as 1 MSS; in this case, even if the TCP packets of a whole congestion window are successfully transmitted, only one MSS is added to the current congestion window to obtain the first congestion window, which then updates and replaces the current congestion window.
  • if the first round-trip delay lies between the round-trip delay of the lightly loaded network and the RTO, the first algorithm takes the growth stride of the congestion window as a value between 1 MSS and the fast recovery value, where the growth stride determined according to the first algorithm is negatively correlated with the first round-trip delay; in this case, after the TCP packets of a congestion window are successfully transmitted, the growth stride is added to the current congestion window to obtain the first congestion window. A sketch of such a first algorithm is given below.
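  • The embodiments do not prescribe a concrete formula for the first algorithm either; the sketch below only interpolates linearly between the two endpoints described above, and the function names and the linear form are illustrative assumptions.

```python
MSS = 1  # the congestion window is counted in maximum segment sizes here

def first_algorithm_growth_stride(first_rtt, rtt_light_load, rto, fast_recovery_value):
    """Growth stride of the first algorithm: the fast recovery value at the
    light-load RTT, 1 MSS at the RTO, negatively correlated in between."""
    if first_rtt <= rtt_light_load:
        return fast_recovery_value
    if first_rtt >= rto:
        return MSS
    frac = (first_rtt - rtt_light_load) / (rto - rtt_light_load)  # 0 at light load, 1 at RTO
    return MSS + (1 - frac) * (fast_recovery_value - MSS)

def first_congestion_window(current_cwnd, growth_stride):
    # After the packets of the current congestion window are acknowledged,
    # the window grows by the stride to become the first congestion window.
    return current_cwnd + growth_stride
```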
  • optionally, the foregoing embodiments are further refined with respect to using the congestion window determined by the second algorithm as the first congestion window, which specifically includes:
  • when the network condition reflected by the first round-trip delay is good and the network bandwidth is greater than or equal to the target throughput rate, the congestion window may be increased to raise the throughput rate provided for the service. In order to increase the current congestion window in one step to a target window capable of providing the target throughput rate under the current network condition, a target window is calculated according to the target throughput rate and the first round-trip delay, and the growth stride of the congestion window is taken as the difference between the target window and the current congestion window of the TCP in the network.
  • the congestion window at the time the first round-trip delay is detected is increased by this growth stride to obtain the first congestion window, which is therefore directly equal to the target window; under the current network condition the target window then controls the sending of the TCP packets, directly meeting the target throughput rate of the service and providing the service to the greatest extent possible. A sketch of such a second algorithm is given below.
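  • The embodiments state that the target window is calculated from the target throughput rate and the first round-trip delay but do not give the calculation. The sketch below uses a bandwidth-delay style sizing purely as a placeholder; this formula, the 1460-byte segment size, and the function names are assumptions, not the claimed calculation.

```python
MSS_BYTES = 1460  # assumed maximum segment size; the embodiments do not fix it

def target_window(target_throughput_bps, first_rtt_s):
    """One plausible target window (in segments): the amount of data that must be
    in flight per first round-trip delay to carry the target throughput rate."""
    return round(target_throughput_bps / 8 * first_rtt_s / MSS_BYTES)

def first_congestion_window_by_second_algorithm(current_cwnd, target_throughput_bps, first_rtt_s):
    growth_stride = target_window(target_throughput_bps, first_rtt_s) - current_cwnd
    return current_cwnd + growth_stride   # the window jumps to the target window in one step
```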
  • optionally, the foregoing embodiments are further refined as follows: the target throughput rate is determined according to the bit rate of the service parsed from the packets of the TCP data packets.
  • the server may parse the bit rate required by each service to be provided to the terminal, and then calculate, according to the parsed bit rate, the target throughput rate needed for the server to provide each service to the terminal; the specific manner of calculating the target throughput rate is not limited in this embodiment of the present invention.
  • each service has a corresponding standard bit rate, and the target throughput rate required to transmit the TCP packets carrying the service is determined according to this bit rate, where the value of the target throughput rate corresponding to the service is greater than the value of the bit rate of the service. In this way the bit rate parsed by the terminal from the TCP packets satisfies the bit rate required by the service, that is, the bit rate of the service parsed by the terminal is greater than or equal to the standard bit rate corresponding to the service.
  • optionally, the algorithm for determining the target throughput rate according to the bit rate of the service parsed from the packets of the TCP data packets is: the target throughput rate is equal to the bit rate of the service multiplied by an expansion factor, where the expansion factor is greater than 1.
  • FIG. 5 is a workflow for updating the target throughput rate, but for convenience of description, FIG. 5 only shows portions related to the embodiment.
  • optionally, the method further includes step D501, step D502, and step D503.
  • Step D501: Detect the actual throughput rate of sending the TCP data packets in the network.
  • Step D502: If the ratio of the actual throughput rate to the target throughput rate is greater than a first threshold, and the difference between the fourth round-trip delay of sending the TCP packets in the network and the round-trip delay detected when the network is lightly loaded is less than a second threshold, increase the target throughput rate.
  • Step D503: If the ratio of the actual throughput rate to the target throughput rate is less than a third threshold, and the difference between the fourth round-trip delay and the round-trip delay detected when the network is lightly loaded is greater than a fourth threshold, decrease the target throughput rate.
  • the third threshold is smaller than the first threshold, and the second threshold is smaller than the fourth threshold.
  • in this way, the congestion window of the TCP stream corresponding to the service can be adjusted according to the updated target throughput rate, so that sending the TCP packets of the service in the adjusted congestion window can meet, as far as possible, the target throughput rate required by the service.
  • the foregoing embodiment only introduces the bit rate of the service when determining the target throughput rate and does not consider the network condition; in this embodiment, when determining the target throughput rate, not only the bit rate of the service but also the current network condition is considered.
  • during the process of sending the TCP data packets over the network, the RTT of the currently sent TCP data packets is detected and the currently detected RTT is used as the fourth round-trip delay; the throughput rate at which the TCP data packets are currently sent in the network is also detected, and the currently detected throughput rate is the actual throughput rate.
  • the factors determining the actual throughput rate include the network status, the throughput rate of the transmitting end, and the throughput rate of the receiving end. Taking the system shown in FIG. 1A as an example, the factors determining the actual throughput rate include the network status of the network 103, the throughput rate of the server 101, and the throughput rate of the terminal 102. Taking the system shown in FIG. 1B as an example, the factors determining the actual throughput rate include the network status of the network 203, the throughput rate of the first proxy device 204, and the throughput rate of the terminal 202. Taking the system shown in FIG. 1C as an example, the factors determining the actual throughput rate include the network status of the network 303, the throughput rate of the first proxy device 304, and the throughput rate of the second proxy device 305.
  • the frequency and timing of detecting the fourth round-trip delay and the actual throughput rate are not limited in this embodiment; for example, the fourth round-trip delay and the actual throughput rate may be detected simultaneously at a fixed time interval.
  • if the conditions are met, step D502 is performed to increase the target throughput rate and update the target throughput rate in the TCP protocol stack. Optionally, the first threshold is a value close to 1, for example 90%; optionally, the second threshold is a value close to 0; optionally, provided that the target throughput rate after the increase in step D502 is smaller than the network bandwidth, the step size by which the target throughput rate is increased in step D502 is not limited.
  • if the conditions are met, step D503 is performed to decrease the target throughput rate and update the target throughput rate in the TCP protocol stack. Optionally, the third threshold is less than or equal to 50%, for example 20%; optionally, the fourth threshold is a relatively large delay value; optionally, provided that the target throughput rate after the decrease in step D503 is still smaller than the network bandwidth, the step size by which the target throughput rate is decreased in step D503 is not limited.
  • the current network condition is also considered, and the target throughput rate in the protocol stack is adjusted in real time according to the current network condition.
  • this avoids an excessively large target throughput rate leading to a congestion window that increases the probability of TCP packet loss, and also avoids an excessively small target throughput rate preventing the congestion window from growing, thereby effectively utilizing the physical bandwidth of the network. A sketch of this update logic is given below.
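  • The embodiments fix only the structure of this update (one threshold test on the throughput ratio and one on the RTT gap, in each direction) and leave the thresholds and step sizes open; in the sketch below every numeric value and parameter name is an illustrative assumption.

```python
def update_target_throughput(target, actual, fourth_rtt, light_load_rtt,
                             first_thr=0.9, second_thr=0.005,
                             third_thr=0.2, fourth_thr=0.1, step=0.1):
    """Steps D502/D503: raise the target throughput rate when the network keeps
    up with it, lower it when the network clearly cannot (RTTs in seconds)."""
    ratio = actual / target
    rtt_gap = fourth_rtt - light_load_rtt
    if ratio > first_thr and rtt_gap < second_thr:
        target *= 1 + step      # step D502: increase the target throughput rate
    elif ratio < third_thr and rtt_gap > fourth_thr:
        target *= 1 - step      # step D503: decrease the target throughput rate
    return target
```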
  • the target throughput rate is delivered to the TCP protocol stack through the target throughput rate parameter.
  • for example, a target throughput rate parameter is added to the TCP protocol stack of the server 101 in FIG. 1A, and the target throughput rate parameter is assigned the value of the target throughput rate; alternatively, for the system in FIG. 1B, the target throughput rate parameter is added to the TCP protocol stack of the first proxy device 204 and is assigned the value of the target throughput rate; alternatively, for the system in FIG. 1C, the target throughput rate parameter is added to the TCP protocol stack of the first proxy device 304 and is assigned the value of the target throughput rate.
  • this embodiment further refines the foregoing embodiments in the case where the method is applied to the server 101 in FIG. 1A; the method includes:
  • the server parses the bit rate of the service from the message of the TCP packet, and multiplies the bit rate by an expansion factor to obtain the target throughput rate.
  • optionally, for a TCP data packet that is to be sent to the terminal and carries the service, the server determines a field in the packet contained in the TCP data packet and records in that field the bit rate required by the service. If the bit rate of the service parsed by the terminal from the TCP packets can satisfy the bit rate required by the service, this means the server has successfully provided the service to the terminal normally. The value of the target throughput rate calculated in this way is greater than the value of the bit rate required by the service.
  • optionally, if the server 201 in FIG. 1B does not support modification of its TCP protocol stack, the first proxy device 204 in FIG. 1B is used to proxy the server 201; the first proxy device 204 supports modification of its TCP protocol stack (including support for adding the target throughput rate to its TCP protocol stack). That is, the first proxy device 204 is added to the network, and it transmits, on behalf of the server 201, the TCP packets carrying the service to the terminal 202. In this case, the embodiments described above using the application of the method to the server 101 in FIG. 1A as an example are applied not to the server 201 in FIG. 1B but to the first proxy device 204 in FIG. 1B.
  • when the TCP packet sending method provided by the foregoing embodiments is applied to the first proxy device, the first proxy device sends the TCP packets to the terminal on behalf of the server; in this case the target throughput rate is determined by the terminal, and the method for determining it includes: parsing the bit rate of the service from the packets of the TCP data packets and multiplying the bit rate by the expansion factor to obtain the target throughput rate; the first proxy device then acquires the target throughput rate from the terminal.
  • specifically, the server has a TCP protocol stack that does not support modification, the first proxy device has a TCP protocol stack that supports modification, and the terminal has a TCP protocol stack.
  • the server may generate a TCP packet carrying the service according to the protocol stack of the TCP, and perform TCP packet interaction between the server and the first proxy device based on the TCP/IP protocol.
  • for a TCP packet carrying the service received from the server, the first proxy device modifies the TCP packet based on its TCP protocol stack (the service content contained in the packet inside the TCP data packet need not be modified) and sends the modified TCP packet to the terminal; the terminal parses the TCP packet based on its TCP protocol stack, obtains the packet carrying the service, and parses the data related to the service from the packet.
  • the packet also records the bit rate required by the service; after the terminal parses the packet out of the TCP data packet, the terminal can parse the bit rate required by the service from the packet and multiply the bit rate by the expansion factor to obtain the target throughput rate;
  • the terminal modifies the TCP socket interface by adding a target throughput rate parameter, assigns the target throughput rate parameter the value of the target throughput rate, and passes the target throughput rate parameter and its value through the socket interface to the TCP protocol stack of the terminal; for example, a target throughput parameter named "target_throughput" is added to the set function "setsockopt()", the target throughput rate is calculated from the bit rate and the expansion factor, "target_throughput" is assigned the calculated target throughput rate, and "target_throughput" and its value are passed to the TCP protocol stack of the terminal through "setsockopt()";
  • the terminal then generates a packet, extends the TCP options of the packet, and adds the target throughput rate parameter and its value (the parameter and value passed through the socket to the TCP protocol stack) in the TCP options;
  • the terminal sends a TCP data packet to the first proxy device (the TCP data packet includes the packet whose TCP options carry the target throughput rate parameter and its value); after parsing the TCP data packet, the first proxy device obtains the target throughput rate parameter and its value from the TCP options in the packet; the first proxy device may then add the target throughput rate parameter and its value to its TCP protocol stack.
  • optionally, after the target throughput rate parameter and its value are received through the socket, this embodiment may first add them to the TCP protocol stack of the terminal; alternatively, this embodiment may not add the target throughput rate parameter and its value to the TCP protocol stack of the terminal at all, but directly generate the packet (with the target throughput rate parameter and its value added to its TCP options) and send the TCP data packet carrying the packet to the first proxy device, so that the first proxy device adds the target throughput rate parameter and its value to its TCP protocol stack. A sketch of the terminal-side socket step is given below.
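  • As a concrete illustration of the socket step only, the sketch below shows how a terminal application might hand the target throughput rate to a modified TCP protocol stack. The option number TCP_TARGET_THROUGHPUT, the 1.2 expansion factor, and the very existence of such an option are assumptions: standard TCP stacks define no "target_throughput" option, so on an unmodified system this setsockopt call would simply fail.

```python
import socket
import struct

# Hypothetical option number for the "target_throughput" parameter described in
# the text; it would only exist in a modified TCP protocol stack.
TCP_TARGET_THROUGHPUT = 0x100

def pass_target_throughput(sock, bit_rate_bps, expansion_factor=1.2):
    """Hand the target throughput rate (bit rate x expansion factor, factor > 1)
    to the TCP protocol stack through the socket interface."""
    target_throughput = int(bit_rate_bps * expansion_factor)
    sock.setsockopt(socket.IPPROTO_TCP, TCP_TARGET_THROUGHPUT,
                    struct.pack("!I", target_throughput))
    return target_throughput
```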
  • the embodiments described above using the application of the method to the server 101 in FIG. 1A as an example may be further adapted as follows. If the server does not support modification of its TCP protocol stack (including support for adding the target throughput rate to its TCP protocol stack), then the first proxy device 304 in FIG. 1C is used to proxy the server 301, and the first proxy device 304 supports modification of its TCP protocol stack (including support for adding the target throughput rate to its TCP protocol stack); in addition, a second proxy device 305 can also be added to the network, as shown in FIG. 1C.
  • the second proxy device 305 proxies the terminal 302 and exchanges the TCP packets with the first proxy device 304.
  • one reason for adding the second proxy device 305 is that the second proxy device 305 cooperates with the first proxy device 304 at the TCP protocol stack level, including sending the packet that carries the target throughput rate parameter and its value to the first proxy device 304 as a TCP data packet; another reason for adding the second proxy device 305 is that multiple terminals 302 can be proxied by the same second proxy device 305, with the second proxy device 305 exchanging TCP packets with the first proxy device 304 on behalf of each terminal 302.
  • in this case, the embodiments described above using the application of the method to the server 101 in FIG. 1A as an example are no longer applied to the server in FIG. 1C but to the first proxy device 304 in FIG. 1C. When those embodiments are applied to the first proxy device 304 in FIG. 1C, the target throughput rate is calculated by the terminal 302; the details are as follows:
  • when the TCP packet sending method provided by the foregoing embodiments is applied to the first proxy device in this system, the first proxy device sends the TCP packets, on behalf of the server, to the second proxy device, which forwards the TCP packets to the terminal. The terminal determines the target throughput rate and sends it to the second proxy device; the method for determining the target throughput rate includes parsing the bit rate of the service from the packets of the TCP data packets and multiplying the bit rate by the expansion factor to obtain the target throughput rate. The first proxy device then acquires the target throughput rate from the second proxy device.
  • specifically, the server has a TCP protocol stack that does not support modification, the first proxy device has a TCP protocol stack that supports modification, the second proxy device has a TCP protocol stack, and the terminal has a TCP protocol stack.
  • the server may generate a TCP packet carrying the service according to the protocol stack of the TCP, and perform TCP packet interaction between the server and the first proxy device based on the TCP/IP protocol.
  • for a TCP packet carrying the service received from the server, the first proxy device modifies the TCP packet based on its TCP protocol stack (the service content contained in the packet inside the TCP data packet need not be modified) and sends the modified TCP packet to the second proxy device; the second proxy device modifies the TCP packet again based on its own TCP protocol stack and sends the modified TCP packet to the terminal; the terminal parses the TCP packet, obtains the packet carrying the service, and parses the data related to the service from the packet.
  • the packet also records the bit rate required by the service; after the terminal parses the packet out of the TCP data packet, the terminal can parse the bit rate required by the service from the packet and multiply the bit rate by the expansion factor to obtain the target throughput rate;
  • the terminal modifies the TCP socket interface by adding a target throughput rate parameter, assigns the target throughput rate parameter the target throughput rate calculated according to the bit rate required by the service, and passes the target throughput rate parameter and its value to the TCP protocol stack of the terminal; for example, a target throughput parameter named "target_throughput" is added to the set function "setsockopt()"; after calculating the target throughput rate from the bit rate required by the service, the terminal assigns "target_throughput" the calculated target throughput rate and passes "target_throughput" and its value to the TCP protocol stack of the terminal through "setsockopt()";
  • the terminal generates a first packet, extends the TCP options of the first packet, and adds the target throughput rate parameter and its value (the parameter and value passed through the socket to the TCP protocol stack) in the TCP options; the terminal sends a TCP data packet to the second proxy device (the TCP data packet includes the first packet whose TCP options carry the target throughput rate parameter and its value);
  • after parsing the TCP data packet, the second proxy device obtains the target throughput rate parameter and its value from the TCP options of the first packet; the second proxy device generates a second packet, extends the TCP options of the second packet, and adds the target throughput rate parameter and its value (the parameter and value from the first packet) in the TCP options; the second proxy device sends a TCP data packet to the first proxy device (the TCP data packet includes the second packet whose TCP options carry the target throughput rate parameter and its value);
  • the first proxy device parses the TCP data packet received from the second proxy device, obtains the second packet, obtains the target throughput rate parameter and its value from the second packet, and adds the target throughput rate parameter and its value to its TCP protocol stack;
  • optionally, the target throughput rate parameter and its value may also be added to the TCP protocol stack of the second proxy device; alternatively, this embodiment may not modify the TCP protocol stack of the second proxy device (that is, not add the target throughput rate parameter and its value to the TCP protocol stack of the second proxy device), but directly generate the second packet (with the target throughput rate parameter and its value added to its TCP options) and send the TCP data packet carrying the second packet to the first proxy device, so that the first proxy device adds the target throughput rate parameter and its value to its TCP protocol stack.
  • another benefit of adding the second proxy device is that the routing path between the first proxy device and the second proxy device is fixed, so when the network is lightly loaded the RTT of TCP packets from the first proxy device to the second proxy device is essentially unchanged. Therefore, when the second proxy device proxies multiple terminals, the RTT of an existing TCP flow between the second proxy device and the first proxy device, measured when the network is lightly loaded, determines the lower bound of the delay interval corresponding to that existing TCP flow (the light-load RTT of the existing TCP flow); if a TCP flow of some service is newly added between the second proxy device and the first proxy device, there is no need to again measure RTTs over several congestion windows and filter out the minimum RTT as the lower bound of the delay interval corresponding to the newly added TCP flow; instead, the lower bound of the delay interval corresponding to the existing TCP flow is directly used as the lower bound of the delay interval corresponding to the newly added TCP flow, that is, the light-load RTT of the existing TCP flow is used as the light-load RTT of the newly added TCP flow of the service. A minimal sketch of this reuse is given below.
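  • The sketch below only illustrates sharing the light-load RTT between an existing flow and a newly added flow; the idea of caching by the peer proxy address and the helper names are assumptions.

```python
# Cache of light-load RTTs, keyed by the address of the peer proxy device.
light_load_rtt = {}

def delay_interval_lower_bound(peer, measured_rtts=None):
    """Reuse the light-load RTT of an existing flow to the same peer for a newly
    added flow; only measure a minimum RTT when nothing is cached yet."""
    if peer not in light_load_rtt:
        light_load_rtt[peer] = min(measured_rtts)  # RTTs measured over some congestion windows
    return light_load_rtt[peer]
```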
  • the target throughput rate is calculated based on the bit rate of the service (the bit rate is multiplied by the expansion factor to obtain the target throughput rate).
  • optionally, the bit rate of the service is refined as follows: if the service is an audio service, the bit rate of the service is the audio bit rate of the audio service; if the service is a video service, the bit rate of the service is the video bit rate of the video service; if the service is an audio and video service, the bit rate of the service is the audio and video bit rate of the audio and video service.
  • when the server provides the service to the terminal, the packet carrying the service is generated; at the same time, a field is determined in the packet, and the bit rate required by the service is recorded in that field.
  • if the service is an audio service, the audio bit rate required by the audio service is recorded in the field of the packet; if the service is a video service, the video bit rate required by the video service is recorded in the field of the packet; if the service is an audio and video service, the audio and video bit rate required by the audio and video service is recorded in the field of the packet.
  • FIG. 6 is a schematic diagram showing the logical structure of a transmission control protocol TCP packet sending apparatus 600 according to an embodiment of the present invention. As shown in FIG. 6, the apparatus 600 includes at least a delay determining unit 601, a window adjusting unit 602, and a data packet sending unit 603.
  • the delay determining unit 601 is configured to acquire a first round-trip delay of sending the TCP packets in the network and determine a second round-trip delay, where the second round-trip delay is the round-trip delay at which the congestion window determined according to the first algorithm and the congestion window determined according to the second algorithm have equal size, the first algorithm determines the growth stride of the congestion window according to the first round-trip delay, the second algorithm determines the growth stride of the congestion window according to the first round-trip delay and a target throughput rate, and the target throughput rate is the throughput rate that the service corresponding to the TCP packets expects to obtain;
  • the window adjustment unit 602 is configured to: if the first round trip delay is greater than the second round trip delay, the congestion window determined by the first algorithm is used as the first congestion window, if the first round trip delay is less than or equal to The second round trip delay, the congestion window determined by the second algorithm is used as the first congestion window;
  • the data packet sending unit 603 is configured to send the TCP data packet in the first congestion window.
  • optionally, the growth stride of the congestion window determined by the second algorithm is positively correlated with the target throughput rate and negatively correlated with the first round-trip delay, and the growth stride of the congestion window determined by the first algorithm is negatively correlated with the first round-trip delay.
  • the target throughput rate is determined according to a bit rate of the service parsed from a message of the TCP data packet.
  • the algorithm for determining the target throughput rate according to the bit rate of the service parsed from the packet of the TCP packet is: the target throughput rate is equal to a bit rate of the service multiplied by an expansion factor Wherein the expansion factor is greater than one.
  • the window adjusting unit 602 is further configured to: if packet loss occurs in the process of sending the TCP packets, adjust the congestion window of the TCP packet transmission to a second congestion window according to a third algorithm, where the second congestion window is determined according to a third round-trip delay measured when the TCP packet is lost, and the stride by which the congestion window of the TCP packet transmission is reduced to the second congestion window is negatively correlated with the third round-trip delay;
  • the data packet sending unit 603 is further configured to send the TCP data packet in the second congestion window.
  • the window adjustment unit 602 is further configured to adjust the congestion window of the TCP packet transmission to a second congestion window according to a third algorithm, specifically:
  • the window adjusting unit 602 is configured to: if the third round-trip delay is equal to the lower limit of the delay interval, use the congestion window at the time packet loss occurred as the second congestion window; and if the third round-trip delay is equal to the upper limit of the delay interval, use the preset congestion window as the second congestion window, where the upper limit of the delay interval is the timeout retransmission time (RTO) of the TCP in the network, and the lower limit of the delay interval is the round-trip delay when the network is lightly loaded.
  • the window adjustment unit 602 is configured to use the congestion window determined by the first algorithm as the first congestion window, and specifically includes:
  • the window adjusting unit 602 is configured to: if the first round-trip delay is equal to the round-trip delay of the lightly loaded network, obtain the first congestion window by taking the growth stride of the congestion window as a fast recovery value, where the fast recovery value is of the same order of magnitude as slow start;
  • the window adjusting unit 602 is configured to: if the first round-trip delay is equal to the timeout retransmission time RTO of the TCP in the network, take the growth stride of the congestion window as one maximum segment size MSS to obtain the first congestion window;
  • the window adjusting unit 602 is configured to: if the first round-trip delay varies in the interval between the round-trip delay of the lightly loaded network and the timeout retransmission time RTO of the TCP in the network, take the growth stride of the congestion window as a value between 1 MSS and the fast recovery value that is negatively correlated with the first round-trip delay, to obtain the first congestion window.
  • the window adjustment unit 602 is configured to use the congestion window determined by the second algorithm as the first congestion window, specifically:
  • the window adjusting unit 602 is configured to calculate a target window according to the target throughput rate and the first round-trip delay, and take the growth stride of the congestion window as the difference between the target window and the current congestion window of the TCP in the network to determine the first congestion window.
  • the device further includes a throughput rate detecting unit 604 and a target throughput rate adjusting unit 605.
  • the throughput detection unit 604 is configured to detect an actual throughput rate of the TCP packet sent by the network.
  • the target throughput rate adjusting unit 605 is configured to: if the ratio of the actual throughput rate to the target throughput rate is greater than a first threshold and the difference between the fourth round-trip delay of sending the TCP packets in the network and the round-trip delay detected when the network is lightly loaded is less than a second threshold, increase the target throughput rate; and if the ratio of the actual throughput rate to the target throughput rate is less than a third threshold and the difference between the fourth round-trip delay and the round-trip delay detected when the network is lightly loaded is greater than a fourth threshold, decrease the target throughput rate.
  • the target throughput rate is delivered to the TCP protocol stack through the target throughput parameter.
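  • The following structural sketch shows one way the three mandatory units of the apparatus 600 could map onto code; the method bodies are placeholders for the first and second algorithms described above, and the connection object with its helper methods is assumed for illustration only.

```python
class TcpPacketSendingApparatus:
    """Sketch of apparatus 600: delay determining, window adjusting and data
    packet sending units; algorithm bodies are placeholders."""

    def __init__(self, target_throughput):
        self.target_throughput = target_throughput  # delivered via the target throughput rate parameter
        self.cwnd = 1

    # delay determining unit 601
    def determine_delays(self, connection):
        first_rtt = connection.measure_rtt()        # assumed helper
        second_rtt = connection.crossover_rtt()     # RTT at which both algorithms give equal windows
        return first_rtt, second_rtt

    # window adjusting unit 602
    def adjust_window(self, first_rtt, second_rtt):
        if first_rtt > second_rtt:
            self.cwnd = self.first_algorithm(first_rtt)
        else:
            self.cwnd = self.second_algorithm(first_rtt, self.target_throughput)
        return self.cwnd

    # data packet sending unit 603
    def send_packets(self, connection, packets):
        connection.send(packets[:self.cwnd])        # send one congestion window of packets

    def first_algorithm(self, first_rtt):           # placeholder
        return self.cwnd + 1

    def second_algorithm(self, first_rtt, target):  # placeholder
        return self.cwnd + 1
```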
  • FIG. 8 is a schematic diagram showing the hardware structure of a transmission control protocol TCP packet transmission apparatus 800 according to the present embodiment, and shows a hardware configuration of the transmission apparatus 800.
  • the transmitting device 800 includes a processor 801, a memory 802, and a network interface 804, and the processor 801 is respectively connected to the memory 802 and the network interface 804 through the bus 803;
  • the transmitting device 800 accesses the network 103 through the network interface 804 to send/receive the TCP data packet;
  • the memory 802 is configured to store computer execution instructions; when the sending device 800 runs, the processor 801 reads the computer execution instructions stored in the memory 802 to perform the transmission control protocol TCP packet sending method, applied to the sending device 800, provided by the foregoing embodiments of the invention.
  • the processor 801 can be a general-purpose central processing unit (CPU), a microprocessor, an application specific integrated circuit (ASIC), or one or more integrated circuits for executing related programs.
  • the memory 802 can be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). Memory 802 can store operating systems and other applications.
  • the program code for implementing the technical solutions provided by the embodiments of the present invention, including the program code of the transmission control protocol TCP packet sending method applied to the sending device 800 provided by the foregoing embodiments, is stored in the memory 802 and executed by the processor 801.
  • the network interface 804 implements network communication between the sending device 800 and other devices or communication networks using, for example but not limited to, a transceiver; optionally, the network interface 804 may be any of various interfaces for accessing a network, such as an Ethernet interface for accessing Ethernet, including but not limited to an RJ-45 interface, RJ-11 interface, SC optical interface, FDDI interface, AUI interface, BNC interface, Console interface, and the like.
  • the bus 803 can include a path for communicating information between various components (e.g., processor 801, memory 802, and network interface 804) in the transmitting device 800.
  • the sending device 800 further includes an input/output interface 805 for receiving input data and information, and outputting an operation result and the like.
  • the transmitting device 800 shown in FIG. 8 only shows the processor 801, the memory 802, the network interface 804, and the bus 803, in a specific implementation process, those skilled in the art should understand that the transmitting Device 800 also contains other devices necessary to achieve proper operation. In the meantime, those skilled in the art will appreciate that the transmitting device 800 may also include hardware devices that implement other additional functions, depending on the particular needs. Moreover, those skilled in the art will appreciate that the transmitting device 800 may also only include the components necessary to implement the embodiments of the present invention, and does not necessarily include all of the devices shown in FIG.
  • a system 100 is provided.
  • the system 100 includes a server 101 and a terminal 102.
  • the server 101 is communicably connected to the terminal 102 through a network 103.
  • the server 101 is the above-described transmission control protocol TCP packet sending device 800, and sends TCP packets to the terminal 102 through the network 103.
  • a system 200 is provided.
  • the system 200 includes a server 201, a first proxy device 204, and a terminal 202.
  • the first proxy device 204 is communicatively connected to the server 201 and the terminal 202 respectively;
  • the server 201 is configured to send, by the first proxy device 204, a TCP data packet to the terminal 202;
  • the first proxy device 204 is the foregoing transmission control protocol TCP packet sending device 800, configured to receive the TCP packets sent by the server 201 to the terminal 202 and send the TCP packets to the terminal 202 on behalf of the server 201.
  • the terminal 202 is configured to pass the target throughput rate by the TCP protocol stack of the terminal 202 to the TCP protocol stack of the first proxy device 204 by using a target throughput rate parameter.
  • a system 300 is provided.
  • the system 300 includes a server 301, a first proxy device 304, a second proxy device 305, and a terminal 302.
  • the first proxy device 304 is communicatively connected to the server 301 and the second proxy device 305 respectively; the server 301 is configured to send TCP data packets to the terminal 302 through the proxy of the first proxy device 304;
  • the first proxy device 304 is the foregoing transmission control protocol TCP packet sending device 800, configured to receive the TCP packets sent by the server 301 to the terminal 302 and send the TCP data packets to the second proxy device 305 on behalf of the server 301;
  • the second proxy device 305 is configured to receive the TCP packet and forward it to the terminal 302.
  • the terminal 302 is configured to pass the target throughput rate to the TCP protocol stack of the second proxy device 305 by using the target throughput parameter from the TCP protocol stack of the terminal 302;
  • the second proxy device 305 is further configured to pass the target throughput rate, through the target throughput rate parameter, from the TCP protocol stack of the second proxy device 305 to the TCP protocol stack of the first proxy device 304.
  • the disclosed systems, apparatuses, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the modules and units is only a logical functional division, and may be implemented in another manner, such as multiple modules or units or components. It can be combined or integrated into another system, or some features can be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or module, and may be electrical, mechanical or otherwise.
  • the modules described as separate components may or may not be physically separated, and the components of the modules may or may not be physical modules, that is, may be located in one place, or may be distributed to multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of hardware plus software function modules.
  • the above-described modules implemented in the form of software function modules can be stored in a computer readable storage medium.
  • the software functional modules described above are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, server, or network device, etc.) to perform some of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes: a removable hard disk, a read-only memory (Read-Only Memory, ROM for short), a random access memory (Random Access Memory, RAM for short), a magnetic disk, an optical disc, or any other medium that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Communication Control (AREA)

Abstract

The embodiments of the present invention disclose a method, an apparatus and a system for sending Transmission Control Protocol (TCP) data packets. The sending method includes: acquiring a first round-trip delay of sending TCP data packets in a network and determining a second round-trip delay; if the first round-trip delay is greater than the second round-trip delay, using the congestion window determined by a first algorithm as a first congestion window; if the first round-trip delay is less than or equal to the second round-trip delay, using the congestion window determined by a second algorithm as the first congestion window; and sending the TCP data packets in the first congestion window. With the technical solution disclosed by the present invention, the congestion window grows in a single step from the current congestion window at the time the first round-trip delay is acquired to the first congestion window, which better satisfies the throughput requirement of the service and makes more effective use of the network bandwidth.

Description

Method, apparatus and system for sending Transmission Control Protocol (TCP) data packets
This application claims priority to Chinese Patent Application No. 201510093011.3, filed with the Chinese Patent Office on March 2, 2015 and entitled "Method, apparatus and system for sending Transmission Control Protocol (TCP) data packets", which is incorporated herein by reference in its entirety.
Technical Field
The embodiments of the present invention relate to the field of network technologies, and in particular to a method, an apparatus and a system for sending Transmission Control Protocol (TCP) data packets.
Background
Network congestion means that too many packets are transmitted in a network while the resources of the store-and-forward nodes in the network are limited, causing the transmission performance of the network to degrade. When network congestion occurs, phenomena such as data loss, increased delay and reduced throughput generally appear; severe network congestion leads to congestion collapse. With the rapid growth of traffic from high-throughput applications (for example, online audio and video playback applications), the amount of data to be transmitted over the network is also growing sharply, requiring the network to maintain a high throughput rate. If the congestion control measures used to coordinate network resources are not reasonable enough, the proper use of high-throughput applications will be severely affected even when the network bandwidth is sufficient.
The Transmission Control Protocol (TCP) is a connection-oriented, reliable, byte-stream-based transport-layer communication protocol defined by IETF RFC 793. Since the birth of TCP, many researchers have proposed a series of TCP congestion control mechanisms, whose goal is to provide self-regulation and recovery so that the load does not exceed the maximum capacity of the network. TCP congestion control includes congestion avoidance and congestion recovery. Congestion avoidance is a preventive mechanism that tries to keep the network from entering a congested state and keeps it working with high throughput and low delay; congestion recovery is a recovery mechanism applied after congestion has already occurred, which restores the network from the congested state back to a state of high throughput and low delay.
To date, one mature implementation of TCP congestion control adjusts the size of the congestion window (CWND) to control the throughput rate of TCP packets. The size of the congestion window is the maximum number of TCP data packets that can be sent within one round-trip time (RTT). The larger the congestion window, the faster data is sent and the higher the throughput rate, but also the more likely network congestion becomes; conversely, the smaller the congestion window, the slower data is sent, the lower the throughput rate, and the less likely network congestion becomes. For example, with a congestion window of one maximum segment size (MSS), after each packet is sent the sender must wait for the receiver's acknowledgment before sending the next packet; this certainly does not cause network congestion, but the throughput rate is extremely low. The purpose of TCP congestion control is to tune the congestion window to an optimal value that maximizes the throughput rate without creating congestion. At present, with the growing demand for throughput, a number of mature window adjustment algorithms exist, including the Reno algorithm and the CUBIC algorithm.
For example, the Reno algorithm is the most widely used and mature TCP congestion control algorithm; its slow start, congestion avoidance, fast retransmit and fast recovery mechanisms are the foundation of many existing algorithms. In the operating mechanism of the Reno algorithm, a certain amount of TCP packet loss must be produced periodically to maintain a dynamic equilibrium; combined with the AIMD (Additive Increase Multiplicative Decrease) mechanism, the congestion window reduction caused by the loss of a single TCP data packet takes a long time to recover from, so bandwidth utilization is not high, and this drawback becomes more pronounced with a large congestion window. In the Reno algorithm, as soon as the loss of a TCP data packet is detected the congestion window is immediately halved; in the congestion recovery phase, the congestion window is increased by 1 MSS after each RTT in which a window of data is transmitted (that is, the increase stride is 1 MSS), so restoring the congestion window from half its size at the time of loss takes a long time. Taking a network with 100 Mbit/s of bandwidth and a delay of 100 milliseconds as an example, when the throughput rate approaches the network bandwidth the congestion window is about 863 MSS, and the Reno algorithm needs 431 rounds of RTT, roughly 43.1 seconds, to restore the congestion window from half its size at the time of loss. Compared with the Reno algorithm, the CUBIC algorithm improves congestion window growth: CUBIC records the congestion window at the time a TCP data packet is lost, increases the window in an approximately slow-start, exponential manner while the recorded congestion window has not been reached, and greatly reduces the growth stride of the congestion window when it approaches the recorded congestion window; after maintaining this for a period of time, the growth stride is readjusted to approximately exponential fast growth. If that period was only maintained by chance, CUBIC still grows the congestion window rapidly afterwards, which inevitably causes more TCP data packets to be lost when the network becomes congested again, further worsening the network condition.
Therefore, both of the above algorithms (the Reno algorithm and the CUBIC algorithm) share the following drawback when performing TCP congestion control: they grow the congestion window by preset fixed values, cannot effectively utilize currently good network bandwidth, and when adjusting the congestion window may even adopt an adjustment strategy completely contrary to the actual network condition, affecting the throughput required by the application.
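The Reno recovery example in the background above can be checked with a short calculation; the sketch below only reproduces that arithmetic, and the segment size of 1448 bytes is an assumption that the text does not state.

```python
# Rough check of the Reno recovery figures quoted above.
bandwidth_bps = 100e6        # 100 Mbit/s of network bandwidth
rtt_s = 0.1                  # 100 ms round-trip delay
mss_bytes = 1448             # assumed maximum segment size

window_mss = bandwidth_bps / 8 * rtt_s / mss_bytes   # ~863 MSS near full utilisation
rounds = int(window_mss / 2)                         # Reno halves the window, then adds 1 MSS per RTT
print(round(window_mss), rounds, round(rounds * rtt_s, 1))   # ~863, ~431, ~43.1 s
```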
Summary of the Invention
In view of this, the embodiments of the present invention provide a method, an apparatus and a system for sending Transmission Control Protocol TCP data packets, in which the congestion window is adjusted according to the throughput rate that the service expects to obtain and the round-trip delay of sending the TCP data packets, so that controlling the sending of the TCP data packets with the adjusted congestion window can satisfy the throughput rate of the service as far as possible.
According to a first aspect, an embodiment of the present invention provides a method for sending Transmission Control Protocol TCP data packets, where the method includes:
acquiring a first round-trip delay of sending TCP data packets in a network;
determining a second round-trip delay, where the second round-trip delay is the round-trip delay at which the congestion window determined according to a first algorithm and the congestion window determined according to a second algorithm have equal size, the first algorithm determines a growth stride of the congestion window according to the first round-trip delay, the second algorithm determines a growth stride of the congestion window according to the first round-trip delay and a target throughput rate, and the target throughput rate is the throughput rate that the service corresponding to the TCP data packets expects to obtain;
if the first round-trip delay is greater than the second round-trip delay, using the congestion window determined by the first algorithm as a first congestion window;
if the first round-trip delay is less than or equal to the second round-trip delay, using the congestion window determined by the second algorithm as the first congestion window; and
sending the TCP data packets in the first congestion window.
With reference to the first aspect, in a first possible implementation, the growth stride of the congestion window determined by the second algorithm is positively correlated with the target throughput rate and negatively correlated with the first round-trip delay, and the growth stride of the congestion window determined by the first algorithm is negatively correlated with the first round-trip delay.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in a second possible implementation, the target throughput rate is determined according to the bit rate of the service parsed from the packets of the TCP data packets.
With reference to the second possible implementation of the first aspect, in a third possible implementation, the algorithm for determining the target throughput rate according to the bit rate of the service parsed from the packets of the TCP data packets is: the target throughput rate is equal to the bit rate of the service multiplied by an expansion factor, where the expansion factor is greater than 1.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in a fourth possible implementation, the method further includes:
if packet loss occurs in the process of sending the TCP data packets, adjusting the congestion window of the TCP data packet transmission to a second congestion window according to a third algorithm, where the second congestion window is determined according to a third round-trip delay measured when the TCP data packet is lost, and the stride by which the congestion window of the TCP data packet transmission is reduced to the second congestion window is negatively correlated with the third round-trip delay; and
sending the TCP data packets in the second congestion window.
With reference to the fourth possible implementation of the first aspect, in a fifth possible implementation, the adjusting the congestion window of the TCP data packet transmission to a second congestion window according to a third algorithm specifically includes:
if the third round-trip delay is equal to the lower limit of a delay interval, using the congestion window at the time the TCP data packet was lost as the second congestion window; if the third round-trip delay is equal to the upper limit of the delay interval, using a preset congestion window as the second congestion window, where the upper limit of the delay interval is the timeout retransmission time RTO of the TCP in the network, and the lower limit of the delay interval is the round-trip delay when the network is lightly loaded.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in a sixth possible implementation, the using the congestion window determined by the first algorithm as the first congestion window specifically includes:
if the first round-trip delay is equal to the round-trip delay of the lightly loaded network, taking the growth stride of the congestion window as a fast recovery value to obtain the first congestion window, where the fast recovery value is of the same order of magnitude as slow start;
if the first round-trip delay is equal to the timeout retransmission time RTO of the TCP in the network, taking the growth stride of the congestion window as one maximum segment size MSS to obtain the first congestion window; and
if the first round-trip delay varies in the interval between the round-trip delay of the lightly loaded network and the timeout retransmission time RTO of the TCP in the network, taking the growth stride of the congestion window as a value between 1 MSS and the fast recovery value that is negatively correlated with the first round-trip delay, to obtain the first congestion window.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in a seventh possible implementation, the using the congestion window determined by the second algorithm as the first congestion window specifically includes:
calculating a target window according to the target throughput rate and the first round-trip delay, and taking the growth stride of the congestion window as the difference between the target window and the current congestion window of the TCP in the network to determine the first congestion window.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in an eighth possible implementation, the method further includes:
detecting an actual throughput rate of sending the TCP data packets in the network;
if the ratio of the actual throughput rate to the target throughput rate is greater than a first threshold and the difference between a fourth round-trip delay of sending the TCP data packets in the network and the round-trip delay detected when the network is lightly loaded is less than a second threshold, increasing the target throughput rate; and
if the ratio of the actual throughput rate to the target throughput rate is less than a third threshold and the difference between the fourth round-trip delay and the round-trip delay detected when the network is lightly loaded is greater than a fourth threshold, decreasing the target throughput rate.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in a ninth possible implementation, the target throughput rate is passed to the TCP protocol stack through a target throughput rate parameter.
According to a second aspect, an embodiment of the present invention provides an apparatus for sending Transmission Control Protocol TCP data packets, where the apparatus includes:
a delay determining unit, configured to acquire a first round-trip delay of sending TCP data packets in a network and determine a second round-trip delay, where the second round-trip delay is the round-trip delay at which the congestion window determined according to a first algorithm and the congestion window determined according to a second algorithm have equal size, the first algorithm determines a growth stride of the congestion window according to the first round-trip delay, the second algorithm determines a growth stride of the congestion window according to the first round-trip delay and a target throughput rate, and the target throughput rate is the throughput rate that the service corresponding to the TCP data packets expects to obtain;
a window adjusting unit, configured to: if the first round-trip delay is greater than the second round-trip delay, use the congestion window determined by the first algorithm as a first congestion window, and if the first round-trip delay is less than or equal to the second round-trip delay, use the congestion window determined by the second algorithm as the first congestion window; and
a data packet sending unit, configured to send the TCP data packets in the first congestion window.
With reference to the second aspect, in a first possible implementation, the growth stride of the congestion window determined by the second algorithm is positively correlated with the target throughput rate and negatively correlated with the first round-trip delay, and the growth stride of the congestion window determined by the first algorithm is negatively correlated with the first round-trip delay.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in a second possible implementation, the target throughput rate is determined according to the bit rate of the service parsed from the packets of the TCP data packets.
With reference to the second possible implementation of the second aspect, in a third possible implementation, the algorithm for determining the target throughput rate according to the bit rate of the service parsed from the packets of the TCP data packets is: the target throughput rate is equal to the bit rate of the service multiplied by an expansion factor, where the expansion factor is greater than 1.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in a fourth possible implementation, the window adjusting unit is further configured to: if packet loss occurs in the process of sending the TCP data packets, adjust the congestion window of the TCP data packet transmission to a second congestion window according to a third algorithm, where the second congestion window is determined according to a third round-trip delay measured when the TCP data packet is lost, and the stride by which the congestion window of the TCP data packet transmission is reduced to the second congestion window is negatively correlated with the third round-trip delay; and
the data packet sending unit is further configured to send the TCP data packets in the second congestion window.
With reference to the fourth possible implementation of the second aspect, in a fifth possible implementation, the window adjusting unit is further configured to adjust the congestion window of the TCP data packet transmission to the second congestion window according to the third algorithm, specifically:
the window adjusting unit is configured to: if the third round-trip delay is equal to the lower limit of a delay interval, use the congestion window at the time the TCP data packet was lost as the second congestion window, and if the third round-trip delay is equal to the upper limit of the delay interval, use a preset congestion window as the second congestion window, where the upper limit of the delay interval is the timeout retransmission time RTO of the TCP in the network, and the lower limit of the delay interval is the round-trip delay when the network is lightly loaded.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in a sixth possible implementation, the window adjusting unit being configured to use the congestion window determined by the first algorithm as the first congestion window specifically includes:
the window adjusting unit is configured to: if the first round-trip delay is equal to the round-trip delay of the lightly loaded network, take the growth stride of the congestion window as a fast recovery value to obtain the first congestion window, where the fast recovery value is of the same order of magnitude as slow start;
the window adjusting unit is configured to: if the first round-trip delay is equal to the timeout retransmission time RTO of the TCP in the network, take the growth stride of the congestion window as one maximum segment size MSS to obtain the first congestion window; and
the window adjusting unit is configured to: if the first round-trip delay varies in the interval between the round-trip delay of the lightly loaded network and the timeout retransmission time RTO of the TCP in the network, take the growth stride of the congestion window as a value between 1 MSS and the fast recovery value that is negatively correlated with the first round-trip delay, to obtain the first congestion window.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in a seventh possible implementation, the window adjusting unit being configured to use the congestion window determined by the second algorithm as the first congestion window is specifically:
the window adjusting unit is configured to calculate a target window according to the target throughput rate and the first round-trip delay, and take the growth stride of the congestion window as the difference between the target window and the current congestion window of the TCP in the network to determine the first congestion window.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in an eighth possible implementation, the apparatus further includes:
a throughput rate detecting unit, configured to detect an actual throughput rate of sending the TCP data packets in the network; and
a target throughput rate adjusting unit, configured to: if the ratio of the actual throughput rate to the target throughput rate is greater than a first threshold and the difference between a fourth round-trip delay of sending the TCP data packets in the network and the round-trip delay detected when the network is lightly loaded is less than a second threshold, increase the target throughput rate, and if the ratio of the actual throughput rate to the target throughput rate is less than a third threshold and the difference between the fourth round-trip delay and the round-trip delay detected when the network is lightly loaded is greater than a fourth threshold, decrease the target throughput rate.
With reference to the second aspect or any one of the foregoing possible implementations of the second aspect, in a ninth possible implementation, the target throughput rate is passed to the TCP protocol stack through a target throughput rate parameter.
第三方面，本发明实施例提供了一种传输控制协议TCP数据包的发送装置，所述发送装置包括处理器、存储器和网络接口，所述处理器分别与所述存储器和所述网络接口通过总线连接；
所述存储器用于存储计算机执行指令,当所述发送装置运行时,所述处理器读取所述存储器存储的所述计算机执行指令,以执行第一方面或基于第一方面的以上任一项可能的实现方式所述的传输控制协议TCP数据包的发送方法。
第四方面,本发明实施例提供了一种系统,所述系统包括服务器和终端,所述服务器通过网络与所述终端通信连接;所述服务器为第二方面或基于第二方面的以上任一项可能的实现方式或第三方面提供的传输控制协议TCP数据包的发送装置,通过所述网络向所述终端发送TCP数据包。
第五方面,本发明实施例提供了一种系统,所述系统包括服务器、第一代理设备和终端,所述第一代理设备分别与所述服务器和所述终端通信连接;
所述服务器,用于经所述第一代理设备代理向所述终端发送TCP数据包;
所述第一代理设备为第二方面或基于第二方面的以上任一项可能的实现方式或第三方面提供的传输控制协议TCP数据包的发送装置,用于接收所述服务器向所述终端发送的所述TCP数据包,并代理所述服务器向所述终端发送所述TCP数据包。
结合第五方面,在第一种可能的实现方式中,所述终端,用于将目标吞吐率通过目标吞吐率参数由所述终端的TCP协议栈传递给所述第一代理设备的TCP协议栈。
第六方面,本发明实施例提供了一种系统,所述系统包括服务器、第一代理设备、第二代理设备和终端,所述第一代理设备分别与服务器和第二代理设备通信连接;
所述服务器,用于经所述第一代理设备代理向所述终端发送TCP数据包;
所述第一代理设备为第二方面或基于第二方面的以上任一项可能的实现方式或第三方面提供的传输控制协议TCP数据包的发送装置,用于接收所述服务器向所述终端发送的所述TCP数据包,并代理所述服务器向所述第二代理设备发送所述TCP数据包;
所述第二代理设备,用于接收所述TCP数据包并转发至所述终端。
结合第六方面,在第一种可能的实现方式中,所述终端,用于将目标吞吐率通过目标吞吐率参数由所述终端的TCP协议栈中传递给所述第二代理设备的TCP协议栈中;
所述第二代理设备,还用于将所述目标吞吐率通过所述目标吞吐率参数由所述第二代理设备的TCP协议栈中传递给所述第一代理设备的TCP协议栈中。
通过上述方案，根据目标吞吐率和反映当前网络状况的第一往返时延确定第一拥塞窗口，以第一拥塞窗口更新当前的拥塞窗口，在当前网络状况下以第一拥塞窗口控制TCP数据包的发送能够尽量满足业务所期望获得的吞吐率；跟随目标吞吐率和网络状况，从当前的拥塞窗口直接一步到位地增长到第一拥塞窗口，更能够满足业务的吞吐率需求，也更有效地利用网络带宽。
附图说明
图1A为传输控制协议TCP数据包的发送方法的应用场景的一种系统逻辑结构示意图;
图1B为传输控制协议TCP数据包的发送方法的应用场景的又一种系统逻辑结构示意图;
图1C为传输控制协议TCP数据包的发送方法的应用场景的又一种系统逻辑结构示意图;
图2为传输控制协议TCP数据包的发送方法的一种流程图;
图3为数据包丢失后基于TCP数据包的发送方法的一种工作流程图;
图4为基于图2所示的传输控制协议TCP数据包的发送方法的一种可选的优化流程图;
图5为传输控制协议TCP数据包的发送方法中更新目标吞吐率的一种流程图;
图6为传输控制协议TCP数据包的发送装置600的逻辑结构示意图;
图7为基于图6所示的传输控制协议TCP数据包的发送装置600的一种优化逻辑结构示意图;
图8为依据本发明一实施例提供的传输控制协议TCP数据包的发送装置800的一种硬件结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
图1A是本发明实施例提供的传输控制协议TCP数据包的发送方法的应用场景的一种系统逻辑结构示意图,为便于说明,仅提供了与本发明实施例相关的部分。如图1A所示,参见图1A提供的系统100,该系统100包括服务器101、终端102和网络103;服务器101通过网络103与终端102互连,通过该网络103实现所述服务器101与所述终端102之间的数据交互;服务器101可通过该网络103向终端102发送TCP数据包,反之,终端102也可通过该网络103向服务器101发送TCP数据包;该网络103为基于传输控制协议/因特网互联协议(Transmission Control Protocol/Internet Protocol,简称TCP/IP协议)建立的,该传输控制协议/因特网互联协议又名网络通讯协议。
可选地,该网络103可以包括交换机、路由器等转发TCP数据包的转发设备,通过该转发设备转发所述服务器101与所述终端102之间交互的TCP数据包。
可选地,本发明实施例所述的服务器为采用电子器件构成的具有数据处理功能的电子设备;该电子设备由集成电路、晶体管、电子管等电子元器件组成;可在该电子设备上运行由程序指令组成的软件,实现数据处理、控制其他设备等等功能。如为该电子设备安装操作系统后,如果该电子设备已安装网卡并完成网络配置,该电子设备可接入基于TCP/IP协议组建的网络,与其他电子设备(如终端)之间进行TCP数据包的交互以实现数据交互。
可选地,与服务器类似地,本发明实施例所述的终端为采用电子器件构成的具有数据处理功能的电子设备;该终端可接入基于TCP/IP协议组建的网络,与其他电子设备(如服务器)之间进行TCP数据包的交互以实现数据交互。
可选地,服务器101与终端102直接通信连接,服务器101通过网络103与终端102之间交互的TCP数据包不需经过网络103中的转发设备(如路由器)进行TCP数据包的转发。
可选地，服务器101可与终端102进行对等的TCP数据包交互。服务器101可向终端102发送TCP数据包，对应地，终端102也可向服务器101发送TCP数据包。
可选地,图1A提供的系统100,系统100中,服务器101通过网络103与终端102进行主从式的TCP数据包交互;其中,该服务器101作为服务端,该终端102作为主从式通信中与该服务端对应的客户端。服务器101可向终端102发送TCP数据包,如终端102从服务器101下载音视频文件时服务器101向终端102发送载有该音视频文件的所述TCP数据包;对应地,终端102也可向服务器101发送TCP数据包,如终端102向服务器101上传文本文件时终端102向服务器101发送载有该文本文件的所述TCP数据包。
图1B是本发明实施例提供的传输控制协议TCP数据包的发送方法的应用场景的又一种系统逻辑结构示意图,为便于说明,仅提供了与本发明实施例相关的部分。参见图1B提供的系统200,该系统200包括服务器201、终端202、网络203和第一代理设备204;服务器201通过网络203与终端202进行TCP数据包交互的过程中,如果服务器201的TCP的协议栈不支持被修改,则添加第一代理设备204,第一代理设备204支持修改其的TCP的协议栈,由该第一代理设备204代理该服务器201与该终端202进行所述TCP数据包的交互,图1B中第一代理设备204与终端202的TCP的数据包交互与图1A中服务器101与终端102的TCP的数据包交互类似;当然也可因其他因素添加该第一代理设备204,例如为减小该服务器201因发送TCP数据包所带来的负荷而添加该第一代理设备204;优选地,所述第一代理设备204采用代理服务器实现;优选地,所述第一代理设备204是路由器上的业务板卡,通过对该业务板卡进行逻辑编程以实现上述功能。
图1C是本发明实施例提供的传输控制协议TCP数据包的发送方法的应用场景的又一种系统逻辑结构示意图,为便于说明,仅提供了与本发明实施例相关的部分。参见图1C提供的系统300,该系统300包括服务器301、终端302、网络303和第一代理设备304;服务器301通过网络303与终端302进行TCP数据包交互的过程中,如果服务器301的TCP的协议栈不支持被修改,则添加第一代理设备304,该第一代理设备304支持修改其的TCP的协议栈,由该第一代理设备304代理该服务器301进行所述TCP数据包的交互,另外,还可以添加第二代理设备305,由该第二代理设备305代理该终端302进行所述TCP数据包的交互,从而在所述第一代理设备304与所述第二代理设备305之间进行所述TCP数据包的交互,图1C中第一代理设备304与第二代理设备305 的TCP的数据包交互与图1A中服务器101与终端102的TCP的数据包交互类似;当然也可因其他因素同时添加该第一代理设备304和该第二代理设备305,参见上述,在此不再赘述。其中,添加第二代理设备305的一种原因:该终端302的TCP的协议栈不支持被修改;该第二代理设备305支持修改其的TCP的协议栈,由该第二代理设备305代理该终端302进行所述TCP数据包的交互,从而在所述第一代理设备304与所述第二代理设备305之间进行所述TCP数据包的交互。优选地,所述第一代理设备304采用代理服务器实现;优选地,所述第一代理设备304是路由器上的业务板卡,通过对该业务板卡进行逻辑编程以实现上述功能。优选地,所述第二代理设备305采用代理服务器实现;优选地,所述第二代理设备305是路由器上的业务板卡,通过对该业务板卡进行逻辑编程以实现上述功能。
本发明实施例中,服务器在网络向所述终端发送TCP数据包的过程中为了尽量满足业务所期望获得的吞吐率,设计了所述TCP的TCP数据包的发送方法。需说明的是,在图1A中,本发明实施例提供的方法应用于服务器101;在图1B中,本发明实施例提供的方法应用于第一代理设备204;在图1C中,本发明实施例提供的方法可应用于第一代理设备304。
下面以本发明实施例提供的方法应用于服务器101为例,对本发明实施例提供的所述TCP的TCP数据包的发送方法进行详细描述,图2示出了该方法的基本实现流程,但为了便于描述,图2仅示出了与本发明实施例相关的部分。
如图2所示的所述TCP的TCP数据包的发送方法,包括:步骤A201、步骤A202、步骤A203、步骤A204和步骤A205。
步骤A201,获取在网络中发送TCP数据包的第一往返时延(Round Trip Time,RTT)。
以本发明实施例提供的方法应用于图1A的服务器101为例,详述步骤A201:
服务器向终端提供业务时，会针对提供的每个业务分别建立一条TCP流，在每条TCP流中发送载有对应业务的TCP数据包。在发送所述TCP数据包的过程中，步骤A201每接收到一个TCP数据包的确认响应（ACK），就计算发送该TCP数据包所需的RTT，将计算出的RTT作为第一往返时延；可选地，采用RFC6298提供的算法（Jacobson/Karels算法）计算该TCP数据包的RTT。
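为便于理解，下面给出一段示意性的C语言草图，按RFC 6298（Jacobson/Karels算法）的思路在每次收到确认响应时更新平滑往返时延；其中的函数名、变量名与时间单位均为本说明所假设，并非协议栈中的真实接口：
#include <math.h>

static double srtt = 0.0;     /* 平滑往返时延（毫秒），0 表示尚无样本 */
static double rttvar = 0.0;   /* 往返时延方差估计（毫秒） */

/* sample_rtt：本次ACK测得的RTT样本；返回的平滑值可作为“第一往返时延” */
double update_first_rtt(double sample_rtt)
{
    const double alpha = 0.125, beta = 0.25;   /* RFC 6298 推荐系数 */
    if (srtt == 0.0) {                         /* 第一个样本 */
        srtt = sample_rtt;
        rttvar = sample_rtt / 2.0;
    } else {
        rttvar = (1 - beta) * rttvar + beta * fabs(srtt - sample_rtt);
        srtt   = (1 - alpha) * srtt + alpha * sample_rtt;
    }
    return srtt;
}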
步骤A202,确定第二往返时延,所述第二往返时延为按照第一算法确定的拥塞窗口与按照第二算法确定的拥塞窗口具有同等大小时的往返时延,其中,所述第一算法根 据所述第一往返时延确定拥塞窗口的增长步幅,所述第二算法根据所述第一往返时延和目标吞吐率确定拥塞窗口的增长步幅,所述目标吞吐率为所述TCP数据包对应的业务所期望获得的吞吐率。
需说明的是,本发明实施例对所述业务的具体形式不做限定,包括:音频业务、视频业务、音视频业务、在线查杀病毒业务、即时通信业务以及在线应用业务等各种应用业务。
服务器向终端提供该业务的过程中,如果期望正常提供该业务,则服务器发送载有该业务的所述TCP数据包需要一定的吞吐率,本发明实施例将该吞吐率定义为目标吞吐率。服务器向终端提供该业务的过程中,会为该业务建立一条TCP流,并为该条TCP流建立一个拥塞窗口,在该拥塞窗口的控制下通过该TCP流发送载有该业务的所述TCP数据包,服务器能够提供的吞吐率由该拥塞窗口确定。从而,为了在网络的当前网络状况下,尽量达到业务所需的吞吐率,需通过调整该拥塞窗口实现。以此类推,假如服务器同时向终端提供多个业务,分别确定服务器向终端提供每个业务分别所需的目标吞吐率;为每个业务分别建立TCP流,针对每条TCP流分别设置一个拥塞窗口,还针对每个业务,分别调整对应的拥塞窗口以提供业务所期望获得的吞吐率。
本发明实施例提供了第一算法和第二算法,根据网络的网络状况和业务所需的目标吞吐率,采用第一算法或第二算法来调整拥塞窗口的大小,采用调整得到的该拥塞窗口来控制所述TCP数据包的发送,使得终端获得期望的吞吐率。具体地,步骤A202是根据第一算法或第二算法确定该拥塞窗口的增长步幅,以实现对拥塞窗口的大小的调整。
另外，具体是采用第一算法或采用第二算法来调整拥塞窗口，根据反映网络状况的RTT确定，步骤A202确定了第二往返时延。如果反映网络状况的RTT小于或等于该第二往返时延，即反映网络状况的RTT较小，当前网络状况良好，则采用第二算法来确定拥塞窗口的增长步幅；在采用第二算法确定拥塞窗口的增长步幅时，需同时根据所述反映网络状况的RTT和所述目标吞吐率确定该增长步幅，即第二算法同时考虑网络状况和目标吞吐率，通常情况下，以第二算法确定的拥塞窗口控制所述TCP数据包的发送时，服务器能够提供业务所需的目标吞吐率。如果反映网络状况的RTT大于该第二往返时延，即反映网络状况的RTT较大，当前网络状况有一定拥塞，采用第一算法来确定拥塞窗口的增长步幅；在采用第一算法确定拥塞窗口的增长步幅时，根据反映网络状况的RTT确定该增长步幅，即第一算法更多地考虑网络状况，通常情况下，以第一算法确定的拥塞窗口控制所述TCP数据包的发送时，服务器提供的吞吐率达不到业务所需的目标吞吐率。
步骤A203,如果所述第一往返时延大于所述第二往返时延,则以第一算法确定的拥塞窗口作为第一拥塞窗口。
步骤A204,如果所述第一往返时延小于或等于所述第二往返时延,则以第二算法确定的拥塞窗口作为所述第一拥塞窗口。
步骤A205,将所述TCP数据包以所述第一拥塞窗口进行发送。
具体地，在所述网络发送载有业务的所述TCP数据包的过程中，如果当前需根据目标吞吐率和反映当前网络状况的RTT调整拥塞窗口，以使服务器尽量提供满足业务需求的吞吐率，则首先执行步骤A201计算反映当前网络状况的RTT，将计算出的RTT作为所述第一往返时延。
如果反映当前网络状况的所述第一往返时延较大,该第一往返时延大于所述第二往返时延,表示当前网络状况存在一定程度的拥塞,则执行步骤A203采用第一算法来确定拥塞窗口的增长步幅,根据所述第一往返时延确定拥塞窗口的增长步幅;在检测第一往返时延时所对应的拥塞窗口的基础上,增加根据第一算法确定的该增长步幅得到第一拥塞窗口。
如果反映当前网络状况的所述第一往返时延较小,该第一往返时延小于或等于所述第二往返时延,表示当前网络状况良好,则执行步骤A204采用第二算法来确定拥塞窗口的增长步幅,同时根据所述第一往返时延和所述目标吞吐率确定拥塞窗口的增长步幅;在检测第一往返时延时所对应的拥塞窗口的基础上,增加根据第二算法确定的该增长步幅得到第一拥塞窗口。
在某个业务的TCP流中，对于控制该业务的TCP数据包发送的拥塞窗口，每执行步骤A203或每执行步骤A204重新确定一次第一拥塞窗口，便以该第一拥塞窗口更新一次该业务对应的拥塞窗口，以第一拥塞窗口替代更新前的拥塞窗口；在该业务的TCP流中，更新后以第一拥塞窗口控制所述TCP数据包的可发送个数。
值得说明的是,对于在某个业务的TCP流中用于控制该业务的TCP数据包发送的拥塞窗口,本发明实施例对在哪个时刻或哪种条件触发根据所述第一往返时延和目标吞吐率确定第一拥塞窗口并以该第一拥塞窗口更新当前的拥塞窗口不做限定;例如,可实时根据所述第一往返时延和目标吞吐率确定第一拥塞窗口,并以该第一拥塞窗口更新该 业务的TCP流的当前拥塞窗口;再例如,可每间隔特设时间根据所述第一往返时延和目标吞吐率确定第一拥塞窗口,并以该第一拥塞窗口更新当前的拥塞窗口;再例如,在所述TCP数据包发生丢包并将拥塞窗口减小至第二拥塞窗口之后,如果正常接收到第一轮以第二拥塞窗口控制发送所述TCP数据包的确认响应(ACK),才根据所述第一往返时延和目标吞吐率确定第一拥塞窗口,并以该第一拥塞窗口更新当前的拥塞窗口。
因此，在第一往返时延反映的当前网络状况下，本发明实施例能够根据业务期望获得的目标吞吐率，调整出尽量满足该目标吞吐率的第一拥塞窗口，以第一拥塞窗口控制发送TCP数据包，能够尽可能提供该业务期望获得的目标吞吐率。
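为便于理解，下面给出一段示意性的C语言草图，概括步骤A203与步骤A204的选择逻辑；其中growth_step_alg1与target_cwnd_alg2为假设的函数名（分别对应第一算法与第二算法，后文另有对应草图），参数从简，均非真实实现：
/* 假设的接口声明：第一算法的增长步幅、第二算法得到的目标拥塞窗口 */
double growth_step_alg1(double rtt1);
double target_cwnd_alg2(double rtt1, double cwnd);

/* rtt1：第一往返时延；rtt2：第二往返时延；cwnd：当前拥塞窗口（字节） */
double first_cwnd(double rtt1, double rtt2, double cwnd)
{
    if (rtt1 > rtt2)                           /* 步骤A203：网络有一定拥塞，用第一算法 */
        return cwnd + growth_step_alg1(rtt1);
    else                                       /* 步骤A204：网络状况良好，用第二算法 */
        return target_cwnd_alg2(rtt1, cwnd);
}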
下面基于图1A的服务器101,在服务器向终端提供业务的TCP流中发生丢包后采用本发明实施例提供的方法调整拥塞窗口为例,详述本发明实施例提供的方法,详述如下:
首先,获取在网络中发送业务的所述TCP数据包所需的目标吞吐率,并在所述TCP的协议栈中添加所述目标吞吐率。
具体地,如果服务器向终端正常提供该业务,服务器发送载有该业务的所述TCP数据包需要一定的吞吐率,将该业务所需的吞吐率定义为目标吞吐率。
因服务器向终端提供业务时,会为该业务建立一条TCP流,服务器在该条TCP流中发送载有该业务的TCP数据包。另外,服务器向终端提供该业务的过程中,还为该条TCP流建立一个拥塞窗口;在该拥塞窗口的控制下控制该条TCP流中载有该业务的所述TCP数据包的发送,因该拥塞窗口确定了最多能发送的TCP数据包的个数,所以可以通过调整该拥塞窗口的大小来调整服务器为该业务提供的吞吐率。
由于现有的TCP协议栈中没有目标吞吐率这一参数,为根据该目标吞吐率调整拥塞窗口,需对服务器的TCP协议栈进行修改,添加目标吞吐率这一参数;图1A的服务器101支持对其的所述TCP的协议栈进行修改。另外,在服务器向终端提供业务的过程中,服务器直接向终端发送载有该业务的所述TCP数据包,而不是如图1B由第一代理设备代理该服务器向该终端发送载有该业务的所述TCP数据包;这样,该服务器仅向终端提供某个业务时,由服务器根据TCP的协议栈中的目标吞吐率和反应当前网络状况的第一往返时延调整拥塞窗口。
可选地，作为在图1A的服务器101的所述TCP的协议栈中添加所述目标吞吐率的一种可选实施方式，对套接口进行修改，增添目标吞吐率参数，待确定服务器向终端提供业务所需的目标吞吐率之后，将确定的目标吞吐率对该目标吞吐率参数赋值，并通过该套接口将该目标吞吐率参数及其对应的赋值（即业务期望获得的目标吞吐率）向服务器的所述TCP的协议栈传输，再在所述TCP的协议栈中添加该目标吞吐率参数及其对应的赋值；
举例说明,在“setsockopt()”这一套接口函数中添加“target_throughput”这一目标吞吐率参数,待根据服务器向终端提供业务所需的吞吐率确定该业务期望获得的目标吞吐率之后,以确定的该目标吞吐率对该“target_throughput”赋值,通过“setsockopt()”将“target_throughput”及其赋值传输至服务器的所述TCP的协议栈,再在所述TCP的协议栈中添加“target_throughput”这一目标吞吐率参数及其赋值。
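作为一段示意性的C语言草图（仅为说明性假设），下面给出通过扩展后的套接口向TCP协议栈传入目标吞吐率的一种可能调用方式；其中选项名TCP_TARGET_THROUGHPUT及其取值均为假设，标准套接口并不存在该选项，需按上文对协议栈和setsockopt()作相应修改后方可使用：
#include <sys/socket.h>
#include <netinet/in.h>

#define TCP_TARGET_THROUGHPUT 0x100   /* 假设的自定义套接口选项值 */

/* 将目标吞吐率（单位：bit/s）写入指定TCP连接的协议栈参数中 */
int set_target_throughput(int sockfd, unsigned int target_throughput_bps)
{
    return setsockopt(sockfd, IPPROTO_TCP, TCP_TARGET_THROUGHPUT,
                      &target_throughput_bps, sizeof(target_throughput_bps));
}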
然后,在所述网络发送所述TCP数据包的过程中,如果发生所述TCP数据包的丢失,依次执行步骤B301、步骤B302、步骤B303和步骤B304,参见图3。
步骤B301,在所述网络发送所述TCP数据包的过程中,如果所述TCP数据包丢失,则将所述TCP数据包丢失时的拥塞窗口调整为第二拥塞窗口,在所述网络控制所述TCP数据包以所述第二拥塞窗口发送。
具体地,服务器向终端提供业务的TCP流中发送载有该业务的所述TCP数据包的过程中,终端正确顺序接收某个TCP数据包时会向服务器反馈与该TCP数据包对应的确认响应(Acknowledgement,ACK),服务器在接收到该TCP数据包的确认响应时,才将缓存中的该个TCP数据包删除,在该缓存中添加待发送的其它TCP数据包;本发明实施例在基于该缓存进一步设置了拥塞窗口,通过该拥塞窗口进行拥塞控制。在业务的TCP流发送该拥塞窗口中的TCP数据包的过程中,如果该拥塞窗口中的某个TCP数据包丢失,则减小该TCP流的拥塞窗口,将所述拥塞窗口调整为第二拥塞窗口,该第二拥塞窗口小于该TCP数据包丢失时的拥塞窗口,具体根据哪种算法设置该第二拥塞窗口不做限定,如现有的Reno算法或者CUBIC算法。另外,对于服务器如何确定该拥塞窗口中的某个TCP数据包已丢失不做限定,对触发TCP数据包丢失的场景也不做限定;如,当服务器发送该TCP数据包之后超过预设时间仍未收到终端对该TCP数据包的确认响应;再如,服务器发送该TCP数据包之后在网络中传输该TCP数据包时丢失该TCP数据包,举例,在无线网络中传输TCP数据包时随机丢失的TCP数据包;再如,服务器发送该TCP数据包之后终端未及时反馈该TCP数据包的确认响应;再如,服务 器向终端依次发送多个TCP数据包,TCP数据包到达终端的时间乱序,终端已收到排序在后的多个TCP数据包(例如,排序在后的三个TCP数据包)之后仍未接收到排序在前的某个TCP数据包,则终端会在每收到排序在后的每个TCP数据包时向服务器发送请求排序在先的该个TCP数据包的确认响应(ACK响应),服务器在连续多次(例如,三次)接收到该确认响应,则服务器判定该个排序在先的该个TCP数据包已丢失。
发生所述TCP数据包丢失之后,步骤B301将所述TCP数据包丢失时的拥塞窗口调整为第二拥塞窗口,以所述第二拥塞窗口控制所述TCP数据包的发送,服务器最多可向所述网络发送的TCP数据包的个数由所述第二拥塞窗口确定。
需说明的是,如果网络状况持续恶化(如网络带宽持续减小)或者终端出现问题而导致持续的TCP数据包丢失,可能依次执行多个步骤B301。当然,如果网络状况不变或优化(如网络带宽不变或者增大)、且终端没问题,则会进入步骤B302。
步骤B302,如果接收到以所述第二拥塞窗口发送所述TCP数据包的确认响应,则确定与所述第二拥塞窗口对应的第一往返时延。
需说明的是,“第一拥塞窗口”中的“第一”、“第二拥塞窗口”中的“第二”均为代指,仅用于相互区分。
具体地,步骤B301将所述TCP数据包丢失时的所述TCP的拥塞窗口调整为第二拥塞窗口之后,服务器在业务的TCP流中以所述第二拥塞窗口控制该业务的TCP数据包向终端发送,如果服务器完成第一轮将所述第二拥塞窗口中的所述TCP数据包发送至终端、并接收到第一轮发送的所有TCP数据包的确认响应,则计算第一轮的第二拥塞窗口中最后一个TCP数据包的RTT,将计算出的该RTT定义为第一往返时延;
举例说明，在服务器完成第一轮将所述第二拥塞窗口中的所述TCP数据包发送至终端、并接收到第一轮发送的所有TCP数据包的确认响应时，采用RFC6298提供的算法（Jacobson/Karels算法）计算最后接收到的确认响应（第一轮发送的所有TCP数据包的确认响应中，最后接收到的确认响应）的RTT，将计算出的RTT作为第一往返时延。
步骤B303,基于所述目标吞吐率和所述第一往返时延,根据第一算法或第二算法确定第一拥塞窗口。
具体地，在网络并非轻载而是具有一定负荷的情况下，所述第一往返时延是与其所反映的当前网络状况（如网络中转发TCP数据包的网络路径长度）强相关的，网络状况越好，所述第一往返时延相对越小；极端情况下，网络严重超负荷时，所述第一往返时延与该网络的超时重传RTO（Retransmission TimeOut）相等，该网络处于严重拥塞。
如果步骤B302接收到第一轮的第二拥塞窗口中的TCP数据包的确认响应,代表网络可能允许传输更多TCP数据包;这时,可适当增大所述第二拥塞窗口,将所述第二拥塞窗口增大至所述第一拥塞窗口。
本发明实施例是采用第一算法或第二算法确定该第一拥塞窗口;尤其在该第二算法中,引入了所述目标吞吐率这一参数,同时引入了所述第一往返时延这一参数;因根据第一往返时延确定的当前网络状况较好,对于根据该第二算法确定的该第一拥塞窗口,以该第一拥塞窗口控制TCP数据包的发送能够使得:服务器在发送载有业务的所述TCP数据包时终端能够获得期望的所述目标吞吐率;另外,以根据第一算法确定的第一拥塞窗口控制TCP数据包的发送,也能在当前的网络状况下,尽量提高服务器提供业务时的吞吐率,使得终端在当前的网络状况下获得最大的吞吐率。
步骤B304,将所述第二拥塞窗口调整为所述第一拥塞窗口,在所述网络控制所述TCP数据包以所述第一拥塞窗口发送。
具体地,步骤B303以所述第一拥塞窗口替代第二拥塞窗口之后,步骤B304中服务器以所述第一拥塞窗口控制向终端发送的载有业务的所述TCP数据包,即在某个时刻,服务器最多向终端发送所述第一拥塞窗口中包含的所有TCP数据包(载有业务的所述TCP数据包)。
在本实施例中,确定服务器发送载有该业务的TCP数据包所需的目标吞吐率,并将该目标吞吐率写入TCP的协议栈。进而,服务器向终端发送该TCP数据包的TCP流中,如果该TCP数据包的发生丢失,将TCP的拥塞窗口调整为第二拥塞窗口;如果成功向终端发送第一轮的第二拥塞窗口中的TCP数据包(即收到终端对第一轮的第二拥塞窗口中的TCP数据包的确认响应),则确定与该第一轮的第二拥塞窗口对应的第一往返时延;从而,步骤B303能够根据该第一往返时延确定网络状况(发送该第一轮的第二拥塞窗口中TCP数据包的网络状况),在该网络状况下确定尽量满足目标吞吐率的所述第一拥塞窗口;从而在根据第一往返时延确定该网络状况之后,步骤B304能够一步到位地将第二拥塞窗口调整到尽量满足目标吞吐率的所述第一拥塞窗口,而不是根据现有算法(例如:Reno算法和CUBIC算法)渐进地对第二拥塞窗口进行增大直到增大到 第一拥塞窗口。
值得说明的是,本方法尤其适用于网络带宽大的场合,随着网络带宽的增大,相比于现有算法更加能够满足业务,并且还提高了网络带宽的利用率;详细对比如下:
Reno算法和CUBIC算法等现有的窗口调整算法在发生TCP数据包丢失时,将TCP的拥塞窗口大幅减小,例如Reno算法在TCP数据包丢失时将拥塞窗口减小一半,再例如CUBIC算法在TCP数据包丢失时将拥塞窗口减小至717/1024(减小了接近三分之一);但将拥塞窗口减小后,服务器采用现有的窗口调整算法调整拥塞窗口时,需经过一轮又一轮试探性地发送TCP数据包,每一轮发送TCP数据包之后,如果该轮发送的TCP数据包又有丢失的则再一次大幅度减小拥塞窗口,如果成功收到终端反馈的确认响应则增大一次拥塞窗口;但每一轮成功收到终端反馈的确认响应之后,现有算法都没有考虑业务期望获得的目标吞吐率,而是根据算法逐渐一轮又一轮地增大拥塞窗口,慢慢地达到当前网络状况下能够为业务提供的最大拥塞窗口(该第一拥塞窗口);
对应地,预先确定该业务期望获得的目标吞吐率。即使服务器向终端发送该TCP数据包的过程中发生该TCP数据包的丢失,并将TCP的拥塞窗口调整为第二拥塞窗口;如果成功向终端发送第一轮的第二拥塞窗口中的TCP数据包,则确定与该第一轮的第二拥塞窗口对应的第一往返时延,并根据该第一往返时延确定网络状况,再确定该网络状况下尽量满足目标吞吐率的所述第一拥塞窗口;将拥塞窗口一次性地从第二拥塞窗口增加到第一拥塞窗口以尽量满足该业务对应的目标吞吐率;其中,如果发送第一轮的第二拥塞窗口中的TCP数据包时的网络状况好,在网络带宽满足该目标吞吐率的情况下,以所述第一拥塞窗口发送TCP数据包能够满足该目标吞吐率;其中,即使网络状况不好,结合该第一往返时延和目标吞吐率确定的第一拥塞窗口,也是在网络状况下最大程度满足该业务的比特率的拥塞窗口。
可选地,本方法尤其适用于网络带宽大的无线网络;相对于有线网络,无线网络发生随机丢包的概率较大,每次发生随机丢包时,现有算法都会减小拥塞窗口,但每次拥塞窗口减小后只能缓慢地逐渐增加到第一拥塞窗口,恢复到能够正常向终端提供业务的时间延迟较长,不利于服务器正常向终端提供业务。相比之下,即使在无线网络中发生随机丢包,每次随机丢包之后,本发明实施例为支持该业务,经过两步便将发生丢包时的拥塞窗口调整为第一拥塞窗口,该两步包括:从随机丢包时的拥塞窗口调整到第二拥塞窗口,从第二拥塞窗口调整到第一拥塞窗口,能够有效利用网络带宽,尽可能及时地 支持业务。
图4为基于图2所示的传输控制协议TCP数据包的发送方法的一种可选工作流程，但为了便于描述，图4仅示出了与实施例相关的部分。
在本发明一实施例中,基于上述的本发明实施例和实施例,对在所述网络发送所述TCP数据包的过程中发生丢包时调整拥塞窗口做一可选细化,所述方法还包括步骤C401和步骤C402。
步骤C401,如果在发送所述TCP数据包的过程中发生丢包,则按照第三算法将所述TCP数据包传输的拥塞窗口调整为第二拥塞窗口,其中,所述第二拥塞窗口根据所述TCP数据包发生丢包时的第三往返时延确定,且所述TCP数据包传输的拥塞窗口调整为第二拥塞窗口的降低步幅与所述第三往返时延负相关。
具体地,对于如何识别所述TCP数据包已丢失的方法,详见上述步骤B301中的相应描述;例如,服务器向终端依次发送多个TCP数据包,TCP数据包到达终端的时间乱序,终端已收到排序在后的多个TCP数据包(例如,排序在后的三个TCP数据包)之后仍未接收到排序在前的某个TCP数据包,则终端会在每收到排序在后的每个TCP数据包时向服务器发送请求排序在先的该个TCP数据包的确认响应(ACK响应),服务器在连续多次(例如,三次)接收到该确认响应,则服务器判定该个排序在先的该个TCP数据包已丢失。需说明的是,本实施例在服务器发送该TCP数据包之后超过所述RTO(属于上述的预设时间)仍未收到终端对该TCP数据包的确认响应,则判定为该TCP数据包已丢失。
本实施例在发生所述TCP数据包丢失时,检测所述TCP数据包丢失时的RTT,将所述TCP数据包丢失时检测到的RTT作为第三往返时延,因此,第三往返时延反映了所述TCP数据包丢失时的网络状况。值得说明的是,如果所述TCP数据包是随机丢失的,检测到的第三往返时延等于网络轻载时的RTT。
步骤C401在发送数据包丢失时,基于第三往返时延根据第三算法确定所述第二拥塞窗口。值得说明的是,所述第三算法中所确定的第二拥塞窗口与所述第三往返时延负相关;具体地,第三往返时延作为第三算法的输入,随着第三往返时延的增大,根据第三算法确定的第二拥塞窗口会越小。在第三算法具有以上功能的基础上,本实施例对第三算法的具体实现形式或步骤均不做限定;如,可根据业务需要,确定在发生TCP数 据包丢失时需减少拥塞窗口的幅度,设计该第三算法;再如,可采用现有算法(例如:Reno算法或CUBIC算法)作为第三算法。
可选地,在步骤C401中,所述按照第三算法将所述TCP数据包传输的拥塞窗口调整为第二拥塞窗口,具体包括:
如果所述第三往返时延等于时延区间的下限,则将所述TCP数据包发生丢包时的拥塞窗口作为所述第二拥塞窗口,如果所述第三往返时延等于所述时延区间的上限,则将预设的拥塞窗口作为所述第二拥塞窗口,其中,所述时延区间的上限为在所述网络中所述TCP的超时重传(Retransmission Time Out,简称RTO),所述时延区间的下限为在所述网络轻载时的往返时延。其中,预设的拥塞窗口是指用户预先设定的拥塞窗口。
具体地，对于载有业务的TCP流，TCP协议针对该TCP流定义了一个RTO；可选地，该RTO可人为修改，或者该RTO根据当前网络的实验数据设定。本实施例将RTO确定为该时延区间的上界。所述服务器向终端发送载有业务的TCP数据包之后，超过该RTO仍未收到该终端对该TCP数据包的确认响应，也会判定为丢包。
另外，所述网络轻载时的往返时延，是指：在网络不存在网络拥塞（即处于网络轻载）的情况下，所述服务器向终端发送载有业务的TCP数据包之后，收到该终端对该TCP数据包的确认响应时计算的该TCP数据包的RTT。可选地，在具体实施中，所述服务器在该网络处于网络轻载时向终端发送载有业务的TCP数据包，检测每个TCP数据包的RTT，从检测的RTT中筛选最小的RTT；本实施例确定该时延区间的下界为：从检测的RTT中筛选出的最小RTT。
服务器以所述TCP的拥塞窗口控制向终端发送载有业务的TCP数据包的过程中发生TCP数据包丢失,如果发生该TCP数据包丢失时的第三往返时延等于所述时延区间的下界,代表网络状况良好,该TCP数据包的丢失仅是偶然因素(例如随机丢包),则不减小丢失TCP数据包时的拥塞窗口,即,将丢失TCP数据包时的拥塞窗口作为第二拥塞窗口,对丢失的TCP数据包重新发送即可。这样,在网络状况良好的情况下,如果仅是偶然丢包(例如随机丢包),则不需要减小拥塞窗口;相比于现有技术一旦检测到TCP数据包丢失就大幅度减小拥塞窗口,本实施例能够更加有效地利用网络带宽,尽量支持业务。
服务器以所述TCP的拥塞窗口控制向终端发送载有业务的TCP数据包的过程中发生所述TCP数据包丢失，如果检测到发送该TCP数据包丢失时的第三往返时延等于所述时延区间的上界，即所述第三往返时延等于RTO，代表网络已严重拥塞，则需要减小拥塞窗口，将TCP数据包丢失时的拥塞窗口减小至预设的拥塞窗口。需说明的是，第三往返时延达到RTO就判定为TCP数据包丢失，不再继续等待检测该TCP数据包的第三往返时延，因此第三往返时延最大也只能为RTO。
可选地,如果第三往返时延属于所述时延区间的下界与上界之间,则根据第三算法确定第二拥塞窗口大于该预设的拥塞窗口、且小于TCP数据包丢失时的拥塞窗口,并且随着第三往返时延的增大,根据第三算法确定的第二拥塞窗口越小。
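为便于理解，下面给出一段示意性的C语言草图，按上文描述的第三算法思路在时延区间内确定第二拥塞窗口；其中采用线性插值仅为一种假设的实现方式，函数名与变量名也均为假设：
/* rtt3：丢包时的第三往返时延；rtt_min：网络轻载时的往返时延（时延区间下界）；
   rto：超时重传RTO（时延区间上界）；cwnd_at_loss：丢包时的拥塞窗口；
   preset_cwnd：预设的拥塞窗口。窗口单位可取字节，仅作示意。 */
double cwnd_on_loss_alg3(double rtt3, double rtt_min, double rto,
                         double cwnd_at_loss, double preset_cwnd)
{
    if (rtt3 <= rtt_min)
        return cwnd_at_loss;          /* 第三往返时延等于下界：不缩小窗口（如随机丢包） */
    if (rtt3 >= rto)
        return preset_cwnd;           /* 第三往返时延等于上界：降到预设的拥塞窗口 */
    /* 介于两者之间：与第三往返时延负相关，rtt3越大，第二拥塞窗口越小 */
    double ratio = (rto - rtt3) / (rto - rtt_min);
    return preset_cwnd + ratio * (cwnd_at_loss - preset_cwnd);
}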
步骤C402,将所述TCP数据包以所述第二拥塞窗口进行发送。
具体地,在业务的TCP流中,如果发送所述TCP数据包丢失,步骤C401确定第二拥塞窗口,步骤C402以该第二拥塞窗口替换所述TCP数据包丢失时的拥塞窗口,以进行拥塞窗口的更新替换,再以所述第二拥塞窗口控制该业务的所述TCP数据包的发送。
本发明一实施例，基于上述的本发明实施例和实施例，对所述第一算法和所述第二算法作进一步可选细化，所述第二算法中所确定的拥塞窗口的增长步幅与所述目标吞吐率正相关、与所述第一往返时延负相关，所述第一算法中所确定的拥塞窗口的增长步幅与所述第一往返时延负相关。
具体地,如果反应网络状况的所述第一往返时延小于或等于所述第二往返时延,网络没有出现网络拥塞,代表TCP数据包丢失时的网络带宽能够满足目标吞吐率,可根据第二算法增大拥塞窗口;具体地,根据所述第二算法确定所述第一拥塞窗口时,随着所述目标吞吐率的增大,根据第二算法确定的第一拥塞窗口也越大,随着所述第一往返时延的增大,根据第二算法确定的第一拥塞窗口越小。可选地,所述第一往返时延小于或等于所述第二往返时延,代表没有出现网络拥塞,在根据第二算法确定所述第一拥塞窗口时目标吞吐率相比于第一往返时延的权重大,服务器以确定的第一拥塞窗口向终端发送载有业务的TCP数据包时能够达到该目标吞吐率,使得终端从该TCP数据包中解析出的业务所具有的比特率满足:该业务所需的比特率;服务器能够正常向终端提供该业务。
如果反映网络状况的所述第一往返时延大于所述第二往返时延，代表TCP数据包丢失时的网络带宽已不能满足目标吞吐率，网络出现网络拥塞；这时，采用第一算法确定第一拥塞窗口，第一算法根据第一往返时延确定第一拥塞窗口，随着所述第一往返时延的增大，根据第一算法确定的第一拥塞窗口越小。服务器以根据第一算法确定的第一拥塞窗口控制向终端发送的TCP数据包时，根据第一算法确定的该第一拥塞窗口提供的吞吐率不能够达到该目标吞吐率，仅能尽量减小与该目标吞吐率的差距，使得：终端从该TCP数据包中解析出的业务所具有的比特率不能满足该业务所需的比特率，但可以在当前网络状况下最大程度地支持该业务，减小与该目标吞吐率的差距，尽量支持服务器向终端提供该业务。
本发明一实施例，基于上述的本发明实施例和实施例，对所述第一算法作进一步可选细化，所述以第一算法确定的拥塞窗口作为第一拥塞窗口，具体包括：
若所述第一往返时延等于所述网络轻载的往返时延,将拥塞窗口的增长步幅取值为快速恢复值来获得所述第一拥塞窗口,所述快速恢复值与慢启动的数量级相同;
若所述第一往返时延等于所述网络中所述TCP的超时重传RTO,将拥塞窗口的增长步幅取值为1个最大报文段长度(Maximum Segment Size,简称MSS)来获得所述第一拥塞窗口;
若所述第一往返时延在所述网络轻载的往返时延和所述网络中所述TCP的超时重传RTO区间变化时,将拥塞窗口的增长步幅取值在1个MSS和所述快速恢复值之间且与所述第一往返时延负相关来获得所述第一拥塞窗口。
具体地,预先为拥塞窗口设定慢启动阈值(slow start threshold,简称ssthresh)。如果所述第一往返时延等于所述网络轻载的往返时延,代表网络状况良好,这时如果检测到所述第一往返时延时的拥塞窗口小于所述慢启动阈值,则在根据第一算法确定该拥塞窗口的增长步幅时,根据第一算法确定的快速恢复值(根据第一算法确定的增长步幅)与根据慢启动算法确定的增长步幅属于同一数量级,将检测到所述第一往返时延时的拥塞窗口增加该快速恢复值以得到所述第一拥塞窗口。需说明的是,本实施例对慢启动以及对应的慢启动算法均不做限定,可采用现有慢启动及其慢启动算法实现。
如果检测到的所述第一往返时延已达到所述RTO,代表所述第一往返时延反映的当前网络状况已出现严重拥塞,这时第一算法将拥塞窗口的增长步幅取值为1个MSS;这种情况下,即使成功发送一轮拥塞窗口的TCP数据包,也仅对当前的拥塞窗口增加1个MSS来获得所述第一拥塞窗口,以所述第一拥塞窗口更新替换当前的拥塞窗口。
如果检测到的所述第一往返时延属于所述网络轻载的往返时延和所述网络中所述TCP的超时重传RTO区间,第一算法相应地将拥塞窗口的增长步幅取值在1个MSS和所述快速恢复值之间,但根据第一算法确定的该增长步幅与所述第一往返时延是负相关的;这种情况下,即使成功发送一轮拥塞窗口的TCP数据包,对当前的拥塞窗口增加该增长步幅来获得所述第一拥塞窗口。
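为便于理解，下面给出一段示意性的C语言草图，按上文描述的第一算法思路根据第一往返时延确定拥塞窗口的增长步幅；其中采用线性插值仅为一种假设的实现方式，函数名与变量名均为假设：
/* rtt：第一往返时延；rtt_min：网络轻载时的往返时延；rto：超时重传RTO；
   mss：1个最大报文段长度；fast_recovery_step：快速恢复值（与慢启动同数量级） */
double growth_step_alg1(double rtt, double rtt_min, double rto,
                        double mss, double fast_recovery_step)
{
    if (rtt <= rtt_min)
        return fast_recovery_step;    /* 网络轻载：增长步幅取快速恢复值 */
    if (rtt >= rto)
        return mss;                   /* 严重拥塞：增长步幅仅取1个MSS */
    /* 介于两者之间：与第一往返时延负相关，rtt越大，步幅越小 */
    double ratio = (rto - rtt) / (rto - rtt_min);
    return mss + ratio * (fast_recovery_step - mss);
}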
本发明一实施例,基于上述的本发明实施例和实施例,对所述第二算法作进一步可选细化,所述以第二算法确定的拥塞窗口作为所述第一拥塞窗口,具体包括:
根据所述目标吞吐率和所述第一往返时延计算目标窗口,将所述拥塞窗口的增长步幅取值为所述目标窗口与在所述网络所述TCP的当前拥塞窗口的差值来确定所述第一拥塞窗口。
具体地，如果所述第一往返时延小于或等于所述第二往返时延，代表所述第一往返时延反映的网络状况良好，网络带宽大于或等于该目标吞吐率，可增大拥塞窗口来提高为该业务提供的吞吐率；为了一次性将当前拥塞窗口增大到能够在当前网络状况下提供目标吞吐率的目标窗口，将拥塞窗口的增长步幅取值为：所述目标窗口与在所述网络所述TCP的当前拥塞窗口的差值。这样，将检测所述第一往返时延时的拥塞窗口增加该增长步幅而得到的第一拥塞窗口直接等于目标窗口；从而能够在当前网络状况下直接以尽量满足业务的目标吞吐率的目标窗口控制TCP数据包的发送，尽最大可能地提供该业务。
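为便于理解，下面给出一段示意性的C语言草图，按上文描述的第二算法思路由目标吞吐率与第一往返时延计算目标窗口，并一步将当前拥塞窗口增大到该目标窗口；其中的单位换算以及对负步幅取零的处理均为假设，函数名与变量名也为假设：
/* target_throughput_bps：目标吞吐率（bit/s）；rtt_s：第一往返时延（秒）；
   cwnd_bytes：当前拥塞窗口（字节）。返回第一拥塞窗口（字节）。 */
double target_cwnd_alg2(double target_throughput_bps, double rtt_s,
                        double cwnd_bytes)
{
    double target_window = target_throughput_bps / 8.0 * rtt_s;  /* 目标窗口（字节） */
    double step = target_window - cwnd_bytes;   /* 增长步幅 = 目标窗口 - 当前拥塞窗口 */
    if (step < 0)
        step = 0;                               /* 假设：目标窗口不大于当前窗口时不增长 */
    return cwnd_bytes + step;                   /* 即第一拥塞窗口直接等于目标窗口 */
}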
本发明一实施例,基于上述的本发明实施例和实施例作进一步可选细化,所述目标吞吐率根据从所述TCP数据包的报文中解析出的所述业务的比特率确定。
具体以图1A为例，在服务器向终端提供业务的过程中，服务器可解析其向终端提供每个业务分别所需的比特率，再根据解析到的比特率计算出服务器向终端提供每个业务分别所需的该目标吞吐率，对于解析的具体实现方式，本发明实施例不做限定。举例说明，通常每个业务都有标准的对应比特率，根据该比特率对应地确定发送载有该业务的TCP数据包所需的目标吞吐率，与该业务对应的目标吞吐率的值大于该业务的比特率的值，这样，服务器以该目标吞吐率向终端提供业务时，终端从该TCP数据包中解析出的比特率满足该业务所需的比特率，即终端解析出的业务的比特率大于或等于与该业务对应的标准比特率。
可选地,所述目标吞吐率根据从所述TCP数据包的报文中解析出的所述业务的比特率确定的算法为:所述目标吞吐率等于所述业务的比特率乘以扩大系数,其中,所述扩大系数大于1。
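举例说明（数值仅为假设，用于说明计算方式）：若某视频业务的比特率为2 Mbit/s，扩大系数取1.25，则目标吞吐率为2 Mbit/s×1.25=2.5 Mbit/s。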
图5为更新目标吞吐率的一种工作流程,但为了便于描述,图5仅示出了与实施例相关的部分。
本发明一实施例,基于上述的本发明实施例和实施例,从根据当前的网络状况更新目标吞吐率的角度作进一步可选细化,参见图5,所述方法还包括步骤D501、步骤D502、步骤D503。
步骤D501,检测在所述网络发送所述TCP数据包的实际吞吐率;
步骤D502,如果所述实际吞吐率与所述目标吞吐率的比值大于第一阈值、且所述网络发送所述TCP数据包的第四往返时延与网络轻载时检测到的往返时延的差值小于第二阈值,则增大所述目标吞吐率;
步骤D503,如果所述实际吞吐率与所述目标吞吐率的比值小于第三阈值、且所述第四往返时延与所述网络轻载时检测到的往返时延的差值大于第四阈值,则减小所述目标吞吐率。
需说明的是，所述第三阈值小于所述第一阈值，所述第二阈值小于所述第四阈值。
具体地,在所述网络发送该业务的TCP数据包的过程中,可根据该目标吞吐率调整与该业务对应的TCP流的拥塞窗口,使得以调整出的拥塞窗口发送该业务的TCP数据包时能够尽可能满足该业务所需的该目标吞吐率。但上述实施例在确定所述目标吞吐率时仅引入了业务的比特率这一参数,并未考虑网络状况这一因素,本实施例在确定目标吞吐率时不但考虑业务的比特率,还考虑当前的网络状况。
具体地,在所述网络发送所述TCP数据包的过程中,检测当前发送TCP数据包的RTT,将当前检测到的RTT作为第四往返时延,检测当前在所述网络发送所述TCP数据包的吞吐率,该当前检测到的吞吐率为所述实际吞吐率。其中,决定该实际吞吐率的因素包括网络状况、发送端的吞吐率和接收端的吞吐率;以图1A所示的系统为例,决定该实际吞吐率的因素包括网络103的网络状况、服务器101的吞吐率和终端102的吞 吐率;以图1B所示的系统为例,决定该实际吞吐率的因素包括网络203的网络状况、第一代理设备204的吞吐率和终端202的吞吐率;以图1C所示的系统为例,决定该实际吞吐率的因素包括网络303的网络状况、第一代理设备304的吞吐率和第二代理设备305的吞吐率。另值得说明的是,本实施例对同时检测第四往返时延和实际吞吐率的频次和时间不做限定,如可每间隔一定时间同时检测一次第四往返时延和实际吞吐率。
计算所述实际吞吐率与所述目标吞吐率的比值,计算所述第四往返时延与基准往返时延(即在所述网络处于网络轻载的网络状况下发送所述TCP数据包时检测到的RTT)的差值。
如果该比值大于第一阈值、并且该差值小于第二阈值,代表网络状况良好,网络的物理带宽大于目标吞吐率,可执行步骤D502增大所述目标吞吐率,并在所述TCP的协议栈中更新所述目标吞吐率的赋值;可选地,第一阈值为接近于1的值,例如第一阈值为90%;可选地,第二阈值为接近于0的值;可选地,保证执行步骤D502增大后的所述目标吞吐率小于网络带宽的情况下,对于步骤D502增大所述目标吞吐率的步长大小不做限定。
如果该比值小于第三阈值、并且该差值大于第四阈值，代表网络状况差，网络的物理带宽小于目标吞吐率，可执行步骤D503减小所述目标吞吐率，并在所述TCP的协议栈中更新所述目标吞吐率的赋值；可选地，第三阈值为小于或等于50%的值，例如第三阈值为20%；可选地，第四阈值为较大的延迟值；可选地，在保证执行步骤D503减小后的所述目标吞吐率小于网络带宽的情况下，对于步骤D503减小所述目标吞吐率的步长大小不做限定。
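为便于理解，下面给出一段示意性的C语言草图，概括步骤D501至步骤D503的调整逻辑；其中各阈值与调整比例的取值均为假设，仅作说明：
/* actual：实际吞吐率；target：当前目标吞吐率；
   rtt4：第四往返时延；rtt_base：网络轻载时检测到的往返时延。时间单位取秒，仅作示意。 */
double adjust_target_throughput(double actual, double target,
                                double rtt4, double rtt_base)
{
    const double th1 = 0.9,  th3 = 0.2;     /* 第一、第三阈值（比值），假设取值 */
    const double th2 = 0.005, th4 = 0.05;   /* 第二、第四阈值（时延差，秒），假设取值 */
    const double step = 0.1;                /* 假设的调整比例 */

    if (actual / target > th1 && rtt4 - rtt_base < th2)
        return target * (1.0 + step);       /* 步骤D502：网络状况好，增大目标吞吐率 */
    if (actual / target < th3 && rtt4 - rtt_base > th4)
        return target * (1.0 - step);       /* 步骤D503：网络状况差，减小目标吞吐率 */
    return target;                          /* 其余情况保持不变 */
}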
本实施例在根据业务所需的比特率确定发送该业务的TCP数据包所需的目标吞吐率的基础上,还考虑当前的网络状况,实时根据当前的网络状况调整协议栈中的目标吞吐率的值,从而能够避免因根据过大的目标吞吐率确定的拥塞窗口增大TCP数据包丢失的概率,还能够将过小的目标吞吐率增大以增大拥塞窗口,有效利用网络的物理带宽。
作为本发明一实施例，基于上述的本发明实施例和实施例，作为一可选实施方式，所述目标吞吐率通过目标吞吐率参数传递到TCP协议栈中。具体地，对于图1A提供的系统，在图1A中的服务器101的TCP协议栈中添加目标吞吐率参数，并对该目标吞吐率参数赋值为该目标吞吐率。对于图1B提供的系统，在第一代理设备204的TCP协议栈中添加目标吞吐率参数，并对该目标吞吐率参数赋值为该目标吞吐率。对于图1C提供的系统，在第一代理设备304的TCP协议栈中添加目标吞吐率参数，并对该目标吞吐率参数赋值为该目标吞吐率。
需说明的是，上述发明实施例和实施例都是基于图1A提供的系统为例进行的描述，其中上述发明实施例和实施例适用于图1A中的服务器101；须知的是，对于图1B提供的系统，上述发明实施例和实施例适用于第一代理设备204；须知的是，对于图1C提供的系统，上述发明实施例和实施例适用于第一代理设备304。
作为本发明一实施例,本实施例在基于所述方法应用于图1A中服务器101的基础上,对上述的发明实施例和实施例作进一步可选细化;所述方法包括:
服务器从所述TCP数据包的报文中解析出所述业务的比特率,将所述比特率乘以扩大系数得到所述目标吞吐率。
具体地,服务器向终端提供业务的过程中,对于其向终端发送的载有该业务的TCP数据包,服务器在该TCP数据包包括的报文中确定字段,在该字段记录该业务所需的比特率,如果终端从该TCP数据包解析出的业务能够满足该业务所需的比特率,则代表服务器正常地成功向该终端提供了该业务。
需说明的是，因是在TCP数据包包括的报文中添加该业务，具体是在该报文中添加与该业务相关的数据，因此从数值上看，根据所述比特率和扩大系数算出的目标吞吐率的值大于该业务所需的比特率的值。
在本发明一实施例中,如果图1A中服务器101不支持对其上的TCP的协议栈修改(包括不支持将目标吞吐率添写到该TCP的协议栈),则图1B中采用第一代理设备204代理该服务器201,该第一代理设备204支持对其上的TCP的协议栈修改(包括支持将目标吞吐率添写到该TCP的协议栈)。如图1B所示,在网络中添加第一代理设备204,由该第一代理设备204代理服务器201向终端202发送载有业务的TCP数据包;这时,在图1B中,上述基于所述方法应用于图1A中服务器101为例来描述的发明实施例和实施例,不再应用于图1B中的服务器201,而是应用于图1B中的第一代理设备204。
将以所述方法应用于图1A中服务器101为例来描述的发明实施例和实施例应用于图1B中的第一代理设备204时,除了由终端202计算目标吞吐率这一区别之外,其他 步骤中的服务器替换为第一代理设备204之后,上述发明实施例和实施例即可应用于第一代理设备204;区别具体详述如下:
上述发明实施例和实施方式提供TCP数据包的发送方法应用于第一代理设备,所述第一代理设备代理服务器向终端发送所述TCP数据包;所述目标吞吐率由所述终端确定,确定的方法包括:从所述TCP数据包的报文中解析出所述业务的比特率,将所述比特率乘以扩大系数得到所述目标吞吐率;第一代理设备从所述终端获取所述目标吞吐率。
具体地,服务器具有不支持修改的TCP的协议栈,第一代理设备具有支持修改的TCP的协议栈,终端具有TCP的协议栈。服务器可根据其的所述TCP的协议栈生成载有业务的TCP数据包,在服务器与第一代理设备之间基于TCP/IP协议进行TCP数据包的交互。第一代理设备对于其从服务器接收到的载有业务的TCP数据包,基于其TCP的协议栈修改该TCP数据包(可不修改该TCP数据包包括的报文所载有的业务),将修改后的TCP数据包向终端发送,终端基于其TCP的协议栈对该TCP数据包进行解析,解析出载有该业务的报文,在从该报文中解析出与该业务相关的数据。
其中,该报文还记载有该业务所需的比特率,终端从该TCP数据包中解析出报文之后,可从该报文中解析出该业务所需的比特率,将所述比特率乘以扩大系数得到所述目标吞吐率;
然后,终端对TCP的套接口进行修改,增添目标吞吐率参数这一参数,并以该目标吞吐率对该目标吞吐率参数赋值,通过该套接口将该目标吞吐率参数及其赋值(该目标吞吐率)传输至终端的TCP的协议栈;举例说明,在“setsockopt()”这一套接口函数中添加“target_throughput”这一目标吞吐率参数,根据所述比特率和扩大系数计算得到所述目标吞吐率之后,以计算出的目标吞吐率对该“target_throughput”赋值,通过“setsockopt()”将“target_throughput”及其赋值传输至终端的TCP的协议栈;
终端生成一报文，对该报文的TCP的选项进行扩展，在该TCP的选项中添加目标吞吐率参数及其赋值（套接口传输至TCP的协议栈的目标吞吐率参数及其赋值）；终端向第一代理设备发送TCP数据包（该TCP数据包包括具有该TCP的选项（具有该目标吞吐率参数及其赋值）的该报文），第一代理设备对该TCP数据包解析后，获取该报文中该TCP的选项包括的该目标吞吐率参数及其赋值；进而，第一代理设备可在其TCP的协议栈中添加该目标吞吐率参数及其赋值。
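作为一段示意性的C语言草图（仅为说明性假设），下面给出在TCP的选项中携带目标吞吐率参数的一种可能字段布局；其中选项类型值取实验用途的253以及字段划分均为假设，并非本发明限定的格式：
#include <stdint.h>

#pragma pack(push, 1)              /* 按1字节对齐，保证选项字段紧凑排列 */
struct tcp_opt_target_throughput {
    uint8_t  kind;                 /* 选项类型，此处假设取实验用途的选项值253 */
    uint8_t  length;               /* 选项总长度：6字节 */
    uint32_t target_throughput;    /* 目标吞吐率，网络字节序，单位bit/s */
};
#pragma pack(pop)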
可选地，如果终端的TCP的协议栈支持修改，本实施例可以在通过套接口接收到该目标吞吐率参数及其赋值之后，将该目标吞吐率参数及其赋值添写到终端的所述TCP的协议栈中；当然，在通过套接口接收到该目标吞吐率参数及其赋值之后，本实施例也可以不在终端的TCP的协议栈中添加该目标吞吐率参数及其赋值，而直接生成报文（该报文的TCP的选项中添加该目标吞吐率参数及其赋值），终端向第一代理设备发送载有该报文的TCP数据包，使得第一代理设备将该目标吞吐率参数及其赋值添加到其的TCP的协议栈中。
作为本发明一实施例,对于上述基于所述方法应用于图1A中服务器101为例来描述的发明实施例和实施例作可选优化,如果图1A中服务器101不支持对其上的TCP的协议栈修改(包括不支持将目标吞吐率添写到该TCP的协议栈),则图1C中采用第一代理设备304代理该服务器101,该第一代理设备304支持对其上的TCP的协议栈修改(包括即支持将目标吞吐率添写到该TCP的协议栈);另外,还可以在网络中添加第二代理设备305,如图1C,由该第二代理设备305代理终端302与第一代理设备304进行TCP数据包的交互。添加第二代理设备305的一个原因是,第一代理设备304与第二代理设备305共享TCP的协议栈,包括第二代理设备305将载有目标吞吐率参数及其赋值的报文以TCP数据包的方法发送至第一代理设备304;添加第二代理设备305的又一个原因是,多个终端302由同一第二代理设备305代理,由该第二代理设备305代理每个终端302与第一代理设备304进行TCP数据包的交互。
这时，在图1C中，上述基于所述方法应用于图1A中服务器101为例来描述的发明实施例和实施例，不再应用于图1C中的服务器301，而是应用于图1C中的第一代理设备304；将以所述方法应用于图1A中服务器101为例来描述的发明实施例和实施例应用于图1C中的第一代理设备304时，除了由终端302计算目标吞吐率这一区别之外，其他步骤中的服务器替换为第一代理设备304之后，上述发明实施例和实施例即可应用于第一代理设备304；区别具体详述如下：
上述发明实施例和实施方式提供TCP数据包的发送方法应用于第一代理设备;所述第一代理设备代理服务器向第二代理设备发送所述TCP数据包,以由所述第二代理设备将所述TCP数据包转发至终端。然后,所述终端确定所述目标吞吐率并发送所述目标吞吐率至所述第二代理设备,确定所述目标吞吐率的方法包括:从所述TCP数据 包的报文中解析出所述业务的比特率,将所述比特率乘以扩大系数得到所述目标吞吐率。再然后,第一代理设备从所述第二代理设备获取所述目标吞吐率。
具体地,服务器具有不支持修改的TCP的协议栈,第一代理设备具有支持修改的TCP的协议栈,第二代理设备具有TCP的协议栈,终端具有TCP的协议栈。服务器可根据其的所述TCP的协议栈生成载有业务的TCP数据包,在服务器与第一代理设备之间基于TCP/IP协议进行TCP数据包的交互。第一代理设备对于其从服务器接收到的载有业务的TCP数据包,基于其TCP的协议栈修改该TCP数据包(可不修改该TCP数据包包括的报文所载有业务),将修改后的TCP数据包向第二代理设备发送;第二代理设备基于其TCP的协议栈对该TCP数据包进行再次修改,将再次修改的TCP数据包发送至终端,终端对该再次修改的TCP数据包解析,解析出载有该业务的报文,在从该报文中解析出与该业务相关的数据。
其中,该报文还记载有该业务所需的比特率,终端从该TCP数据包中解析出报文之后,可从该报文中解析出该业务所需的比特率,将所述比特率乘以扩大系数得到所述目标吞吐率;
然后，终端对TCP的套接口进行修改，增添目标吞吐率参数，并以根据业务所需的比特率计算出的该目标吞吐率对该目标吞吐率参数赋值，通过该套接口将该目标吞吐率参数及其赋值（该目标吞吐率）传输至终端的TCP的协议栈；举例说明，在“setsockopt()”这一套接口函数中添加“target_throughput”这一目标吞吐率参数，待终端根据业务所需的比特率计算出目标吞吐率之后，以计算出的目标吞吐率对该“target_throughput”赋值，通过“setsockopt()”将“target_throughput”及其赋值传输至终端的TCP的协议栈；
终端生成第一报文,对该第一报文的TCP的选项进行扩展,在该TCP的选项中添加目标吞吐率参数及其赋值(通过套接口传输至TCP的协议栈的目标吞吐率参数及其赋值);终端通过向第二代理设备发送TCP数据包(该TCP数据包包括具有该TCP的选项(具有该目标吞吐率参数及其赋值)的该第一报文);
第二代理设备可对该TCP数据包解析后，获取该第一报文中该TCP的选项包括的该目标吞吐率参数及其赋值；第二代理设备生成第二报文，对该第二报文的TCP的选项进行扩展，在该TCP的选项中添加目标吞吐率参数及其赋值（第一报文中的目标吞吐率参数及其赋值）；第二代理设备向第一代理设备发送TCP数据包（该TCP数据包包括具有该TCP的选项（具有该目标吞吐率参数及其赋值）的该第二报文）；
第一代理设备对从第二代理设备接收的TCP数据包进行解析,解析出第二报文;再从第二报文中获取该目标吞吐率参数及其赋值,在第一代理设备的TCP的协议栈中添加该目标吞吐率参数及其赋值;
可选地,第二代理设备从第一报文获取到目标吞吐率参数及其赋值之后,可以将该目标吞吐率参数及其赋值添加到第二代理设备的TCP的协议栈中;当然,第二代理设备从第一报文获取到目标吞吐率参数及其赋值之后,本实施例可以不修改第二代理设备的TCP的协议栈(即在第二代理设备的TCP的协议栈中添加该目标吞吐率参数及其赋值),而直接生成第二报文(该第二报文的TCP的选项中添加该目标吞吐率参数及其赋值),第二代理设备向第一代理设备发送载有该报文的TCP数据包,使得第一代理设备将该目标吞吐率参数及其赋值添加到其的TCP的协议栈中。
值得说明的是,添加第二代理设备,因第一代理设备与第二代理设备的路由路径是确定的,所以在网络处于轻载时,TCP数据包从第一代理设备到第二代理设备的RTT是基本不变的。从而,当第二代理设备代理一个多个终端时,在已确定第二代理设备与第一代理设备之间已有的TCP流在所述网络轻载时的RTT,确定与该已有的TCP流对应的时延区间的下界(已有的TCP流在所述网络轻载时的RTT);如果在第二代理设备与第一代理设备之间新添加某个业务的TCP流,不需要经过一轮拥塞窗口的RTT,筛选该轮拥塞窗口的RTT中的最小的RTT作为新添的该条TCP流对应的时延区间的下界,而是直接将与该已有的TCP流对应的时延区间的下界作为:与新添的该个业务的TCP流对应的时延区间的下界;即,直接采用已有的TCP流在所述网络轻载时的RTT作为:新添的该个业务的TCP流在所述网络轻载时的RTT。
作为本发明一实施例,基于上述发明实施例和实施例,无论所述方法应用于图1A中的服务器101或者应用于图1B的第一代理设备204或者图1C中的第一代理设备304,都会基于业务的比特率计算目标吞吐率(将所述比特率乘以扩大系数得到所述目标吞吐率),本实施例对所述业务做如下细化:
如果所述业务为音频业务,则所述业务的比特率为所述音频业务的音频比特率;
如果所述业务为视频业务,则所述业务的比特率为所述视频业务的视频码率;
如果所述业务为音视频业务,则所述业务的比特率为所述音视频业务的音视频比特 率。
具体地,服务器向终端提供业务时,会生成载有该业务的报文;同时,在该报文中确定一字段,在该字段中记录该业务所需的比特率。其中,如果所述业务为音频业务,则在报文的该字段中记录该音频业务所需的音频比特率;其中,如果所述业务为视频业务,则在报文的该字段中记录所述视频业务所需的视频码率;其中,如果所述业务为音视频业务,则在报文的该字段中记录该音视频业务所需的音视频比特率。
作为本发明一实施例,图6是依据本发明一实施例的传输控制协议TCP数据包的发送装置600的逻辑结构示意图,如图6所示,所述装置600至少包括:时延确定单元601、窗口调整单元602和数据包发送单元603。
时延确定单元601,用于获取在网络中发送TCP数据包的第一往返时延,确定第二往返时延,所述第二往返时延为按照第一算法确定的拥塞窗口与按照第二算法确定的拥塞窗口具有同等大小时的往返时延,其中,所述第一算法根据所述第一往返时延确定拥塞窗口的增长步幅,所述第二算法根据所述第一往返时延和目标吞吐率确定拥塞窗口的增长步幅,所述目标吞吐率为所述TCP数据包对应的业务所期望获得的吞吐率;
窗口调整单元602,用于如果所述第一往返时延大于所述第二往返时延,则以第一算法确定的拥塞窗口作为第一拥塞窗口,如果所述第一往返时延小于或等于所述第二往返时延,则以第二算法确定的拥塞窗口作为所述第一拥塞窗口;
数据包发送单元603,用于将所述TCP数据包以所述第一拥塞窗口进行发送。
可选地,所述第二算法中所确定的拥塞窗口的增长步幅与所述目标吞吐率正相关、与所述第一往返时延负相关,所述第一算法中所确定的拥塞窗口的增长步幅与所述第一往返时延负相关。
可选地,所述目标吞吐率根据从所述TCP数据包的报文中解析出的所述业务的比特率确定。
可选地,所述目标吞吐率根据从所述TCP数据包的报文中解析出的所述业务的比特率确定的算法为:所述目标吞吐率等于所述业务的比特率乘以扩大系数,其中,所述扩大系数大于1。
可选地，所述窗口调整单元602，还用于如果在发送所述TCP数据包的过程中发生丢包，则按照第三算法将所述TCP数据包传输的拥塞窗口调整为第二拥塞窗口，其中，所述第二拥塞窗口根据所述TCP数据包发生丢包时的第三往返时延确定，且所述TCP数据包传输的拥塞窗口调整为第二拥塞窗口的降低步幅与所述第三往返时延负相关；
所述数据包发送单元603,还用于将所述TCP数据包以所述第二拥塞窗口进行发送。
可选地,所述窗口调整单元602,还用于按照第三算法将所述TCP数据包传输的拥塞窗口调整为第二拥塞窗口,具体为:
所述窗口调整单元602,用于如果所述第三往返时延等于时延区间的下限,则将所述TCP数据包发生丢包时的拥塞窗口作为所述第二拥塞窗口,如果所述第三往返时延等于所述时延区间的上限,则将预设的拥塞窗口作为所述第二拥塞窗口,其中,所述时延区间的上限为在所述网络中所述TCP的超时重传RTO,所述时延区间的下限为在所述网络轻载时的往返时延。
可选地,所述窗口调整单元602,用于以第一算法确定的拥塞窗口作为第一拥塞窗口,具体包括:
所述窗口调整单元602,用于若所述第一往返时延等于所述网络轻载的往返时延,将拥塞窗口的增长步幅取值为快速恢复值来获得所述第一拥塞窗口,所述快速恢复值与慢启动的数量级相同;
所述窗口调整单元602,用于若所述第一往返时延等于所述网络中所述TCP的超时重传RTO,将拥塞窗口的增长步幅取值为1个最大报文段长度MSS来获得所述第一拥塞窗口;
所述窗口调整单元602,用于若所述第一往返时延在所述网络轻载的往返时延和所述网络中所述TCP的超时重传RTO区间变化时,将拥塞窗口的增长步幅取值在1个MSS和所述快速恢复值之间且与所述第一往返时延负相关来获得所述第一拥塞窗口。
可选地,所述窗口调整单元602,用于以第二算法确定的拥塞窗口作为所述第一拥塞窗口,具体为:
所述窗口调整单元602,用于根据所述目标吞吐率和所述第一往返时延计算目标窗口,将所述拥塞窗口的增长步幅取值为所述目标窗口与在所述网络所述TCP的当前拥塞窗口的差值来确定所述第一拥塞窗口。
可选地，从根据网络状况更新目标吞吐率的角度对装置作一可选优化，参见图7，所述装置还包括吞吐率检测单元604和目标吞吐率调整单元605。
吞吐率检测单元604,用于检测在所述网络发送所述TCP数据包的实际吞吐率;
目标吞吐率调整单元605,用于如果所述实际吞吐率与所述目标吞吐率的比值大于第一阈值、且所述网络发送所述TCP数据包的第四往返时延与网络轻载时检测到的往返时延的差值小于第二阈值,则增大所述目标吞吐率,如果所述实际吞吐率与所述目标吞吐率的比值小于第三阈值、且所述第四往返时延与所述网络轻载时检测到的往返时延的差值大于第四阈值,则减小所述目标吞吐率。
可选地,所述目标吞吐率通过目标吞吐率参数传递到TCP协议栈中。
作为本发明一实施例，图8是本实施例提供的传输控制协议TCP数据包的发送装置800的硬件结构示意图，示出了所述发送装置800的一种硬件结构。如图8所示，所述发送装置800包括处理器801、存储器802和网络接口804，所述处理器801分别与所述存储器802和所述网络接口804通过总线803连接；所述发送装置800通过所述网络接口804接入所述网络103以发送/接收所述TCP数据包；
所述存储器802用于存储计算机执行指令，当所述发送装置800运行时，所述处理器801读取所述存储器802存储的所述计算机执行指令，以执行应用于所述发送装置800的上述发明实施例和实施例提供的传输控制协议TCP数据包的发送方法。
其中,处理器801可以采用通用的中央处理器(Central Processing Unit,CPU),微处理器,应用专用集成电路(Application Specific Integrated Circuit,ASIC),或者一个或多个集成电路,用于执行相关程序,以实现本发明实施例所提供的技术方案,包括执行上述发明实施例和实施例提供的传输控制协议TCP数据包的发送方法。
其中,存储器802可以是只读存储器(Read Only Memory,ROM),静态存储设备,动态存储设备或者随机存取存储器(Random Access Memory,RAM)。存储器802可以存储操作系统和其他应用程序。在通过软件或者固件来实现本发明实施例提供的技术方案时,用于实现本发明实施例提供的技术方案的程序代码保存在存储器802中,包括将应用于所述发送装置800的上述发明实施例和实施例提供的传输控制协议TCP数据包的发送方法的程序代码保存在存储器802中,并由处理器801来执行。
其中,网络接口804使用例如但不限于收发器一类的收发装置,来实现所述发送装置800与其他设备或通信网络之间的网络通信;可选地,网络接口804可以是用于接入网络的各种接口,如用于接入以太网的以太网接口,该以太网接口包括但不限于RJ-45接口、RJ-11接口、SC光纤接口、FDDI接口、AUI接口、BNC接口和Console接口等。
其中,总线803可包括一通路,用于在所述发送装置800中各个部件(例如处理器801、存储器802和网络接口804)之间传送信息。
可选地,所述发送装置800还包括输入/输出接口805,输入/输出接口805用于接收输入的数据和信息,输出操作结果等数据。
应注意,尽管图8所示的所述发送装置800仅仅示出了处理器801、存储器802、网络接口804以及总线803,但是在具体实现过程中,本领域的技术人员应当明白,所述发送装置800还包含实现正常运行所必须的其他器件。同时,根据具体需要,本领域的技术人员应当明白,所述发送装置800还可包含实现其他附加功能的硬件器件。此外,本领域的技术人员应当明白,所述发送装置800也可仅仅包含实现本发明实施例所必须的器件,而不必包含图8中所示的全部器件。
作为本发明一实施例,提供一种系统100,参见图1A,所述系统100包括服务器101和终端102,所述服务器101通过网络103与所述终端102通信连接,所述服务器101为上述的传输控制协议TCP数据包的发送装置800,通过所述网络103向所述终端102发送TCP数据包。
作为本发明一实施例,提供一种系统200,参见图1B,所述系统200包括服务器201、第一代理设备204和终端202,所述第一代理设备204分别与所述服务器201和所述终端202通信连接;所述服务器201,用于经所述第一代理设备204代理向所述终端202发送TCP数据包;
所述第一代理设备204为上述的传输控制协议TCP数据包的发送装置800,用于接收所述服务器201向所述终端202发送的所述TCP数据包,并代理所述服务器201向所述终端202发送所述TCP数据包。
可选地,所述终端202,用于将目标吞吐率通过目标吞吐率参数由所述终端202的TCP协议栈传递给所述第一代理设备204的TCP协议栈。
作为本发明一实施例，提供一种系统300，参见图1C，所述系统300包括服务器301、第一代理设备304、第二代理设备305和终端302，所述第一代理设备304分别与服务器301和第二代理设备305通信连接；所述服务器301，用于经所述第一代理设备304代理向所述终端302发送TCP数据包；
所述第一代理设备304为上述的传输控制协议TCP数据包的发送装置800,用于接收所述服务器301向所述终端302发送的所述TCP数据包,并代理所述服务器301向所述第二代理设备305发送所述TCP数据包;
所述第二代理设备305,用于接收所述TCP数据包并转发至所述终端302。
可选地,所述终端302,用于将目标吞吐率通过目标吞吐率参数由所述终端302的TCP协议栈中传递给所述第二代理设备305的TCP协议栈中;
所述第二代理设备305,还用于将所述目标吞吐率通过所述目标吞吐率参数由所述第二代理设备305的TCP协议栈中传递给所述第一代理设备304的TCP协议栈中。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,设备,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块和单元的划分,仅仅为一种逻辑功能划分,实现时可以有另外的划分方式,例如多个模块或单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或模块的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用硬件加软件功能模块的形式实现。
上述以软件功能模块的形式实现集成的模块,可以存储在一个计算机可读取存储介质中。上述软件功能模块存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的部分步骤。而前述的存储介质包括:移动硬盘、只读存储器(英文:Read-Only Memory,简称ROM)、随机存取存储器(英文:Random Access Memory,简称RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是：以上实施例仅用以说明本发明的技术方案，而非对其限制；尽管参照前述实施例对本发明进行了详细的说明，本领域的普通技术人员应当理解：其依然可以对前述各实施例所记载的技术方案进行修改，或者对其中部分技术特征进行等同替换；而这些修改或者替换，并不使相应技术方案的本质脱离本发明各实施例技术方案的保护范围。

Claims (26)

  1. 一种传输控制协议TCP数据包的发送方法,其特征在于,所述方法包括:
    获取在网络中发送TCP数据包的第一往返时延;
    确定第二往返时延,所述第二往返时延为按照第一算法确定的拥塞窗口与按照第二算法确定的拥塞窗口具有同等大小时的往返时延,其中,所述第一算法根据所述第一往返时延确定拥塞窗口的增长步幅,所述第二算法根据所述第一往返时延和目标吞吐率确定拥塞窗口的增长步幅,所述目标吞吐率为所述TCP数据包对应的业务所期望获得的吞吐率;
    如果所述第一往返时延大于所述第二往返时延,则以第一算法确定的拥塞窗口作为第一拥塞窗口;
    如果所述第一往返时延小于或等于所述第二往返时延,则以第二算法确定的拥塞窗口作为所述第一拥塞窗口;
    将所述TCP数据包以所述第一拥塞窗口进行发送。
  2. 根据权利要求1所述的方法,其特征在于,所述第二算法中所确定的拥塞窗口的增长步幅与所述目标吞吐率正相关、与所述第一往返时延负相关,所述第一算法中所确定的拥塞窗口的增长步幅与所述第一往返时延负相关。
  3. 根据权利要求1或2所述的方法,其特征在于,所述目标吞吐率根据从所述TCP数据包的报文中解析出的所述业务的比特率确定。
  4. 根据权利要求3所述的方法,其特征在于,所述目标吞吐率根据从所述TCP数据包的报文中解析出的所述业务的比特率确定的算法为:所述目标吞吐率等于所述业务的比特率乘以扩大系数,其中,所述扩大系数大于1。
  5. 根据权利要求1至4任一项所述的方法,其特征在于,所述方法还包括:
    如果在发送所述TCP数据包的过程中发生丢包，则按照第三算法将所述TCP数据包传输的拥塞窗口调整为第二拥塞窗口，其中，所述第二拥塞窗口根据所述TCP数据包发生丢包时的第三往返时延确定，且所述TCP数据包传输的拥塞窗口调整为第二拥塞窗口的降低步幅与所述第三往返时延负相关；
    将所述TCP数据包以所述第二拥塞窗口进行发送。
  6. 根据权利要求5所述的方法,其特征在于,所述按照第三算法将所述TCP数据包传输的拥塞窗口调整为第二拥塞窗口,具体包括:
    如果所述第三往返时延等于时延区间的下限,则将所述TCP数据包发生丢包时的拥塞窗口作为所述第二拥塞窗口,如果所述第三往返时延等于所述时延区间的上限,则将预设的拥塞窗口作为所述第二拥塞窗口,其中,所述时延区间的上限为在所述网络中所述TCP的超时重传RTO,所述时延区间的下限为在所述网络轻载时的往返时延。
  7. 根据权利要求1至6任一项所述的方法,其特征在于,所述以第一算法确定的拥塞窗口作为第一拥塞窗口,具体包括:
    若所述第一往返时延等于所述网络轻载的往返时延,将拥塞窗口的增长步幅取值为快速恢复值来获得所述第一拥塞窗口,所述快速恢复值与慢启动的数量级相同;
    若所述第一往返时延等于所述网络中所述TCP的超时重传RTO,将拥塞窗口的增长步幅取值为1个最大报文段长度MSS来获得所述第一拥塞窗口;
    若所述第一往返时延在所述网络轻载的往返时延和所述网络中所述TCP的超时重传RTO区间变化时,将拥塞窗口的增长步幅取值在1个MSS和所述快速恢复值之间且与所述第一往返时延负相关来获得所述第一拥塞窗口。
  8. 根据权利要求1至7任一项所述的方法,其特征在于,所述以第二算法确定的拥塞窗口作为所述第一拥塞窗口,具体包括:
    根据所述目标吞吐率和所述第一往返时延计算目标窗口,将所述拥塞窗口的增长步幅取值为所述目标窗口与在所述网络所述TCP的当前拥塞窗口的差值来确定所述第一拥塞窗口。
  9. 根据权利要求1至8任一项所述的方法,其特征在于,所述方法还包括:
    检测在所述网络发送所述TCP数据包的实际吞吐率;
    如果所述实际吞吐率与所述目标吞吐率的比值大于第一阈值、且所述网络发送所述TCP数据包的第四往返时延与网络轻载时检测到的往返时延的差值小于第二阈值，则增大所述目标吞吐率；
    如果所述实际吞吐率与所述目标吞吐率的比值小于第三阈值、且所述第四往返时延与所述网络轻载时检测到的往返时延的差值大于第四阈值,则减小所述目标吞吐率。
  10. 根据权利要求1至9任一项所述的方法,其特征在于,所述目标吞吐率通过目标吞吐率参数传递到TCP协议栈中。
  11. 一种传输控制协议TCP数据包的发送装置,其特征在于,所述装置包括:
    时延确定单元,用于获取在网络中发送TCP数据包的第一往返时延,确定第二往返时延,所述第二往返时延为按照第一算法确定的拥塞窗口与按照第二算法确定的拥塞窗口具有同等大小时的往返时延,其中,所述第一算法根据所述第一往返时延确定拥塞窗口的增长步幅,所述第二算法根据所述第一往返时延和目标吞吐率确定拥塞窗口的增长步幅,所述目标吞吐率为所述TCP数据包对应的业务所期望获得的吞吐率;
    窗口调整单元,用于如果所述第一往返时延大于所述第二往返时延,则以第一算法确定的拥塞窗口作为第一拥塞窗口,如果所述第一往返时延小于或等于所述第二往返时延,则以第二算法确定的拥塞窗口作为所述第一拥塞窗口;
    数据包发送单元,用于将所述TCP数据包以所述第一拥塞窗口进行发送。
  12. 根据权利要求11所述的装置,其特征在于,所述第二算法中所确定的拥塞窗口的增长步幅与所述目标吞吐率正相关、与所述第一往返时延负相关,所述第一算法中所确定的拥塞窗口的增长步幅与所述第一往返时延负相关。
  13. 根据权利要求11或12所述的装置,其特征在于,所述目标吞吐率根据从所述TCP数据包的报文中解析出的所述业务的比特率确定。
  14. 根据权利要求13所述的装置,其特征在于,所述目标吞吐率根据从所述TCP数据包的报文中解析出的所述业务的比特率确定的算法为:所述目标吞吐率等于所述业务的比特率乘以扩大系数,其中,所述扩大系数大于1。
  15. 根据权利要求11至14任一项所述的装置,其特征在于,
    所述窗口调整单元,还用于如果在发送所述TCP数据包的过程中发生丢包,则按照第三算法将所述TCP数据包传输的拥塞窗口调整为第二拥塞窗口,其中,所述第二拥塞窗口根据所述TCP数据包发生丢包时的第三往返时延确定,且所述TCP数据包传输的拥塞窗口调整为第二拥塞窗口的降低步幅与所述第三往返时延负相关;
    所述数据包发送单元,还用于将所述TCP数据包以所述第二拥塞窗口进行发送。
  16. 根据权利要求15所述的装置,其特征在于,所述窗口调整单元,还用于按照第三算法将所述TCP数据包传输的拥塞窗口调整为第二拥塞窗口,具体为:
    所述窗口调整单元,用于如果所述第三往返时延等于时延区间的下限,则将所述TCP数据包发生丢包时的拥塞窗口作为所述第二拥塞窗口,如果所述第三往返时延等于所述时延区间的上限,则将预设的拥塞窗口作为所述第二拥塞窗口,其中,所述时延区间的上限为在所述网络中所述TCP的超时重传RTO,所述时延区间的下限为在所述网络轻载时的往返时延。
  17. 根据权利要求11至16任一项所述的装置,其特征在于,所述窗口调整单元,用于以第一算法确定的拥塞窗口作为第一拥塞窗口,具体包括:
    所述窗口调整单元,用于若所述第一往返时延等于所述网络轻载的往返时延,将拥塞窗口的增长步幅取值为快速恢复值来获得所述第一拥塞窗口,所述快速恢复值与慢启动的数量级相同;
    所述窗口调整单元,用于若所述第一往返时延等于所述网络中所述TCP的超时重传RTO,将拥塞窗口的增长步幅取值为1个最大报文段长度MSS来获得所述第一拥塞窗口;
    所述窗口调整单元,用于若所述第一往返时延在所述网络轻载的往返时延和所述网络中所述TCP的超时重传RTO区间变化时,将拥塞窗口的增长步幅取值在1个MSS和所述快速恢复值之间且与所述第一往返时延负相关来获得所述第一拥塞窗口。
  18. 根据权利要求11至17任一项所述的装置,其特征在于,所述窗口调整单元,用于以第二算法确定的拥塞窗口作为所述第一拥塞窗口,具体为:
    所述窗口调整单元,用于根据所述目标吞吐率和所述第一往返时延计算目标窗口,将所述拥塞窗口的增长步幅取值为所述目标窗口与在所述网络所述TCP的当前拥塞窗口的差值来确定所述第一拥塞窗口。
  19. 根据权利要求11至18任一项所述的装置,其特征在于,所述装置还包括:
    吞吐率检测单元,用于检测在所述网络发送所述TCP数据包的实际吞吐率;
    目标吞吐率调整单元,用于如果所述实际吞吐率与所述目标吞吐率的比值大于第一阈值、且所述网络发送所述TCP数据包的第四往返时延与网络轻载时检测到的往返时延的差值小于第二阈值,则增大所述目标吞吐率,如果所述实际吞吐率与所述目标吞吐率的比值小于第三阈值、且所述第四往返时延与所述网络轻载时检测到的往返时延的差值大于第四阈值,则减小所述目标吞吐率。
  20. 根据权利要求11至19任一项所述的装置,其特征在于,所述目标吞吐率通过目标吞吐率参数传递到TCP协议栈中。
  21. 一种传输控制协议TCP数据包的发送装置,其特征在于,所述发送装置包括处理器、存储器和网络接口,所述处理器分别与所述存储器和所述网络接口通过所述总线连接;
    所述存储器用于存储计算机执行指令,当所述发送装置运行时,所述处理器读取所述存储器存储的所述计算机执行指令,以执行权利要求1至10任一项所述的传输控制协议TCP数据包的发送方法。
  22. 一种系统,所述系统包括服务器和终端,所述服务器通过网络与所述终端通信连接,其特征在于,
    所述服务器为权利要求11至21任一项所述的传输控制协议TCP数据包的发送装置,通过所述网络向所述终端发送TCP数据包。
  23. 一种系统,所述系统包括服务器、第一代理设备和终端,所述第一代理设备分别与所述服务器和所述终端通信连接;其特征在于,
    所述服务器,用于经所述第一代理设备代理向所述终端发送TCP数据包;
    所述第一代理设备为权利要求11至21任一项所述的传输控制协议TCP数据包的发送装置,用于接收所述服务器向所述终端发送的所述TCP数据包,并代理所述服务器向所述终端发送所述TCP数据包。
  24. 根据权利要求23所述的系统,其特征在于,所述终端,用于将目标吞吐率通过目标吞吐率参数由所述终端的TCP协议栈传递给所述第一代理设备的TCP协议栈。
  25. 一种系统,所述系统包括服务器、第一代理设备、第二代理设备和终端,所述第一代理设备分别与服务器和第二代理设备通信连接;其特征在于,
    所述服务器,用于经所述第一代理设备代理向所述终端发送TCP数据包;
    所述第一代理设备为权利要求11至21任一项所述的传输控制协议TCP数据包的发送装置,用于接收所述服务器向所述终端发送的所述TCP数据包,并代理所述服务器向所述第二代理设备发送所述TCP数据包;
    所述第二代理设备,用于接收所述TCP数据包并转发至所述终端。
  26. 根据权利要求25所述的系统，其特征在于，
    所述终端,用于将目标吞吐率通过目标吞吐率参数由所述终端的TCP协议栈中传递给所述第二代理设备的TCP协议栈中;
    所述第二代理设备,还用于将所述目标吞吐率通过所述目标吞吐率参数由所述第二代理设备的TCP协议栈中传递给所述第一代理设备的TCP协议栈中。
PCT/CN2015/099278 2015-03-02 2015-12-28 传输控制协议tcp数据包的发送方法、发送装置和系统 WO2016138786A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2017546188A JP6526825B2 (ja) 2015-03-02 2015-12-28 伝送制御プロトコルtcpデータパケットを送信する方法及び装置、並びにシステム
EP15883836.7A EP3255847B1 (en) 2015-03-02 2015-12-28 Transmission control protocol data packet transmission method, transmission device and system
KR1020177027525A KR102030574B1 (ko) 2015-03-02 2015-12-28 전송 제어 프로토콜(tcp) 데이터 패킷을 송신하는 방법 및 장치 그리고 시스템
US15/694,581 US10367922B2 (en) 2015-03-02 2017-09-01 Method and apparatus for sending transmission control protocol TCP data packet and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510093011.3A CN105991462B (zh) 2015-03-02 2015-03-02 传输控制协议tcp数据包的发送方法、发送装置和系统
CN201510093011.3 2015-03-02

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/694,581 Continuation US10367922B2 (en) 2015-03-02 2017-09-01 Method and apparatus for sending transmission control protocol TCP data packet and system

Publications (1)

Publication Number Publication Date
WO2016138786A1 true WO2016138786A1 (zh) 2016-09-09

Family

ID=56849185

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/099278 WO2016138786A1 (zh) 2015-03-02 2015-12-28 传输控制协议tcp数据包的发送方法、发送装置和系统

Country Status (6)

Country Link
US (1) US10367922B2 (zh)
EP (1) EP3255847B1 (zh)
JP (1) JP6526825B2 (zh)
KR (1) KR102030574B1 (zh)
CN (1) CN105991462B (zh)
WO (1) WO2016138786A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113271316A (zh) * 2021-06-09 2021-08-17 腾讯科技(深圳)有限公司 多媒体数据的传输控制方法和装置、存储介质及电子设备
CN115022247A (zh) * 2022-06-02 2022-09-06 成都卫士通信息产业股份有限公司 流控制传输方法、装置、设备及介质
CN115514710A (zh) * 2022-11-08 2022-12-23 中国电子科技集团公司第二十八研究所 一种基于自适应滑动窗的弱连接流量管控方法

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102363534B1 (ko) * 2015-06-08 2022-02-17 삼성전자주식회사 통신 시스템에서 tcp 기반의 전송 제어 방법 및 장치
CN105827537B (zh) * 2016-06-01 2018-12-07 四川大学 一种基于quic协议的拥塞改进方法
WO2018041366A1 (en) * 2016-09-02 2018-03-08 Telefonaktiebolaget Lm Ericsson (Publ) Tcp proxy using a communication distance indicator
CN108023686B (zh) * 2016-11-02 2022-03-25 中兴通讯股份有限公司 一种tcp延时处理方法、装置及系统
CN106713432B (zh) * 2016-12-13 2019-11-05 深信服科技股份有限公司 数据缓存方法及网络代理设备
CN109274704B (zh) * 2017-07-17 2021-06-29 中国电信股份有限公司 Tcp加速方法和装置、加速效果判断控制器和网关
CN109510801B (zh) * 2017-09-15 2021-08-31 北京华耀科技有限公司 显式正向代理与ssl侦听集成系统及其运行方法
CN108075988A (zh) * 2017-11-16 2018-05-25 华为技术有限公司 数据传输方法和装置
CN111010261A (zh) * 2018-10-08 2020-04-14 西安旌旗电子股份有限公司 智能远控水表系统及其方法
US11082883B2 (en) * 2018-12-20 2021-08-03 Verizon Patent And Licensing Inc. Providing passive bandwidth estimation of a wireless link in a transmission control protocol (TCP) slow start state
CN110138608B (zh) * 2019-05-09 2022-08-30 网宿科技股份有限公司 网络业务服务质量管理的方法及服务器
CN110120921B (zh) * 2019-05-13 2022-07-01 深圳市赛为智能股份有限公司 拥塞避免方法、装置、计算机设备及存储介质
US10999206B2 (en) * 2019-06-27 2021-05-04 Google Llc Congestion control for low latency datacenter networks
US11329922B2 (en) * 2019-12-31 2022-05-10 Opanga Networks, Inc. System and method for real-time mobile networks monitoring
CN117676695A (zh) * 2020-02-19 2024-03-08 航天恒星科技有限公司 Tcp传输方法、装置和系统
CN111404783B (zh) * 2020-03-20 2021-11-16 南京大学 一种网络状态数据采集方法及其系统
CN113556213B (zh) * 2020-04-23 2022-12-06 华为技术有限公司 超时重传时间rto确定方法及相关装置
EP3907943B1 (en) * 2020-05-05 2022-04-27 Axis AB Round-trip estimation
US11483249B2 (en) * 2020-09-29 2022-10-25 Edgecast Inc. Systems and methods for dynamic optimization of network congestion control
CN112511451B (zh) * 2020-11-24 2022-11-08 南京邮电大学 控制bbr收敛周期长度的方法及服务器
CN112702362B (zh) * 2021-03-24 2021-06-08 北京翼辉信息技术有限公司 Tcp/ip协议栈的增强方法、装置、电子设备及存储介质
CN114268988B (zh) * 2021-12-30 2022-10-21 广州爱浦路网络技术有限公司 基于5g的低轨卫星拥塞控制方法、系统、装置及介质
CN114401224B (zh) * 2022-01-19 2023-07-11 平安科技(深圳)有限公司 一种数据限流方法、装置、电子设备以及存储介质
CN115766519A (zh) * 2022-10-24 2023-03-07 株洲华通科技有限责任公司 便携通信设备的数据传输方法及系统
CN116055402A (zh) * 2023-01-05 2023-05-02 果子(青岛)数字技术有限公司 用于大数据和边缘计算的高速通信网络优化方法和装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1836418A (zh) * 2003-08-14 2006-09-20 国际商业机器公司 分组重新排序期间的改进的传输控制协议性能
US20090073975A1 (en) * 2007-09-19 2009-03-19 Nec Corporation Communication method and communication device
US20120213069A1 (en) * 2011-02-23 2012-08-23 Fujitsu Limited Transmission control method, transmission control system, communication device and recording medium of transmission control program
CN102739515A (zh) * 2010-04-13 2012-10-17 北京英华高科科技有限公司 异构网络的tcp拥塞控制
US20150049611A1 (en) * 2005-11-30 2015-02-19 Cisco Technology, Inc. Transmission control protocol (tcp) congestion control using transmission delay components

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7423977B1 (en) * 2004-08-23 2008-09-09 Foundry Networks Inc. Smoothing algorithm for round trip time (RTT) measurements
WO2009146726A1 (en) * 2008-06-06 2009-12-10 Telefonaktiebolaget Lm Ericsson (Publ) Technique for improving congestion control
US8877880B2 (en) * 2010-11-17 2014-11-04 Exxonmobil Chemical Patents Inc. Method for controlling polyolefin properties
KR102016446B1 (ko) * 2011-12-28 2019-10-21 씨디에프 케 유안 지연이 큰 네트워크들에 대한 tcp 혼잡 제어
JP5867160B2 (ja) * 2012-02-28 2016-02-24 富士通株式会社 通信制御装置、通信制御方法および通信制御プログラム
US8711690B2 (en) * 2012-10-03 2014-04-29 LiveQoS Inc. System and method for a TCP mapper
JP6173826B2 (ja) 2013-08-07 2017-08-02 日本放送協会 パケット送信装置およびそのプログラム
CN104158760B (zh) 2014-08-29 2018-08-03 中国科学技术大学 一种广域网tcp单边加速的方法及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1836418A (zh) * 2003-08-14 2006-09-20 国际商业机器公司 分组重新排序期间的改进的传输控制协议性能
US20150049611A1 (en) * 2005-11-30 2015-02-19 Cisco Technology, Inc. Transmission control protocol (tcp) congestion control using transmission delay components
US20090073975A1 (en) * 2007-09-19 2009-03-19 Nec Corporation Communication method and communication device
CN102739515A (zh) * 2010-04-13 2012-10-17 北京英华高科科技有限公司 异构网络的tcp拥塞控制
US20120213069A1 (en) * 2011-02-23 2012-08-23 Fujitsu Limited Transmission control method, transmission control system, communication device and recording medium of transmission control program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CARDONA, E. ET AL.: "A Uniform Resource Name (URN) Namespace for CableLabs", INTERNET ENGINEERING TASK FORCE (IETF), REQUEST FOR COMMENCE: 6289, 30 June 2011 (2011-06-30), XP015076065, ISSN: 2070-1721 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113271316A (zh) * 2021-06-09 2021-08-17 腾讯科技(深圳)有限公司 多媒体数据的传输控制方法和装置、存储介质及电子设备
CN115022247A (zh) * 2022-06-02 2022-09-06 成都卫士通信息产业股份有限公司 流控制传输方法、装置、设备及介质
CN115022247B (zh) * 2022-06-02 2023-10-20 成都卫士通信息产业股份有限公司 流控制传输方法、装置、设备及介质
CN115514710A (zh) * 2022-11-08 2022-12-23 中国电子科技集团公司第二十八研究所 一种基于自适应滑动窗的弱连接流量管控方法
CN115514710B (zh) * 2022-11-08 2023-03-10 中国电子科技集团公司第二十八研究所 一种基于自适应滑动窗的弱连接流量管控方法

Also Published As

Publication number Publication date
CN105991462A (zh) 2016-10-05
KR20170121269A (ko) 2017-11-01
JP6526825B2 (ja) 2019-06-05
CN105991462B (zh) 2019-05-28
EP3255847A4 (en) 2018-02-28
KR102030574B1 (ko) 2019-10-10
JP2018508151A (ja) 2018-03-22
EP3255847A1 (en) 2017-12-13
US20170366650A1 (en) 2017-12-21
US10367922B2 (en) 2019-07-30
EP3255847B1 (en) 2020-08-05

Similar Documents

Publication Publication Date Title
WO2016138786A1 (zh) 传输控制协议tcp数据包的发送方法、发送装置和系统
US10462707B2 (en) Data transmission method and apparatus
Briscoe et al. Reducing internet latency: A survey of techniques and their merits
US10715282B2 (en) Method and related device for improving TCP transmission efficiency using delayed ACK
US9444749B2 (en) Apparatus and method for selectively delaying network data flows
US10560382B2 (en) Data transmission method and apparatus
US20180349803A1 (en) Dynamically optimized transport system
US11088957B2 (en) Handling of data packet transfer via a proxy
US11496403B2 (en) Modifying the congestion control algorithm applied to a connection based on request characteristics
Shen et al. On TCP-based SIP server overload control
JP2010504688A (ja) ネットワーク・プロトコルスタックのハンドオフおよび最適化を実装するための方法およびモジュール
CN104683259A (zh) Tcp拥塞控制方法及装置
Mudambi et al. A transport protocol for dedicated end-to-end circuits
CN108337171B (zh) 与dtn网络兼容的ip分组转发方法、网络节点及存储介质
US10015288B2 (en) Communication apparatus and control method of communication apparatus
Nikitinskiy et al. A stateless transport protocol in software defined networks
Ahsan et al. Performace evaluation of TCP cubic, compound TCP and NewReno under Windows 20H1, via 802.11 n Link to LTE Core Network
Xie et al. NLPC: A nimble low-priority congestion control algorithm for high-speed and lossy networks
Ogura et al. Congestion Control with Two Fair Allocation Modes to Achieve RTT-Fairness
Tekala et al. Dynamic adapting of Scalable TCP congestion control parameters
Davern et al. Optimising Internet Access over Satellite Backhaul
Tullimalli Multimedia streaming using multiple TCP connections

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15883836

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017546188

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2015883836

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 20177027525

Country of ref document: KR

Kind code of ref document: A