MX2007006395A - Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet nextgentcp (square wave form) tcp friendly san. - Google Patents

Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet nextgentcp (square wave form) tcp friendly san.

Info

Publication number
MX2007006395A
Authority
MX
Mexico
Prior art keywords
tcp
ack
packet
rtt
packets
Prior art date
Application number
MX2007006395A
Other languages
Spanish (es)
Inventor
Bob Tang
Original Assignee
Bob Tang
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB0426176A external-priority patent/GB0426176D0/en
Priority claimed from GB0501954A external-priority patent/GB0501954D0/en
Priority claimed from GB0504782A external-priority patent/GB0504782D0/en
Priority claimed from GB0512221A external-priority patent/GB0512221D0/en
Priority claimed from GB0520706A external-priority patent/GB0520706D0/en
Application filed by Bob Tang filed Critical Bob Tang
Priority claimed from PCT/IB2005/003580 external-priority patent/WO2006056880A2/en
Publication of MX2007006395A publication Critical patent/MX2007006395A/en

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Various techniques of simple modifications to the TCP/IP protocol and other susceptible protocols, and of related network switch/router configurations, are presented for immediate ready implementation, over the external Internet, of a virtually congestion-free guaranteed-service capable network, without requiring use of existing QoS/MPLS techniques, without requiring any of the switch/router software within the network to be modified or to contribute to achieving the end-to-end performance results, and without requiring provision of unlimited bandwidth at each and every inter-node link within the network.

Description

IMMEDIATE READY IMPLEMENTATION OF VIRTUALLY CONGESTION FREE GUARANTEED SERVICE CAPABLE NETWORK: EXTERNAL INTERNET NEXTGENTCP (SQUARE WAVE FORM) TCP FRIENDLY SAN

Description of the Invention

Current RSVP / QoS / tag-switching implementations etc., intended to facilitate real-time multimedia / voice / fax over IP applications on the Internet with assured quality of service, suffer from incomplete deployment. Additionally, there is a multitude of vendor implementations, such as use of the ToS (Type of Service) field in the data packet, tag-based technologies, source IP addresses, MPLS, etc.; at each QoS-capable router traversed by data packets, the packets need to be examined by the switch/router for whichever of the above vendor fields is implemented (hence buffered/queued in memory) before the data packet can be forwarded. Assuming a terabit link carrying QoS data packets at the maximum transmission rate, the router would thus need to examine (and buffer/queue) every arriving data packet and expend CPU processing time checking any of the several fields above (for example, a QoS-priority source IP address table, which by itself may amount to several tens of thousands of entries). In this way, the router manufacturer's specified throughput capability (for forwarding normal data packets) cannot be attained under heavy QoS data-packet load, and some QoS packets will suffer delays or drops even though the total data-packet load has not exceeded the link bandwidth or the normal data-packet throughput capacity specified by the router manufacturer. Also, the lack of interoperable standards means that the promised capacity of some IP technologies to support these QoS value-added services is not yet fully realized.
Described herein are methods for guaranteeing quality of service to multimedia / voice / fax / real-time applications etc., with end-to-end reception qualities equal to or better than those attainable in the Internet / Internet subset / proprietary network / WAN / LAN, without requiring the switches/routers traversed by the data packets to be RSVP / tag-switching / QoS capable, while assuring service guarantees better than the current state-of-the-art QoS implementations. Additionally, the data packets will not necessarily need to be buffered/queued for the purpose of examining any of the existing QoS vendors' implementation fields, thus avoiding the possible drop or delay scenarios mentioned above and enabling the full throughput capability specified by the switch/router manufacturer to be attained while forwarding these guaranteed-service data packets, even at full link-bandwidth transmission rates. The existing TCP/IP stack is modified for better congestion recovery / avoidance / prevention, and/or to enable virtually congestion-free guaranteed-service TCP/IP capability, improving upon the existing TCP/IP mechanism of simultaneous packet retransmission and multiplicative rate decrease upon RTO timeout; it may be further modified so that the existing simultaneous multiplicative rate-decrease timeout and packet-retransmission timeout, together known as the RTO timeout, are decoupled into separate processes with different packet-retransmission timeout and rate-decrease timeout values.
The TCP/IP stack is modified so that the coupled simultaneous rate decrease and packet retransmission upon RTO timeout events take the form of a complete "pause" in the sending of packets/data units for the particular source-destination TCP flow whose RTO has expired, while still allowing 1 or a defined number of packets/data units of the particular TCP flow (which may be the timed-out packets/data units) to be sent onward during each complete "pause" / extended "pause" interval. The coupled RTO rate-decrease and packet-retransmission interval, for source-destination node pairs where the acknowledgement of a corresponding sent packet/data unit has not yet been received back from the destination by the TCP/IP stack before the "pause" is invoked, is set to be: (A) the uncongested RTT between the source and destination node pair * a multiplier that is always greater than 1, or the uncongested RTT between the source and destination node pair plus an interval sufficient to accommodate the variable delays introduced by the various components; or (B) the uncongested RTT between the most distant source-destination node pair in the network * a multiplier that is always greater than 1, or the uncongested RTT between the most distant source-destination node pair in the network plus an interval sufficient to accommodate the variable delays introduced by the various components; or (C) a value derived dynamically from historical RTT values according to some contemplated algorithm, for example with a multiplier that is always greater than 1, or plus an interval sufficient to accommodate the variable delays introduced by the various components, etc.; or (D) any user-supplied value, for example 200 ms for audiovisual perception tolerance, or 4 seconds for HTTP web-page download tolerance, etc. It is noted that for time-critical audiovisual flows between the most distant source-destination node pairs in the world, the uncongested RTT may be approximately 250 ms, in which case the RTO settings of such long-distance time-critical flows will lie above the usual audiovisual tolerance period and must be tolerated, as with the quality of today's transcontinental mobile calls via satellite; with the RTO interval values in (A), (B), (C) or (D) above capped at real-time audiovisual perception tolerance limits, for example 200 ms, virtually congestion-free guaranteed-service network performance is achieved.
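The interval options (A) to (D) above can be sketched as follows; the function shape, parameter names, option precedence, and the default multiplier of 1.5 are illustrative assumptions, not a reference implementation:

```python
def rto_interval(uncongested_rtt, multiplier=1.5, variable_delay_allowance=0.0,
                 historical_rtts=None, user_tolerance=None):
    """Return a pause/RTO interval in seconds per options (A)-(D).

    The order in which options are tried here is an assumption; the text
    lists them as alternatives."""
    if user_tolerance is not None:       # option (D): user-supplied value,
        return user_tolerance            # e.g. 0.200 s audiovisual tolerance
    if historical_rtts:                  # option (C): derived from historical
        base = min(historical_rtts)      # RTT samples per some algorithm
        return base * multiplier + variable_delay_allowance
    # options (A)/(B): uncongested RTT (per-pair, or of the network's most
    # distant pair) times a multiplier always greater than 1, plus an
    # allowance for variable component delays
    return uncongested_rtt * multiplier + variable_delay_allowance

# e.g. a 100 ms uncongested RTT with multiplier 1.5 gives a 150 ms interval
print(round(rto_interval(0.100), 3))               # 0.15
print(rto_interval(0.100, user_tolerance=0.200))   # 0.2
```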
It is noted that the "pause" TCP/IP modification described above, allowing only 1 or a defined number of packets/data units to be sent during a complete pause interval or each successive complete pause interval, in place of the existing coupled simultaneous rate decrease and packet retransmission upon RTO, can achieve faster and better congestion recovery / avoidance / prevention, or even enable virtually congestion-free guaranteed-service capability in the Internet / Internet subsets / WAN / LAN, compared with the existing TCP/IP simultaneous multiplicative rate-decrease RTO mechanism; it is also noted that the existing TCP/IP stack's coupled simultaneous rate decrease and packet retransmission can be decoupled into separate processes with different packet-retransmission timeout and rate-decrease timeout values. It is further noted that the TCP/IP modifications of the preceding paragraph can be adopted incrementally by a small initial minority of users without necessarily having any significant adverse performance effect on the modified "pause" TCP adopters; moreover, the packets/data units sent using the modified "pause" TCP/IP will only rarely be dropped by the switches/routers along the route, and the settings can be fine-tuned even to produce no packet/data-unit drops at all. As the modifications are adopted by the majority, or universally, the existing Internet will attain virtually congestion-free guaranteed-service capability, and/or freedom from packet drops along the route by the switches/routers due to buffer overflows caused by congestion. As an example, where every switch/router in the Internet / Internet subset / proprietary network / WAN / LAN has a buffer of at least s seconds equivalent (that is, an inbound buffer size of s seconds * the sum of all inbound physical link bandwidths), and the originating sender's TCP/IP stack RTO timeout, or decoupled rate-decrease timeout, is set to s seconds or less (which may lie within the audiovisual or HTTP tolerance period), then no packet/data unit sent from the modified TCP/IP source will be dropped due to buffer overflow at the intermediate switches/routers, and it will in the worst case arrive within a time period equal to s seconds * the number of nodes traversed, or the sum of all the intermediate nodes' buffer-size equivalents in seconds, whichever is greater (preferably this is, or can be made to be, within the required defined tolerance period). It is therefore good practice for the buffer sizes of all intermediate node switches/routers to be at least equal to or greater than the equivalent RTO timeout interval, or decoupled rate-decrease timeout, of the modified TCP/IP stacks of the originating sender sources.
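The worst-case arrival bound in the example above is simple arithmetic and can be sketched as follows; values are in milliseconds and purely illustrative:

```python
def worst_case_arrival_ms(s_ms, node_buffer_equivalents_ms):
    """Upper bound on arrival delay per the text: s * (number of nodes
    traversed), or the sum of the intermediate nodes' buffer-size
    equivalents, whichever is greater."""
    nodes = len(node_buffer_equivalents_ms)
    return max(s_ms * nodes, sum(node_buffer_equivalents_ms))

# three intermediate nodes, s = 200 ms, buffers of 200/300/100 ms equivalent
print(worst_case_arrival_ms(200, [200, 300, 100]))  # 600
```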
The originating sender's TCP/IP stack will enter its RTO timeout, or decoupled rate-decrease timeout (taking the form of a "pause" herein), when the cumulative buffering delays at the intermediate nodes add up to equal or exceed that RTO timeout or decoupled rate-decrease timeout of the originating sender's TCP/IP stack, and this timeout value can be adjusted/made to lie within the required defined tolerance period. This is especially so where the single packet or defined number of packets/data units sent during any pause period/interval is further excluded from, or not permitted to cause, any RTO "pause" or decoupled rate-decrease "pause" event, even if its corresponding acknowledgement subsequently arrives back after the RTO timeout or decoupled rate-decrease timeout has elapsed. In that case, under worst-case congestion, the originating sender's TCP/IP stack will alternate between "pause" and normal packet-transmission phases of equal duration; that is, the originating sender's TCP/IP stack will only halve its transmission rate in the worst-case scenario: during the "pause" it sends almost nothing, but once the pause ceases it resumes sending at the full rate allowed under the sliding-window mechanism.
Additionally, with all or most TCP/IP stacks in the Internet / Internet subsets / WAN / LAN modified in this way, and with the RTO timeout or decoupled rate-decrease timeout set to a common value, for example t milliseconds within the defined perception tolerance period (where t = the uncongested RTT of the most distant source-destination node pair in the network * a multiplier m), all packets sent within the Internet / Internet subsets / WAN / LAN will arrive at their destinations experiencing cumulative total buffering delays along the path of only s * the number of nodes traversed, or (t - uncongested RTT) + t, whichever is less. This contrasts favourably with existing RFC TCP/IP stack implementations, which cannot guarantee that no packet will be dropped, nor that all packets sent will arrive within any defined tolerance period. During the "pause", congestion on the intervening route is relieved by the "pause" itself, and the single or small defined number of packets sent during the "pause" will usefully probe the intervening route to determine whether the congestion continues or has ceased, so that the modified TCP/IP stack can react accordingly.
Next Generation TCP: Further Refinements and Modifications for External Internet Nodes (which may also be applicable to internal network nodes). The same mechanism of decoupled packet-retransmission timeout and transmission-rate decrease / "pause" (that is, ACK timeout interval and packet-retransmission timeout) applied to guaranteed-service Internet subsets / WAN / LAN can be applied similarly to nodes in the external Internet / external WAN / external LAN cloud. Here an estimated uncongested RTT (that is, a variable tracking the smallest time period yet observed for a corresponding return ACK to be received) is used in place of the known uncongested RTT value available within the guaranteed-service Internet subset / WAN / LAN: from the received ACKs (which may be ACKs of the usual data packets sent, or of ICMP probes, or of UDP probes), a variable holding the smallest time period yet observed for an ACK to be received (since the corresponding packet's send time) is updated; this uncongested RTT estimate serves as the most recent estimate of the uncongested RTT value between source and destination (better still would be an actually known uncongested RTT between the external Internet node and the source). Use can also be made of the fact that the largest uncongested RTT on the planet is, for example, 400 ms, so the maximum uncongested RTT may be taken as, for example, 400 ms (but care must be taken where an end is, for example, a small 56K modem bandwidth carrying large packets, for example 1500 bytes, since it takes approximately 250 ms for a 1500-byte packet to fully exit or enter such a modem; it would thus be preferable also to obtain the time at which the packet actually completes leaving the modem, and to adjust the uncongested RTT estimate accordingly).
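The uncongested-RTT estimator above can be sketched as a running minimum over ACK round trips, capped at the text's example planetary maximum of 400 ms, together with the congestion test used below in this section (RTT greater than the base estimate times a multiplier a always greater than 1). Class and method names, and the default a = 1.5, are assumptions:

```python
PLANETARY_RTT_CAP = 0.400  # the text's example largest uncongested RTT (s)

class UncongestedRTT:
    def __init__(self, a=1.5):
        self.base = None   # smallest send-to-ACK interval observed so far
        self.a = a         # multiplier, always > 1

    def on_ack(self, send_time, ack_time):
        """Update the estimate from one ACK (data ACK or ICMP/UDP probe);
        return True if this RTT suggests buffering along the path."""
        rtt = ack_time - send_time
        if self.base is None or rtt < self.base:
            self.base = min(rtt, PLANETARY_RTT_CAP)
        return rtt > self.base * self.a

est = UncongestedRTT()
print(est.on_ack(0.0, 0.25))   # False: first sample just sets the base
print(est.on_ack(1.0, 1.10))   # False: a lower sample refines the base
print(est.on_ack(2.0, 2.40))   # True: 0.40 s exceeds 0.10 s * 1.5
```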
If any packet's RTT (derived from its ACK) is greater than the uncongested RTT * a (where a is a multiplier always greater than 1), then a "pause" is triggered (with 1 or a defined number of data packets, or probe packets only, still allowed through during the "pause" or extended "pause" intervals), or the rates are decreased to some percentage, for example 95%, of the existing rates (implementable, for example, by traffic-shaping techniques or by decreasing the congestion window size, etc.), and/or the modified TCP simply refrains from increasing the window size / congestion window size upon subsequent ACKs, for as long as the RTT of the subsequent/most recently received ACK remains greater than the uncongested RTT * a, or for a defined period derived from some contemplated algorithm, or any combination of the above. Implementing the rate decrease directly in the TCP stack is trivial, but a monitor software / IP forwarding module / TCP proxy etc. can implement it using existing rate-shaping / rate-limiting techniques, or by implementing a further window-size / congestion-window-size mechanism for each TCP flow within the monitor software / IP forwarding module / TCP proxy that simply mirrors the most recent effective window size of the particular TCP flow (this mechanism lying dormant), but stops mirroring the most recent effective window size (that is, this mechanism becomes active) when/while the RTT of the most recent ACK received for the particular flow remains greater than the uncongested RTT * a; during that time, the monitor software's window size / congestion window size value for this particular flow will instead be decreased to a percentage, for example 95%, of the most recent computed/mirrored actual effective window size, that is, the lower of the window size / advertised window size / congestion window size (it is noted that this operation can optionally be delayed by t seconds, for example 1 second, or per some contemplated algorithm). [NOTE: when implemented in monitor software on Windows platforms, the sender's TCP congestion window size cannot be obtained directly in the absence of the Windows TCP stack source code, so it needs to be derived from the network; the actual effective window size of the sender's TCP source can be derived as effective window size = min(window size, congestion window size, receiver's advertised window size). Existing state-of-the-art techniques may be used to derive/approximate the sender TCP source's current effective window size / congestion window size values. As an example, assuming no buffering on the connection, the sender TCP source's congestion window size may be taken to be the uncongested RTT * the current sending rate, the current sending rate being computed by picking one "distinguished" RTT-monitored packet, noting its send time and its return-ACK time: current sending rate = (number of bytes in transit between the send time and the return-ACK time) / (return-ACK time - send time); the sender TCP source's current congestion window size may then be assumed equal to the number of bytes in transit. As another example, the sender TCP source's current effective window size / congestion window size can similarly be derived by monitoring the total bytes forwarded by the monitor software within an RTT interval.]
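The derivation in the NOTE above can be sketched as follows; the function names are assumptions, and the congestion-window equivalence holds only under the no-buffering assumption the text states:

```python
def sending_rate_and_cwnd(bytes_in_transit, send_time, ack_return_time):
    """Time one 'distinguished' packet from send to return ACK and count
    bytes in transit in between. Returns (current sending rate in bytes/s,
    assumed congestion window). Per the text, absent buffering the
    congestion window is taken to equal the bytes in transit, i.e.
    uncongested RTT * current sending rate."""
    rtt = ack_return_time - send_time
    rate = bytes_in_transit / rtt
    return rate, bytes_in_transit

def effective_window(window, cwnd, advertised):
    # effective window size = min(window size, congestion window size,
    # receiver's advertised window size)
    return min(window, cwnd, advertised)

rate, cwnd = sending_rate_and_cwnd(64_000, 10.0, 10.1)  # 64 kB over 100 ms
print(effective_window(128_000, cwnd, 65_535))          # 64000
```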
In the monitor software, the rate-decrease percentage need not depend on deriving/estimating the actual effective window size as above; instead the monitor software may perform the "pause" (and/or allow 1 or a defined number of packets to be sent during each pause interval) as follows: if the total of the separated periodic pause intervals is p * I (I being the duration of each separated periodic pause interval, in seconds) within, for example, 1 second, the effective throughput per second becomes ((1 - (p * I)) / 1 second) * (current effective window size / current RTT); hence, to effect a 5% rate decrease, p * I must equal 0.05. The "pause" intervals need not be uniformly periodically spaced, and/or each "pause" interval need not even be of the same duration. EXAMPLE: with a total of 5% less time available to transmit during the "pauses", the source-destination bandwidth-delay product is now reduced to 0.95 of its existing value; this is because there are now 5% fewer non-overlapping RTT intervals within, for example, 1 second in which to transmit a full effective window's worth of data bytes per non-overlapping RTT interval.
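The p * I arithmetic above can be sketched directly; the numbers are the text's own examples and the function name is an assumption:

```python
def throughput_after_pauses(window_bytes, rtt, p, pause_len):
    """Bytes/second after p pauses of pause_len seconds within each
    one-second frame: (1 - p*I) * (effective window / RTT)."""
    return (1.0 - p * pause_len) * (window_bytes / rtt)

full = throughput_after_pauses(64_000, 0.1, 0, 0)     # no pauses
cut = throughput_after_pauses(64_000, 0.1, 5, 0.01)   # 5 pauses x 10 ms = 5%
print(round(cut / full, 2))  # 0.95, the bandwidth-delay example above
```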
The duration of each "pause" interval should preferably be set to at least the equivalent of one minimum uncongested RTT, but can be made smaller if required. Example: in VoIP transmissions sending one sampled packet every 20 ms (assumed much smaller than the largest uncongested RTT), the pause budget may be a single "pause" interval of 50 ms within, for example, 1 second (that is, a rate decrease equivalent to a 5% effective window-size decrease), or 5 evenly spaced periodic "pauses" within, for example, 1 second, each of 10 ms duration (so as not to introduce prolonged delays into the time-critical VoIP packet sendings), or even 10 evenly spaced periodic "pauses" within, for example, 1 second, each of 5 ms duration, and so forth. Additionally, the sender TCP source code can similarly implement current effective window-size adjustment wholly via these "pause" methods, entirely replacing the need for congestion window-size adjustments; in such modified TCPs, the current effective window size at any time will be min(window size, receiver's advertised window size) * ((1 - (p * I)) / 1 second). The window should not be decreased repeatedly while the RTT of the flow's successively received ACKs remains greater than the uncongested RTT * a; but additionally, if the RTT of the flow's most recently received ACK, for example one corresponding to a packet sent after the most recent rate decrease, now exceeds the uncongested RTT * b (b always > a), the monitor software's window size / congestion window size value can now optionally be decreased repeatedly, for example to 90/95% (l% or m%) of the current monitor software window size / congestion window size value. {Here b denotes a more severe congestion level than a, or even packet drop; either or both of a and b may be set such that exceeding them very probably signifies imminent or actual packet-drop events. The monitor software can optionally delay the above operations by t seconds, for example 1 second, so that all existing unmodified TCPs will have synchronised their rate decreases,} and/or refrain from increasing the window size / congestion window size for certain periods, based on some contemplated algorithm, while certain conditions hold, for example while the RTT of the flow's subsequent/most recently received ACK remains above the uncongested threshold. Where monitor software is used, the sender's TCP of course continues performing its own slow start / congestion avoidance / coupled RTO etc. The monitor software can predict/detect a TCP RTO event, for example when the ACK of a sent segment has still not been received after a very long period, for example 1 second, or upon a sudden halving of the flow's sending rate, etc. The monitor software may additionally choose to decrease its mirrored window size / congestion window size value, for example to 90% (n%) of the existing value, and/or simply not increase its own effective window size / congestion window size value for the particular flow for some period derived from some contemplated algorithm, for example while the RTT of the subsequent/most recently received ACK remains greater than the uncongested RTT * a. The monitor software may additionally implement its own packet-retransmission timeout as well; this requires that the monitor software always retain a sliding window of copies of the sent packets and a retransmission module similar to TCP's, whereby the monitor software can perform the functions of the preceding paragraph much faster, without needing to wait for the TCP's RTO indications.
The monitor software can therefore optionally prevent late ACKs from causing an RTO in the TCP, for example by intercepting the ACKs destined for the TCP and controlling/pacing the TCP through generated/modified ACKs: for example, setting the intercepted ACKs' advertised receiver window size to 0 to "pause" the TCP for the desired period, or to some desired value to decrease the TCP's effective window size, or generating DUP ACKs with the acknowledgement-number field set to the last sent sequence number to cause the TCP to halve its effective window size without necessarily causing retransmission of current packets, etc. The monitor software can optionally delay the above operations by t seconds, for example 1 second, so that all existing unmodified TCPs will have synchronised their various rate decreases. Various different algorithms / combinations of algorithms may be contemplated in place of those illustrated/outlined above. Various existing state-of-the-art methods or component methods may further be incorporated into any of the methods or component methods described herein as improvements. The modified TCP flow (or likewise modified RTP over UDP / modified UDP etc.) here need not halve its rate, since it does not keep increasing its rate when congested (during buffering events) to the point of causing packet drops, and the, for example, 10% / 5% decrease in sender rates ensures that new flows are not starved (any other existing unmodified TCP flow will certainly undergo a 50% decrease, yet will always strive to increase its rate until it causes packet drops again). New flows will accumulate their fair share over time. This largely preserves the low latencies etc. of the existing established flows (suitable for VoIP / multimedia), and mirrors the existing traditional PSTN call-admission schemes.
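The two ACK manipulations above can be modelled as follows; this is an illustrative record-level model, not a real packet interceptor, and the Ack record and its field names are assumptions:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Ack:
    ack_no: int        # acknowledgement-number field
    adv_window: int    # advertised receiver window size

def pause_ack(ack: Ack) -> Ack:
    # a zero advertised window makes the sender TCP stop transmitting
    return replace(ack, adv_window=0)

def dup_acks_to_halve(last_sent_seq: int, adv_window: int, n: int = 3):
    # n duplicate ACKs trigger the TCP's multiplicative window halving;
    # using the last sent sequence number as the acknowledgement number
    # avoids spurious retransmission of current packets
    return [Ack(ack_no=last_sent_seq, adv_window=adv_window) for _ in range(n)]

paused = pause_ack(Ack(ack_no=5000, adv_window=65535))
dups = dup_acks_to_halve(last_sent_seq=9000, adv_window=65535)
print(paused.adv_window, len(dups), dups[0].ack_no)  # 0 3 9000
```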
The modified TCP / modified RTP over UDP / modified UDP here retain their established share, or most of their established share, of the link bandwidth, while not causing additional congestion or packet drops. TCP's exponential increase up to the threshold, linear increase during congestion avoidance beyond the threshold, and sliding-window / congestion-window mechanisms etc. ensure that the onset of a congestion bottleneck is gradual, so both the modified TCPs and the existing unmodified TCPs can react to eliminate the congestion. The modified TCP / modified RTP over UDP / modified UDP here may even employ a rapid flash burst of sufficient extra traffic, for example when the congestion level is close to causing packet drops, to ensure that all, or selected, existing flows traversing the particular congested links receive packet-drop notifications and reduce their transmission rates; existing unmodified TCPs will halve their rates and take a long time to build back up to the pre-congestion transmission rates, while the modified TCPs retain most of their established share of bandwidth along the links. This will most usefully encourage incremental adoption of these simple decoupled TCP modifications on the public Internet. Modified sender TCP sources will achieve higher throughput and retain their established share of bottleneck link bandwidth when bottleneck-link congestion causes drops (or when physical transmission errors alone cause packet drops), while maintaining fairness between flows (compare existing TCPs, which lose half of their established bandwidth on a single packet drop), and will themselves cause no packet loss. This modified sender-source TCP overcomes the existing TCP rate-recovery problems, caused by even a single packet drop, in high-bandwidth long-latency networks.
Where the sender's TCP source traffic originates from external Internet / WAN / LAN nodes, and assuming the external source traffic is timestamped (allowing the receiver's TCP to derive the path transmission time, that is, a one-way transmission delay from source to destination), the foregoing modified sender-source TCP methods can be adapted to act as receiver-based methods. The source's timestamps need not be exactly synchronised to the receiver: the receiver can ignore the source system's clock offset and drift. The OTTest (the most recently updated estimate of the one-way transmission latency of packets received from source to destination, being the lowest value yet derived of: current receiver system time when the packet is received - the sender timestamp of the received packet) is derived at the receiver. Any increase in the OTT observed in subsequently received packets will indicate the incipient onset of congestion along the path (that is, at least one forward link along the path is now fully utilised at 100% and packets are beginning to be buffered along the path), and will signal that the sender's TCP source should now activate the modified rate decrease or the "pause" mechanism. The receiver can signal this to the sender's TCP source: - by setting the advertised window size to zero in the return ACKs for an appropriate period, before reverting to the same original advertised window size after the appropriate "pause" or appropriate periodic "pauses"; - by setting the advertised window size to an appropriately decreased value of the sender TCP source's derived/estimated current effective window size (effective window size = min(window size, congestion window size, receiver's advertised window size)), for example 95% of the sender TCP source's derived/estimated current effective window size.
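The receiver-side OTTest logic above can be sketched as follows. With unsynchronised clocks the absolute (arrival - timestamp) value is meaningless, but the running minimum, and any later rise above it, remain valid signals, which is all the method needs. An additive margin is used here as an assumption, rather than a multiplier, because the offset-contaminated base value may even be negative; the class name and the 50 ms margin are also assumptions:

```python
class OTTEstimator:
    def __init__(self, margin=0.050):
        self.base = None      # lowest observed (arrival - timestamp) so far
        self.margin = margin  # assumed rise (s) treated as incipient buffering

    def on_packet(self, sender_ts, recv_ts):
        """Return True if this packet's OTT indicates congestion onset,
        i.e. buffering beginning along the forward path."""
        ott = recv_ts - sender_ts
        if self.base is None or ott < self.base:
            self.base = ott
        return ott > self.base + self.margin

est = OTTEstimator()
print(est.on_packet(100.0, 100.5))  # False: first sample sets the base
print(est.on_packet(101.0, 101.8))  # True: OTT rose 0.3 s above the base
```

Upon a True result, the receiver would signal the sender as described above, for example by advertising a zero or decreased window in the return ACKs.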
The sender's TCP source will then not continuously increase its effective window size upon the ACKs received within each RTT, for as long as the modified receiver's TCP sends acknowledgements advertising the same decreased derived/estimated effective window size. If the return ACKs' advertised receiver window size is subsequently changed again, its increases will not cause any packet drops, since the modified receiver's TCP will ensure that the sender's TCP source again decreases its effective window size at the next incipient onset of congestion along the path. Other possible techniques include the receiver TCP issuing DUP ACKs (3 DUP ACKs in succession, to trigger the sender TCP source's multiplicative congestion-window halving). During the initial TCP connection establishment phase, the modified receiver TCP will negotiate the timestamp option with the sender's TCP source. This modified receiver TCP / receiver-based monitor software does not require the sender's TCP to be modified at all.
Where both the sender and receiver TCPs are modified, together with the timestamp option, more accurate knowledge of the OTTs / OTT variations in both directions becomes possible (the two modified TCPs / modified monitor softwares can each pass their direction's OTT knowledge to the other); the modified TCP / modified monitor software can then exercise better control using OTT instead of RTT. For example, if the OTT of a sent segment indicates no congestion but the OTT of the return ACK indicates congestion, there is no need to decrease rates / "pause", even though the RTT as used in the foregoing RTT-based method would have timed out. The RTT-based modified TCP, when implemented at the sender only and used in conjunction with the timestamp option, will similarly place the sender in possession of the return ACKs' OTTs and/or OTT variations, enabling similarly better control. It is noted that where the modified TCP techniques are implemented at both ends of intercontinental submarine cables / satellite links / WAN links, they will increase the bandwidth utilisation and TCP throughput of the transmission media; in effect the physical bandwidth of the physical link is doubled.
Those skilled in the art may make various modifications and changes, which will nevertheless fall within the scope of the principles. UDP Prioritization It is noted that giving UDP priority over TCP ... etc., at each node within the Internet / Internet subset / WAN / LAN may still result in UDP drops even though the UDP traffic does not use more than 100% of the forwarding link bandwidth, because TCP packets already buffered ahead of the UDP packets in the node's input queue delay, or even cause drops of, the UDP packets. Remedies: 1. update/modify the router/switch software to place all UDP packets at the front of the node's input queue buffer (and/or prioritize UDP packets at the front of the output queue, ahead of TCP packets even when TCP packets are already queued there), pushing all TCP packets towards the end of the queue (so that TCP packets are dropped before any UDP packet is dropped at the input and/or output queue). 2. update the router/switch software to allow creation of a separate UDP input queue (which can be very small) and a TCP input queue, the UDP queue being scheduled into the output queue ahead of the TCP packets; and/or implement a high-priority UDP output queue and a lower-priority TCP output queue. Only UDP traffic by itself exceeding the physical bandwidth of the link could then cause UDP drops; this can cause the UDP sending sources to reduce their transmission speeds, i.e. their resolution qualities, and/or the router/switch nodes can be made to perform this resolution-reduction process on all UDP flows (for example, forwarding only alternate packets of a flow and discarding the other alternate UDP packets, or combining the data of two (or several) e.g. VoIP UDP packets into one packet of the same size but of lower resolution quality). The nodes can further ensure that TCP is not completely starved by guaranteeing minimum proportions of the forwarding link bandwidth to the various UDP / TCP flows, etc.
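Option 2 above (separate queues, UDP served at strict priority) can be sketched as below. This is a minimal illustration, not router firmware; the class name, packet representation and queue capacity are assumptions. The key property is the one the text states: TCP packets are dropped before any UDP packet is delayed behind them.

```python
from collections import deque

# Sketch of a strict-priority forwarder: a small high-priority UDP queue and
# a lower-priority TCP queue; UDP is always dequeued ahead of queued TCP.

class PriorityForwarder:
    def __init__(self, tcp_capacity=4):
        self.udp_q = deque()          # small, high-priority UDP input queue
        self.tcp_q = deque()          # lower-priority TCP input queue
        self.tcp_capacity = tcp_capacity

    def enqueue(self, pkt):
        if pkt["proto"] == "udp":
            self.udp_q.append(pkt)    # UDP accepted ahead of TCP backlog
            return True
        if len(self.tcp_q) < self.tcp_capacity:
            self.tcp_q.append(pkt)
            return True
        return False                  # under load, TCP is dropped first

    def dequeue(self):
        """Serve any queued UDP packet before any queued TCP packet."""
        if self.udp_q:
            return self.udp_q.popleft()
        if self.tcp_q:
            return self.tcp_q.popleft()
        return None
```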
Bandwidth Estimates Additional modifications include (and can be used in conjunction with the uncongested RTT / RTTest / base RTT / OTTest / base OTTest / receiver-based OTTest methods described above, those earlier methods buying time for the later techniques here, which may need some time before producing output results): 1. use methods such as pipechar, tracepath, pathchar, pchar, pathload, bprobe, cprobe, netest, pathChirp and similar techniques to determine each traversed node's forwarding link bandwidth, utilization, throughput, queue length, encountered delay ... etc., in order to "pause" for an appropriate interval derived from an algorithm devised for the purpose, and/or to decrease sending rates (according to some devised optimising algorithm) when certain conditions are encountered, for example when utilization of a forwarding link approaches 100%, so that no packet queue forms / no packet is buffered (that is, pre-empting buffer delays so that none of the traversed nodes introduces buffer delays at all). For example, when the utilization (which can be inclusive of all UDP, ICMP, TCP) on a particular link approaches e.g. 95%, the sender may simply stop increasing the window size for any further received ACKs, and only if/when a subsequent packet is dropped decrease it by e.g. only 10% (so that entirely new flows are not starved of bandwidth on the particular link), and/or perhaps subsequently not increase the window size for each ACK. It is not necessary to shrink the window size at all if packets are dropped due to physical transmission errors (i.e. not due to buffer-overflow congestion), that is if the link utilization on the particular links along the route is e.g. below 95% (or a specified percentage) [which solves the throughput-recovery problems of TCP over long-RTT high-bandwidth paths].
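The utilization-threshold window policy just described can be sketched as a pure decision function. This is illustrative only; the 95% threshold, 10% cut and function name are the text's examples and my assumptions, not a standard algorithm.

```python
# Sketch of the policy above: stop growing the window once bottleneck link
# utilization nears saturation, cut it by only 10% on a drop, and ignore
# drops entirely when utilization is low (treating them as physical-layer
# transmission errors rather than congestion losses).

def next_cwnd(cwnd, utilization, packet_dropped,
              util_threshold=0.95, drop_factor=0.90, growth=1):
    if packet_dropped:
        if utilization >= util_threshold:
            return max(1, int(cwnd * drop_factor))  # congestion loss: -10%
        return cwnd        # transmission-error loss: no decrease at all
    if utilization >= util_threshold:
        return cwnd        # near-saturated link: hold the window steady
    return cwnd + growth   # headroom available: keep growing
```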
This will more usefully encourage increased adoption of these simple coupled TCP modifications on the public Internet. New flows (UDP, ICMP, TCP), and/or unmodified TCP / RTP-over-UDP / UDP flows, will now always have at least a guaranteed no-starvation bandwidth of 5% in which to grow at all times, since the modified TCP / RTP-over-UDP may e.g. not increase its transmission speed at all once link utilization reaches e.g. 95%. And if/when the link subsequently drops packets, the modified TCP / RTP-over-UDP will decrease its window size / transmission speed by e.g. 10% (or pause for an interval x periodically before transmitting at the unrestricted speeds allowed by the sending source's immediate transmission media during a period y, such that e.g. x / (x + y) = 0.1, i.e. equivalent to a sliding-window / congestion-window size decrease / rate decrease of e.g. 10%). Pausing for the interval x, instead of decreasing the sliding window size / congestion window size / rates, gives the fastest possible clearing of congested buffers in the nodes, and helps keep buffer delays at the nodes along the route to a minimum. Buffer size requirements here cease to be a very relevant factor in the considerations at all. These techniques can conceivably keep all traffic within / not exceeding 100% of the available physical bandwidths at all times (so no buffering is needed to absorb sudden bursts).
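The x / (x + y) equivalence above can be made concrete with two small helpers (illustrative names, derived directly from the formula in the text): pausing for x out of every x + y time units gives up the same fraction of throughput as shrinking the window by x / (x + y), while letting queued buffers drain during each pause.

```python
# Sketch of the pause/rate-decrease equivalence: a periodic pause of x per
# cycle of length (x + y) reduces average throughput by x / (x + y).

def rate_reduction_from_pause(pause_x, active_y):
    """Fraction of throughput given up by pausing x out of every x + y."""
    return pause_x / (pause_x + active_y)

def equivalent_pause(active_y, reduction):
    """Pause interval x that achieves a given fractional rate reduction."""
    return active_y * reduction / (1.0 - reduction)
```

For example, a 100 ms pause per 900 ms of unrestricted sending is the 10% decrease the text mentions.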
For VoIP / multimedia (e.g. using RTP over UDP / UDP), or aggregated VoIP / multimedia traversing the same route / same route portions, on a link whose utilization starts to exceed e.g. 95% or even approaches 100%, the VoIP / multimedia source can now transmit at e.g. some percentage, e.g. half, of the resolution quality and wait until the growth of other traffic again brings link utilization up to e.g. 95% / 100%, then suddenly burst back to full-resolution-quality transmission and/or an additional higher resolution of e.g. 200% or more (with additional redundant erasure codings ... etc.), so as to cause immediate flash bursts and dropped buffer packets that trigger the other TCP flows (modified or not) to decrease their speeds (usually within the space of 1 second in existing RFC TCP implementations), and when the other flows, e.g. TCP, have now slowed down, to then immediately revert back to the original 100% transmission quality (or perhaps continue to hold on to as much of the remaining bandwidth as possible with transmissions at 200% resolution quality, depending on the link bandwidth / the bandwidth proportions used by the VoIP / multimedia / the buffer size at the node ... etc.), so as to ensure the minimum possible VoIP / multimedia buffer delays. Perhaps the VoIP / multimedia could even start out with a higher-resolution transmission quality (e.g. 200% of the normally required resolution, with redundant erasure codings ... etc.). This is useful for all flows because it ensures that as few node buffering periods as possible are encountered, for all flows. The router software can additionally be updated to allow authorised requests to drop packets of flows (e.g. 1 packet from each TCP flow, to signal the senders to decrease their speeds), and/or to do this upon detecting e.g. 95% / 100% link utilization.
The above method can be used in conjunction with existing RIP / BGP router table update packets, and/or similar techniques, to ensure minimal or no buffering delay at all nodes: the updated router software makes routing-table link-preference updates attributable to e.g. above 95% / 100% utilization of particular forwarding links ... and/or propagates this throughout the network, not only to neighbouring routers (though this will need to be enhanced to allow quicker, more frequent real-time updates). Another next-generation network design may be for routers to signal neighbouring routers of a particular forwarding link's e.g. 95% / 100% utilization (100% utilization indicating the imminent start of packet buffering) and/or other configuration details such as the links' raw bandwidths / queuing policies / buffer sizes ... etc., so that a neighbouring router does not increase its existing sending rates towards this router / or onto just this forwarding link, and/or decreases the flow rates / rate-shapes the flows traversing the reported router link by some percentages based on algorithms devised around the updated information, or even "pauses" for a corresponding interval x before resuming unrestricted sending rates for the period y (limited in reality only by the link bandwidth between the routers). Any packets of TCP flows needing to be buffered during the "slow down" / "pause" will amount only to as much as the window sizes at the time, and the RTP / UDP flows can likewise be buffered; it is conceivable that any TCP congestion-avoidance rate-limiting mechanism could now even possibly be dispensed with.
The router can also modify the advertised window size field in the ACKs returning to the sender's TCP source, setting it to zero for certain durations, or for certain durations periodically (causing a "pause" or "periodic pause"), or even modify/adjust the advertised window field value downward to a certain percentage of the sender TCP source's current estimated/derived effective window size (thereby effecting rate-limiting of the source's traffic). The switch/router in the Internet / Internet subset / WAN / LAN only needs to keep a table of all flows' source-destination addresses and/or ports together with their latest sequence number and/or ACK number fields (and/or the flows' forwarding rates along the link, the estimated/derived current effective window sizes per flow along the link ... etc.) to allow the router to generate advertised-window-size updates using "pure ACKs" and/or "piggybacked ACKs" and/or "replicated packets" ... etc. (for example, it notifies the source TCPs to "pause" via an advertised receiver window size of 0 for a certain period before reverting to the receiver window size value existing before the "pause", or reduces the rates via an advertised receiver window size whose value is set based on the derived/estimated current effective TCP window size). Neighbouring routers will reduce / shape the traffic of packets destined along the reported link of the notifying router, the IP addresses of packets known to be routed along the notifying router's reported link being ascertainable from the routing table entries, RIP / BGP updates, MIB exchanges ... etc. For example, a flow already paused periodically at the neighbouring router preceding the notifying router (its rate controlled by periodic "pauses") will now further increase the "pause" interval lengths of the affected flows and/or increase the number of "pauses" within the period.
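Rewriting the advertised window of a forwarded ACK in flight, as described above, also requires fixing the TCP checksum so the segment still verifies end to end. The sketch below is illustrative (not router firmware) and uses the standard incremental-update identity of RFC 1624 (~C' = ~C + ~m + m') over the changed 16-bit window word; the byte offsets are those of a plain 20-byte TCP header.

```python
import struct

# Sketch: set a new 16-bit advertised window in a TCP header and patch the
# checksum incrementally (RFC 1624) instead of recomputing it from scratch.

WINDOW_OFF, CSUM_OFF = 14, 16   # byte offsets within the TCP header

def rewrite_window(tcp_header: bytearray, new_window: int) -> None:
    old = struct.unpack_from("!H", tcp_header, WINDOW_OFF)[0]
    csum = struct.unpack_from("!H", tcp_header, CSUM_OFF)[0]
    acc = (~csum & 0xFFFF) + (~old & 0xFFFF) + new_window
    acc = (acc & 0xFFFF) + (acc >> 16)      # fold carries (ones' complement)
    acc = (acc & 0xFFFF) + (acc >> 16)
    struct.pack_into("!H", tcp_header, WINDOW_OFF, new_window)
    struct.pack_into("!H", tcp_header, CSUM_OFF, ~acc & 0xFFFF)
```

A router could use this to advertise a window of 0 for a "pause", then restore the previous value the same way.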
Periodic pauses may cease, or decrease in individual pause frequency / interval length, e.g. after some defined period derived from algorithms devised for the purpose, for example when the neighbouring routers are notified by router updates that the link utilizations have fallen back below a certain percentage, e.g. below 95%. The RED / ECN mechanism can be modified to provide this functionality; that is, instead of monitoring the buffered packets and selectively dropping packets / notifying the senders, RED / ECN can base its policies on link utilizations, e.g. acting when utilization approaches some percentage such as 95% ... etc. The earlier bottleneck link utilization estimation, available bottleneck bandwidth estimation, bottleneck throughput estimation and bottleneck link bandwidth capacity estimation techniques can additionally be incorporated into the "pause" / rate-decrease methods described above based on the uncongested RTT / RTTest / base RTT / receiver-based OTTest methods; here there will be plenty of time for those bottleneck estimation techniques to derive/estimate results of good enough accuracy to further improve the "pause" / rate-decrease methods described above based on the uncongested RTT / RTTest / base RTT / receiver-based OTTest methods. Several additional techniques to complement / provide topology / route configurations may include SNMP / RMON / IPMON / RIP / BGP ... etc. 2. Periodic probes can take the form of window-update probes (to ask for the receiver window size, even when the receiver has yet to announce a window size of 0) or similar probe packets ... or use actual data packets as periodic probes (when available for transmission) ...
etc., or UDP to the destination with an unused port number (to obtain a "destination port unreachable" message), and/or further timestamp options from all nodes. Or, similarly, TCP to the destination with an unused port number (the TCP packet can be a TCP SYN to the unused port number).
Varied Notes [Note: if the total of the paused intervals is p·I within e.g. 1 second, the effective congestion window ≈ (1 − p·I / 1 s) × current effective window size, i.e. present throughput per second ≈ (1 − p·I / 1 s) × (current effective window size / current RTT).] Time-critical applications detecting congestion can send bursts to cause packet drops, or the receiver detecting congestion from the timestamps can itself cause, or notify the sender to cause, bursts, perhaps in the form of conveniently larger probe packets. In addition, the RTTest technique over external Internet nodes can be improved using bandwidth estimation techniques jointly, e.g. receiver processing delay, raw bandwidth, available bandwidth, buffer size, buffer congestion level, link utilizations. The receiver-based OTTest does not need GPS synchronization to be deployed; it only needs the uncongested OTT or uncongested base OTT, or the variations of the known OTT from the known uncongested OTT. Sender- and/or receiver-based estimates of throughput and raw bandwidth can likewise yield the link utilizations.
Use timestamps (sender and echoer) so that the sender can factor out variations in the receiver's processing delay. The modified monitor software / modified TCP, when paused, can generate and optionally send immediately (despite the "pause") a pure ACK carrying no data payload corresponding to each newly arrived data segment with the ACK flag set (that is, for pure ACK or piggybacked ACK segments, ignoring normal data segments that carry no ACK at all) from the source host TCP that now needs to be buffered. All pure ACKs generated and sent immediately during this pause interval / extended pause interval can have their sequence number field value set to the same sequence number as that of the 1st buffered data segment minus 1 (which may be a normal data segment with or without the ACK flag set, or a pure ACK segment). If the newly arrived segments are pure ACKs, they need only be buffered, and a pure ACK generated/sent corresponding to each newly arrived buffered pure ACK; sending this generated pure ACK at that moment, ahead of the other buffered data segments, may cause the receiving TCP to receive a packet with a sequence number greater than its next expected sequence number, which will be the same as its last sent acknowledgment number. Once the generated pure ACKs are sent, the corresponding now-buffered pure ACKs can optionally be removed and discarded from the buffer, since there is no point in sending duplicate pure ACKs. Alternatively, a single pure ACK can be generated corresponding to the buffered segment with the highest acknowledgment number among all the packets buffered within this pause / extended pause interval period. The modified TCP / modified monitor software may optionally allow segments with URG / PSH flags, etc., to be sent immediately even during the "pause" / extended "pause".
The current rate can also be derived as: current rate = bytes transmitted since the sending time of the segment whose ACK timed out / the ACK wait time. An event list of entries containing sequence number, ACK timeout and bytes in the segment is maintained. Or set current rate = bytes transmitted since the segment's sending time / (this particular ACK-timeout segment's sending time − the sending time of the last unacknowledged segment in the list, where there is no later segment in the list with sending time = this ACK-timeout segment's sending time + the ACK timeout period); OR use the current rate based on the segments sent immediately before, within an ACK wait interval period. (It may also be possible to derive current rate = acknowledgments received, i.e. the total bytes corresponding to all acknowledged segments, within an RTT or ACK wait interval period.) A receiver-based implementation can distinguish between congestion loss and physical transmission error, and detects rates, OTT or base OTT, and congestion onset separately in either direction much more accurately. Better still, the sender receives the ACK back with the timestamp of when the receiver first received the packet, and/or of when the receiver sent the last packet (and/or ACK) back to the sender (e.g. IPMP). It is noted that throughput can also be derived as window × MSS / RTT bytes/second. Implementations of the modified TCP techniques for multicasting need hierarchical implementation / coordination in the router's multicast module. The monitor software can coordinate better once the sender and/or receiver identify each other's presence, e.g. through unique port number assignments; the monitor software can then switch to the appropriate mode / combination of operation modes. One may wish not to "pause" when sending/receiving over external nodes, but it is preferable if this inclusion of "pause" is preferred/allowed, such as when increasing adoption over the Internet reaches a vast majority (perhaps as a user-selectable option).
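The throughput identity noted above (throughput = window × MSS / RTT bytes/second) can be written as a one-line helper for concreteness; the function name is illustrative.

```python
# Sketch: steady-state TCP throughput. A window of `cwnd` segments of
# `mss` bytes each is delivered once per round-trip time.

def throughput_bytes_per_sec(cwnd_segments, mss_bytes, rtt_seconds):
    return cwnd_segments * mss_bytes / rtt_seconds
```

For instance, a 10-segment window of 1460-byte segments over a 200 ms RTT sustains about 73 kB/s.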
One can initially probe the available bandwidth and/or the raw bandwidth capacity of the route (corresponding to the bottleneck), then set the starting TCP window size such that e.g. 95% of the available bandwidth, or e.g. 95% of the capacity, is immediately used. The window size can then be increased much faster (for example * 1/CWND ... etc.) as long as RTT remains < the ACK wait interval. It is noted that the ACK timeout value (and/or the actual packet retransmission timeout value) can be derived dynamically, based on an algorithm devised for the purpose, from the actual return RTTs in real time, similar to the existing RTO estimation algorithm over historical RTTs. Under the RFCs, DUP ACKs must not be delayed; this is complied with here by sending the generated pure ACK immediately for each buffered ACK packet, or just for its highest ACK number.
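A dynamic ACK timeout "similar to the existing RTO estimation algorithm over historical RTTs" can be sketched in the style of the standard estimator (RFC 6298): a smoothed RTT plus a variance term, with the 1-second floor the RFCs stipulate. The class name and return shape are assumptions for illustration.

```python
# Sketch: RFC 6298-style timeout from live RTT samples.
# SRTT/RTTVAR use the standard gains (1/8 and 1/4); RTO = SRTT + 4*RTTVAR,
# floored at min_rto (1 s per the RFCs).

class RtoEstimator:
    def __init__(self, min_rto=1.0):
        self.srtt = None
        self.rttvar = None
        self.min_rto = min_rto

    def update(self, rtt):
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2.0   # first sample
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt
        return self.rto()

    def rto(self):
        return max(self.min_rto, self.srtt + 4.0 * self.rttvar)
```

The same machinery could drive the shorter ACK wait interval discussed in the text by lowering (or removing) the floor.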
To avoid the problem of route changes giving erroneous RTT estimates, hop-by-hop RTT estimation and bandwidth probing can be adopted. Using active networking technology for practical implementation, a discovery dialogue is performed between adjacent nodes including the routers. Note: under the RFCs, a TCP receiver should not generate more than one ACK for each incoming segment, other than to update the offered window as the receiving application consumes new data. One can reduce window sizes / increase the "pause" period depending on DIFF(RTT, uncongested RTT / RTTest). The rate-decrease percentage / "pause" interval lengths can be adjusted depending on the size of the buffer delay experienced along the route, e.g. OTT − OTTest (or known OTT − uncongested OTT), or RTT − RTTest (or RTT − known uncongested RTT). When the modified receiver TCP receives the pure ACKs generated by the modified sender TCP for ACK packets held in the sender's buffer while "paused" (indeed for any and all ACKs), the modified receiver may optionally/especially generate a 1-byte segment with the sequence number set to the last ACK number − 1, i.e. generate a return ACK, so that the modified sender TCP knows they were definitively received (in which case it may be necessary to ensure that each of the buffered packets gets an individually generated pure ACK, instead of a pure ACK for the largest sequence number only); the sender TCP can infer whether the generated 1-byte-data pure ACK was not returned by the receiver as a "replicated-packet ACK" (even though replicated packets are not passed to applications at the receiver) and then react accordingly (e.g. route congestion / congestion loss / transmission errors, or the resending can be reverted, in which case it may be desirable to send the generated pure 1-byte-data ACK again ...
etc.). The monitor software at both ends, or at the sender only or the receiver only, can acknowledge the ACKs (to remove the main cause of RTOs, i.e. lost ACKs; lost data segments usually get DUP-ACKed and fast-retransmitted) using 1 byte of data at the latest sequence number (a replicated packet), or the receiver's latest sequence number, or even the last remote ACK number − 1. Receiver-based: resend the ACKs if the ACKs are not confirmed as received back. Send the DUP ACKs (fast retransmit) so as to arrive before e.g. 1 second has elapsed since the original segment's sending time, to prevent an RTO causing TCP to re-enter slow start with CWND = 1. The receiver window size can be dynamically adjusted as a percentage of the estimated current maximum sender transmission window size (corresponding to the current state; this transmission window size can be assumed equivalent to the total packets in flight) during the preceding RTT interval. Future RFCs for TCP should have an additional acknowledgment field, an ACK-of-ACK (acknowledging the ACK control feedback); this closes the control loop (i.e. existing TCPs know nothing as to whether RTOs are due to loss of the data segment on the forward link or loss of its corresponding ACK on the return link) and improves both TCPs' knowledge of the event states. Or the monitor software can perform this acknowledgment of the ACK via a sequence number (replicated segments) ... etc. With monitor software at both ends, each end can coordinate the one-way transmission timestamps, in both directions, with the other. Receiver-based monitor software can derive the OWDs (one-way delays) from the external Internet node via the timestamp option requested at SYN connection establishment. Sender-based monitor software can estimate the OWD to the remote receiver via IPMP, NTP ..., and the OWD from receiver to sender by means of the timestamp option.
In these cases, with cooperating monitor software at both ends, the OWDs in both directions can be established; together with the ACK-acknowledgment loop, this allows distinguishing packet loss due to a packet drop in the sending direction from loss of the ACK in the return direction, and from physical transmission errors. The OWDs need the timestamps, or IPMP / ICMP probes / NTP ... etc., to be derived. With monitor software at both ends, simply timestamp the segment when it is received and when the acknowledgment of the segment's sequence number is returned (these 2 timestamp values, coupled with the sending monitor's record of each sequence number's sending time maintained in the event list, and the arrival time of the sequence number's ACK, provide all the OWDs, end-host processing delays, etc.). Known OWDs in both directions, e.g. on submarine cables and WAN links, and/or derivations/defaults from known timestamps and/or known end-host / router / switch processing latencies under congestive / non-congestive operating environment limits, will improve performance. A single ICMP packet with a ready list of send, receive and return timestamps gives the OWDs in both directions; in WAN / LAN / small Internet subsets it traverses the same routes as TCP / UDP in both directions. The RFCs for TCP / UDP should allow these timestamps. Regular ICMP probes can complement passive TCP RTT measurements. IPMP provides similar timestamping capability and crosses the same routes as the sent TCP segments, and can be used as probe packets sent with the same IP addresses as the TCP/IP addresses of the flows but with different port addresses.
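The four timestamps discussed above (segment sent, segment received, acknowledgment sent, acknowledgment received) split the round trip as sketched below. This is an illustrative helper, not IPMP itself; note that the forward/reverse OWDs are exact only with synchronized clocks, while their sum (RTT minus remote processing) is reliable regardless of clock offset.

```python
# Sketch: decompose a timestamped probe exchange into one-way delays (OWDs)
# and remote processing time, IPMP/NTP style.

def owd_split(t_send, t_remote_recv, t_remote_reply, t_reply_recv):
    forward = t_remote_recv - t_send            # forward-direction OWD
    processing = t_remote_reply - t_remote_recv  # remote processing delay
    reverse = t_reply_recv - t_remote_reply     # return-direction OWD
    rtt = forward + processing + reverse
    return forward, reverse, processing, rtt
```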
Where both ends implement the modified TCP / modified monitor software, the periodic probe packets may take the form of a separate TCP or UDP connection, or IPMPs, established between the two ends' modified TCP / monitor software with the same IP addresses as the flows' TCP/IP addresses but with different port addresses; and the modified TCP / monitor software of both ends can now include timestamps of the time when the segment with a given sequence number first arrives and/or the time when the segment with that sequence number is acknowledged and returned, allowing OWD measurements by both ends. Implementation of the TCP modifications to work over the external Internet Where either the source sender or the receiver (or both) resides on the external Internet, packet data communications between the source sender and receiver can be subject to packet drops due to congestion beyond their control, for example downloading HTTP / FTP web pages from external Internet sites. It is noted here that the methods extend to modifications/inventions that are applicable where either the source sender or receiver (or both) resides on the external Internet, but can also be applied where both reside within a proprietary Internet subset / WAN / LAN, as in the several methods described above in the body of the description.
The above congestion packet drops will trigger the RTO packet retransmission timeout and the accompanying return to "slow start" with CWND set to 1 segment size at the source sender's TCP. For the source sender TCP's transmission rate of CWND per RTT (the TCP congestion window size per round trip) to scale back to e.g. a CWND of 1K segment sizes will take approximately 10 exponential doublings of CWND from the initial "slow start" (2^10 = 1K); that is, the source sender will need to receive 10 consecutive successful uninterrupted ACKs from the receiver (without congestion drops), which with an RTT of 300 ms will take 10 × 300 ms = 3 seconds to scale back to a CWND of 1K × segment size. Once the CWND reaches the SSTHRESH value, the CWND is then incremented linearly per RTT instead of exponentially per ACK as during "slow start"; see RFC 2001, http://www.faqs.org/rfcs/rfc2001.html. It is the onset of the RTO packet retransmission timeout and the accompanying re-entry into "slow start" with CWND set to 1 segment, upon congestion packet drops, that causes most of the degradation in end-to-end transfer throughput. Thus it would be advantageous if the TCP of the source sender were modified to react more quickly, generating DUP ACKs to trigger fast retransmission in the remote source sender's TCP. With the DUP ACK fast retransmit / fast recovery algorithm now commonly implemented in most TCPs, the source sender TCP will now undergo the RTO packet retransmission timeout with the accompanying re-entry into "slow start" only under two scenarios: (A) the data packets sent from the source TCP at the sender to the receiver (an individual packet or a continuous block of packets) are all lost/dropped, never arriving; the receiver TCP therefore has no way of knowing whether these packets were actually sent, and cannot generate DUP ACKs for the expected-sequence-number packets that do not arrive.
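The recovery arithmetic above (2^10 = 1K doublings, 10 RTTs) can be checked with a small helper; the function name is illustrative, and it assumes the text's simplified model of one CWND doubling per RTT during slow start.

```python
import math

# Sketch: rounds needed for slow start to rebuild a window of W segments
# from CWND = 1, doubling once per RTT, and the wall-clock time that takes.

def slow_start_recovery_time(target_segments, rtt_seconds):
    rtts_needed = math.ceil(math.log2(target_segments))
    return rtts_needed, rtts_needed * rtt_seconds
```

With a target CWND of 1024 segments and a 300 ms RTT this reproduces the text's 10 round trips, about 3 seconds.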
It is noted that if any of the later packets of such a continuously sent block arrive even though some of the earlier packets were dropped, the receiver TCP will still be in a position to generate DUP ACKs to the source sender TCP to trigger fast retransmit / fast recovery, which merely halves the CWND instead, thereby avoiding the source sender TCP's RTO packet retransmission timeout event that would cause the source sender TCP to re-enter "slow start" with a CWND of 1 segment. It is noted that the existing RFCs stipulate a minimum default RTO timeout floor of 1 second under any circumstances; thus with fast retransmit / fast recovery triggered by the DUP ACKs, if the subsequent acknowledgments for these retransmitted packets get back to the source sender TCP within the RTO timeout of e.g. at least 1 second, the pending normal RTO packet retransmission timeout event will be bypassed.
(B) The acknowledgments (ACKs) generated by the receiver back to the sender's source TCP are lost/dropped, never arriving back at the source sender TCP, whereupon the source sender TCP will undergo the RTO timeout and re-enter "slow start" with a CWND of 1 segment size. Scenario (A) above can be prevented by modifying the sender source TCP so that, for example, if the acknowledgment of the data packet sent immediately after is not received back within e.g. 300 ms (or a user-input value, or an algorithmically derived value that can be based on RTTest(min) and/or OTTest(min) ... etc.; 300 ms is chosen here as an example because it is larger than the maximum delayed-acknowledgment period of 200 ms) of the immediately preceding sent data packet's acknowledgment having been received back, i.e. e.g. 300 ms + RTT or more has elapsed since the sending time of the data packet sent immediately after (so it can now be assumed fairly safely that the packet sent immediately after was lost/dropped, or that its acknowledgment from the receiver back to the sender's source TCP was lost/dropped), then [referred to hereafter as Algorithm A] (except where all sent data segments / data packets have already had their acknowledgments returned, i.e. the last sent largest valid sequence number = the last received largest valid acknowledgment number, in which case the sender TCP should simply carry on as normal, unaffected by the elapsed-interval event), the source sender TCP will immediately enter a "continuous pause" state, but may e.g. allow only one regular data packet and/or several pure ACK packet transmissions per e.g. 150 ms (or a user-input value, or an algorithmically derived value that can be based on RTTest(min) and/or OTTest(min) ... etc.)
elapsed during this "continuous pause" state, until an acknowledgment packet / regular data packet is then received back from the receiver TCP (signifying that the round-trip path is not totally congested, i.e. not every packet is being dropped in one direction or the other), whereupon the "continuous pause" ceases, immediately reverting to the same transmission rates / CWND size as before the initial elapsed 300 ms that triggered the "continuous pause". The parts of Algorithm A can be adapted differently in several different algorithm combinations: 1. instead of entering "continuous pause" at the initial elapsed 300 ms, the source sender TCP only reduces its CWND to x% (e.g. 95%, 90%, 50% ..., which can be user-input or based on some devised algorithm); and/or 2. instead of entering "continuous pause" at the initial elapsed 300 ms, the source sender TCP only "pauses" for a "pause interval", which can be user-input or derived from some devised algorithm (e.g. a 100 ms pause interval would be equivalent, under Step 1 above, to reducing CWND to 90%), without changing the CWND size; and/or 3. further to Steps 1 and 2 above, instead of entering "continuous pause" at the initial elapsed 300 ms, only immediately "pause" for a one-off "initial pause interval", which can be user-input or derived from some algorithm, e.g. 500 ms, to ensure that all the cumulative buffered-packet delays along the packet-switched router/switch nodes from the source sender TCP to the receiver TCP are purged by this amount of e.g. 500 ms, reducing the buffer latencies experienced by subsequently sent packets; and/or 4.
in addition to Algorithm A or Steps 1, 2 and 3 above, instead of the packet sending rate being one regular data packet and/or several pure ACK packets per e.g. 150 ms elapsed during the "continuous pause"/"pause interval"/"initial pause interval" as in Algorithm A, the sender source TCP now transmits at the rates allowed by the new CWND size during the "continuous pause"/"pause interval"/"initial pause interval", or does not transmit any packets at all; and/or 5. in addition to Algorithm A or Steps 1, 2, 3 or 4 above, when an acknowledgment packet is received back from the receiver TCP (meaning that the round-trip path is now not totally congested, i.e. not every packet is being dropped in either direction), whereupon the "continuous pause"/"pause interval"/"initial pause interval" ceases immediately, instead of reverting to the same transmission rates/CWND size as before the triggering e.g. initial 300 ms elapsed event, the sender source TCP here resumes transmission rates as limited by the new CWND size, where applicable.
Just one example of a useful combination of the above would be: an "initial pause" of e.g. 500 ms to purge the buffered delays, during which either no packets are sent at all or one regular data packet and/or several pure ACK packets are allowed per e.g. 150 ms during this e.g. 500 ms; followed by a "pause interval", again either sending no packets at all or allowing one regular data packet and/or several pure ACK packets per e.g. 50 ms during this "pause interval" of e.g. 100 ms; then, upon receiving an acknowledgment packet back from the receiver TCP, immediately ceasing the "pause interval" and reverting to the same transmission rates/CWND size as before the e.g. initial 300 ms elapsed event, or to the new transmission rate as limited by the new CWND size. It is noted that an appropriate choice of values, e.g. the 500 ms, will help other time-critical packets such as VoIP/multimedia not to experience severe buffer delays. The timestamp option can enable OTTest information to be used in the sender source TCP's decisions, and the SACK option, if used, will reduce the occurrences of DUP ACK events. The sender source TCP can be further modified as before to do away with the requirement to re-enter "slow start" under any circumstance, whether the packet loss is due to congestion drops or to physical transmission errors etc.; that is, the TCP can now be made to hold, for example, the transmission rate/CWND at e.g. 90% of the transmission rate/CWND prevailing before the RTO packet-retransmission timeout or the DUP ACK fast retransmit (or an equivalent "pause interval" of 100 ms, without changing CWND), instead of re-entering "slow start" on RTO, halving the transmission rate on fast retransmit, etc.
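The "continuous pause" behaviour of Algorithm A can be summarised as a small sender-side state machine. The following is a minimal sketch, assuming the example thresholds from the text (300 ms trigger, one probe packet per 150 ms); the class and method names are illustrative, not from the patent:

```python
class PauseSender:
    """Sketch of Algorithm A: enter 'continuous pause' when an ACK is
    overdue, allow one probe packet per gap, and revert the CWND/rate
    as soon as any ACK comes back from the receiver."""

    TRIGGER_S = 0.300    # e.g. 300 ms: ACK overdue => assume loss on path
    PROBE_GAP_S = 0.150  # e.g. 150 ms: one probe packet allowed per gap

    def __init__(self, cwnd_segments=10):
        self.cwnd = cwnd_segments
        self.saved_cwnd = cwnd_segments
        self.paused = False
        self.unacked = {}          # seq -> send timestamp
        self.last_probe_at = 0.0

    def on_send(self, seq, now):
        self.unacked[seq] = now

    def on_ack(self, seq, now):
        self.unacked.pop(seq, None)
        if self.paused:            # path not totally congested any more:
            self.paused = False    # cease pause, revert to previous CWND
            self.cwnd = self.saved_cwnd

    def may_transmit(self, now):
        """Return how many segments may be sent right now."""
        overdue = any(now - t > self.TRIGGER_S for t in self.unacked.values())
        if overdue and not self.paused:
            self.saved_cwnd = self.cwnd   # remember rate to restore later
            self.paused = True
        if not self.paused:
            return self.cwnd
        # during 'continuous pause': one regular data packet per e.g. 150 ms
        if now - self.last_probe_at >= self.PROBE_GAP_S:
            self.last_probe_at = now
            return 1
        return 0
```

Variants 1 to 5 of the text would change only what `may_transmit` returns during the pause (a reduced CWND, zero, etc.) and what `on_ack` restores.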
This would also apply to any of the methods/sub-component methods described above in the body of the description. Here the further-modified TCP can react much faster to congestion drops, for example by incorporating an "initial pause interval" to purge the cumulative buffered delays, than the default minimum RTO floor of one second in the existing RFCs. Algorithm A above, by itself and/or in its various modified combinations, may be further modified/adapted yet still fall within the principles described herein. As one example among many, where the modification is implemented within modified monitor software/a modified TCP proxy/a modified IP sender etc. instead of directly within the TCP stack itself, the modified monitor software/modified TCP proxy/modified IP sender etc. can keep a copy of the current window's worth of transmitted data segments/data packets and itself perform the 3-DUP-ACK fast retransmit and the actual RTO packet retransmission (the TCP itself now simply never performing any fast retransmit or RTO retransmission whatsoever): for example, when the modified monitor software/modified TCP proxy/modified IP sender etc. notices that the ACK for a particular sent data segment/data packet has not been returned and that the TCP will soon undergo the RTO timeout, it then "spoofs" the acknowledgment for that particular "soon to time out" data segment/data packet towards the TCP and itself performs the actual data segment/data packet retransmission here; and upon receiving fast-retransmit-triggering DUP ACKs it does not forward these to the TCP but instead performs the fast retransmit here (in this way this modified endpoint TCP will never reduce its CWND/transmission rate, which can then stay at the maximum-TCP-window-size transmission rate; however, the "pause" periods here will regulate the sender's actual effective transmission rates, i.e. limit the portion of time available within each second for unrestricted TCP transmissions). Very often the modified TCP is installed on the user's local host PC only, and remote sender source TCPs such as http WEB servers/ftp servers/streaming media servers have yet to implement the modified TCP above. The modified TCP of the local host PC will therefore need to act as a receiver-based modified TCP, i.e. to remotely influence the remote sender source TCP. Some of the ways in which the local host TCP can influence the remote sender source TCP's congestion control/avoidance are by sending receiver window size updates to the remote sender source TCP, and by sending DUP ACKs to the remote sender source TCP to trigger fast retransmit/fast recovery so as to avoid RTO packet-retransmission timeouts at the remote sender source TCP, etc.
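The receiver-based approach depends on hand-crafting raw TCP segments: pure ACKs carrying a chosen ACK number and advertised window. The following is a sketch of building such a 20-byte TCP header with Python's standard struct module (checksum per the RFC 793 IPv4 pseudo-header); the addresses, ports and numbers in the usage are illustrative only:

```python
import struct

def tcp_checksum(src_ip, dst_ip, tcp_bytes):
    # One's-complement checksum over IPv4 pseudo-header + TCP segment.
    pseudo = (bytes(map(int, src_ip.split('.'))) +
              bytes(map(int, dst_ip.split('.'))) +
              struct.pack('!BBH', 0, 6, len(tcp_bytes)))
    data = pseudo + tcp_bytes
    if len(data) % 2:
        data += b'\x00'
    s = sum(struct.unpack('!%dH' % (len(data) // 2), data))
    while s >> 16:
        s = (s & 0xffff) + (s >> 16)
    return ~s & 0xffff

def make_pure_ack(src_ip, dst_ip, sport, dport, seq, ack, window):
    """Build a 20-byte pure-ACK TCP header advertising `window` bytes.
    Sending e.g. 3 copies with the same `ack` number yields the DUP
    ACKs the text uses to trigger fast retransmit; a small `window`
    (e.g. 1800) is the 'sender pause + 1 packet' advertisement."""
    ACK_FLAG = 0x10
    hdr = struct.pack('!HHIIBBHHH', sport, dport, seq, ack,
                      (5 << 4), ACK_FLAG, window, 0, 0)
    csum = tcp_checksum(src_ip, dst_ip, hdr)
    return hdr[:16] + struct.pack('!H', csum) + hdr[18:]
```

Injecting such segments onto the wire would additionally need a raw socket and an IP header, which are omitted here.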
Here is a sketch for a very simplified receiver-based modified TCP, implemented in the monitor software (which can be further modified/adapted, and can also be implemented directly within the TCP itself instead of in monitor software): 1. whenever a TCP packet from the remote sender is received, check the source address and port against the per-flow TCP table, creating a new per-flow TCP TCB if absent, with several parameters (there is no need to keep the earlier per-packet time/sequence-number table entries for all intercepted packets): the local system time of the last packet received (from the remote sender, whether pure ACK or regular data packet); the advertised window size of the last receiver packet (sent by the local MSTCP to the remote sender); the ACK number of the last receiver packet, i.e. the next sequence number expected from the remote sender (sent by the local MSTCP to the remote sender; this requires per-flow inspection of incoming and outgoing packets, and it must now be possible to immediately remove the per-flow TCP table entry on FIN/FIN ACK rather than only after the usual 120 seconds of inactivity); etc. (Optional) once the SYN/SYN ACK handshake completes, immediately get the remote sender's CWND set to e.g. 8K. This is preferably done by e.g. 15 immediate DUP ACKs with e.g. ACK number = remote sender's initial sequence number + 1; divisional ACKs may not work well, since some TCPs increment CWND only by the number of bytes acknowledged, and optimistic ACK behaviour may not be identical across all TCPs.
Note: an alternative would be to wait for the first data packet received from the remote sender and then generate e.g. 15 DUP ACKs with the acknowledgment number set to just the sequence number received from the remote sender (at the expense of only 1 byte of unnecessary retransmission), or to use divisional ACKs. TCP uses a three-way handshake procedure to establish a connection. A connection is established by the initiating side sending a segment with the synchronise (SYN) flag set and the proposed initial sequence number in the sequence number field (seq = X). The remote then returns a segment with both the SYN and ACK flags set, with the sequence number field set to its own assigned value for the reverse direction (seq = Y) and an acknowledgment field of X + 1 (ACK = X + 1). On receipt of this, the initiating side makes a note of Y and returns a segment with only the ACK flag set and an acknowledgment field of Y + 1. 2. if e.g. 300 ms expires without the next packet being received, then: - the software only needs to detect the next expected sequence number not arriving within the e.g. 300 ms space since the last previously received packet, to generate 3 DUP ACKs with the ACK number set to the non-arriving next expected sequence number, at the same time carrying a window update of 1800 bytes within the 3 DUP ACKs (equivalent to "sender pause" + 1 packet); keep sending the same 3 DUP ACKs with the 1800-byte window update, incremented by 1800 bytes each time, if e.g. another 100 ms passes without receiving any pure ACK or regular data packet; but if some regular data packet is received, then send the usual single window update (without the 3 DUP ACKs) restoring the previous window size (with the ACK number field set to e.g. the last recorded ACK number sent from the local MSTCP to the remote, or that value - 1), repeated every 100 ms until any ACK or regular data packet is again received from the remote; then repeat the above, e.g. the 300 ms expiry detection loop at the start of step 2 above. It is noted here that 3 DUP ACKs could also be sent in place of the single window update packet, but after a further 2 x 100 ms has passed the single window-update ACK packets would have to be made up to a total of 3 DUP ACKs; of course an alternative here could also be any window update packet whatsoever, e.g. a DUP-sequence-number window update packet, etc. (This ensures that scenario A, which causes the pending remote MSTCP RTO timeout with re-entry into slow start, is avoided, replacing the pending RTO with the DUP ACK fast retransmit/fast recovery event. Where no packet was sent at all, it really does not matter that 3 DUP ACKs are sent unnecessarily with the ACK number equal to the next expected sequence number. Scenario B is taken care of by keeping on sending the same 3 DUP ACKs every 100 ms until a next ACK or remote data packet is received (i.e. the bottleneck is now not dropping every packet the remote sends), after which the sending of the single window-size-restoring packet is maintained every 100 ms until a next packet is received (i.e. even if, in the worst case, all the window restoration packets are dropped, 300 ms later the process will repeat again, ensuring the window "pause" followed by window restoration attempts).) Note: the advertised receiver window size is successively incremented because the remote may already have used up the previously available advertised window size, its sent packets being dropped and never reaching the receiver. By ensuring that the remote never re-enters slow start, i.e. CWND = 1, due to a normal RTO, very large reductions in WEB page download times have been achieved. It is noted that fast retransmit does not cause slow start: the 3 DUP ACKs only halve the remote's existing CWND.
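The expiry-detection loop of step 2 can be sketched as two event handlers over the per-flow TCB. This is a sketch only, assuming the example 300 ms/100 ms thresholds; `send_dup_acks` and `send_window_update` are illustrative stubs standing in for raw-packet injection towards the remote sender:

```python
def monitor_flow(flow, now, send_dup_acks):
    """One polling step of the e.g. 300 ms expiry detection loop.
    `flow` is the per-flow TCB dict."""
    if not flow['pausing']:
        if now - flow['last_rx_time'] > 0.300:      # next packet overdue
            flow['pausing'] = True
            flow['window'] = 1800                   # 'sender pause' + 1 packet
            flow['last_tx_time'] = now
            send_dup_acks(flow['next_expected_seq'], flow['window'], count=3)
    else:
        if now - flow['last_tx_time'] >= 0.100:     # still nothing heard
            flow['window'] += 1800                  # grow the advertisement
            flow['last_tx_time'] = now
            send_dup_acks(flow['next_expected_seq'], flow['window'], count=3)

def on_remote_packet(flow, now, send_window_update):
    """Any pure ACK / regular data packet from the remote ends the pause
    and restores the previously advertised receiver window size."""
    flow['last_rx_time'] = now
    if flow['pausing']:
        flow['pausing'] = False
        send_window_update(flow['prev_advertised_window'])
```

In the text the restoring window update is itself repeated every 100 ms until the remote answers; that repetition loop is omitted here for brevity.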
The above algorithm can optionally be simplified so as not to need to send receiver window size updates to "pause" the other endpoint TCP, as follows: 1. whenever a TCP packet from the remote sender is received, check the source address and port against the per-flow TCP table, creating a new per-flow TCP TCB if absent, with several parameters (there is no need to keep the earlier per-packet time/sequence-number table entries for all intercepted packets): the local system time of the last packet received (from the remote sender, whether pure ACK or regular data packet); the ACK number of the last receiver packet, i.e. the next sequence number expected from the remote sender (sent by the local MSTCP to the remote sender; this requires per-flow inspection of incoming and outgoing packets, and it must now be possible to immediately remove the per-flow TCP table entry on FIN/FIN ACK rather than only after the usual 120 seconds of inactivity); etc. (Optional) once the SYN/SYN ACK handshake completes, immediately get the remote sender's CWND set to e.g. 8K. This is preferably done by e.g. 15 immediate DUP ACKs with ACK number = the remote sender's initial sequence number + 1; divisional ACKs may not work well, since some TCPs increment CWND only by the number of bytes acknowledged, and optimistic ACK behaviour may not be identical across all TCPs. Note: an alternative would be to wait for the first data packet received from the remote sender and then generate e.g. 15 DUP ACKs with the ACK number set to just the sequence number received from the remote sender (at the expense of only 1 byte of unnecessary retransmission), or to use divisional ACKs. TCP uses a three-way handshake procedure to establish a connection. A connection is established by the initiating side sending a segment with the synchronise (SYN) flag set and the proposed initial sequence number in the sequence number field (seq = X).
The remote then returns a segment with both the SYN and ACK flags set, with the sequence number field set to its own assigned value for the reverse direction (seq = Y) and an acknowledgment field of X + 1 (ACK = X + 1). On receipt of this, the initiating side makes a note of Y and returns a segment with only the ACK flag set and an acknowledgment field of Y + 1. 2. if e.g. 300 ms expires without the next packet being received, then: - the software only needs to detect the next expected sequence number not arriving within the space of e.g. 300 ms since the last previously received packet, to generate 3 DUP ACKs with the ACK number set to the non-arriving next expected sequence number; - keep sending the same 3 DUP ACKs if e.g. another 100 ms elapses without receiving any pure ACK or regular data packet, but if any ACK or regular data packet is received at all, then repeat the above, e.g. the 300 ms expiry detection loop at the start of step 2 above. (This ensures that scenario A, which causes the pending remote MSTCP RTO timeout with re-entry into slow start, is avoided, replacing the pending RTO with the DUP ACK fast retransmit/fast recovery event. Where no packet was sent at all, it really does not matter that 3 DUP ACKs are sent unnecessarily with the ACK number equal to the next expected sequence number.)
Scenario B is taken care of by keeping on sending the same 3 DUP ACKs every 100 ms until a next ACK or remote data packet is received (i.e. the bottleneck is now not dropping every packet the remote sends); after which the sending of the single window-size-restoring packet is maintained every 100 ms until any next packet is received (i.e. even if, in the worst case, all the window restoration packets are dropped, 300 ms later the process will repeat again, ensuring the "window pause" followed by window restoration attempts). The very simplified algorithm above was derived from these several other similar algorithms here: 1. the receiver-based objective is to make the remote sender source TCP, which has not implemented the modifications, behave as a "mirror image" of the sender-based scheme as far as possible (though there are some slight differences that need to be worked around, e.g. the receiver-based side has no way of knowing whether the sender source TCP has actually transmitted the next expected-sequence-number data segment at all, etc.): the sender-based scheme "pauses" when the ACK of a regular data packet is late, but allows one regular data packet per pause interval to be sent as a probe; when the MSTCP retransmission timeout occurs (detected by a sequence number <= the last recorded sent sequence number), it then "spoofs" the ACKs to the MSTCP ahead of the RTO timeout to bring CWND back up to the previous level before the RTO. For now a simplified basic version is obtained first, to be subsequently improved. 2. the regular-data-packet probe method is straightforward enough, using the main send-time/sequence-number event list and the retransmission event list. The timestamp option needs to be ensured as negotiated during the SYN/SYN ACK, by modifying intercepted SYN/SYN ACK packets and/or PC registry settings. 3. when the arriving OTTest > the current recorded OTTest(min) + e.g. 300 ms, this indicates congestion buffer delays (OTTest(min) being our latest best uncongested OTT estimate from the remote sender to us) → send a window update of 1800 bytes, to allow one regular 1500-byte Ethernet packet and also several small pure ACKs to be received. "Pending" retransmissions by the remote sender are detected whenever an arriving sequence number is > the next expected sequence number AND 300 ms has now elapsed without the missing gap sequence-number packet being received (i.e. it can now safely be assumed that the gap packet has been lost, and the remote sender will otherwise have to retransmit with slow start pending expiry of the RFC minimum RTO floor of one second) → but the MSTCP will by itself be ready to generate 3 DUP ACKs upon receiving 3 out-of-order-sequence packets, causing the remote to fast retransmit without re-entering slow start (if the remote sender happens to have only 2 out-of-order sequence numbers to transmit and nothing more, this should not break things, since the remote can simply be allowed to slow start, the remote not being sending much at this time) →
the software only needs to detect the next expected sequence number not arriving within the e.g. 300 ms space since the last previously received packet, to generate 3 DUP ACKs with the ACK number set to the non-arriving next expected sequence number. Here is an algorithm (a sample only) for the receiver-based method. (It is noted that SACK may usefully reduce the occurrences of DUP ACKs; divisional ACKs, DUP ACKs and optimistic ACKs are useful for restoring the remote sending rates, similar to the sender-based spoofing of ACKs; see http://www-2.cs.cmu.edu/~kgao/course/network.pdf and Google the search term "ACK spoofing".) 1. the user enters the sub-networks; only TCP flows to/from the specified sub-networks are monitored. 2. TCP flows with an external source/destination are monitored differently. 2.1 External source (i.e. the custom TCP acts as a receiver-based flow controller): select the timestamp option for these flows during connection establishment (the SYN packet can be modified, or the PC registry may need to be set so that all the flows in paragraphs 1 and 2 above are stamped with the timestamp; Windows Server 2003 only allows the timestamp option if initiated by the remote TCP); check this TCP's incoming packets for the remote sender's TSVal, recording (receiver's present system time - TSVal) as OTTest(max) and also OTTest(min) for the first packet received. OTTest here symbolises the one-way-trip-time estimate, the maximum or minimum OTT observed so far. OTTest(max) and OTTest(min) are updated from each subsequently received packet.
If (the arriving packet's OTTest less OTTest(min)) > e.g. 100 ms (a user input parameter), then the remote sender must "pause": the custom TCP generates a window size advertisement packet, with a 1-byte garbage data segment (or no data), advertising e.g. 50 bytes (not necessarily 0, so as still to allow the remote sender TCP a reply/pure ACK), with the sequence number set to the receiver's last sent sequence number, or to the last received ACK number - 1 (in case the receiver has not sent any data segment to the remote sender at all, so that there is no last receiver sent sequence number). The receiver keeps sending the same generated window advertisement packet (though the sequence number or the last received ACK number - 1 may have changed) until a reply confirmation to one of these replicated window update packets is received, meaning that at least one of these window update packets reached the sender and its reply confirmation got back (either could be lost in either direction), or until OTTest - OTTest(min) < e.g. 100 ms (do not stop the "pause" until there is no congestion). The "pause" can also be stopped by any other packet, e.g. a regular data packet, arriving within OTTest(min) + 100 ms. Whereupon the receiver sends the same window update packet but with the window size field set to the value from immediately before the "pause" (this value having been recorded before advertising the e.g. 50 bytes).
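The timestamp-based one-way-delay check of 2.1 can be sketched as follows. This is a sketch only: `tsval` stands for the remote sender's TSVal, the 100 ms pause threshold is the example user-input parameter from the text, clock units are seconds, and the two clocks need not be synchronised, since only the difference against the observed minimum matters:

```python
class OttMonitor:
    """Track OTTest(min)/OTTest(max) from received timestamps and flag
    when the remote sender should be 'paused'."""

    PAUSE_THRESHOLD_S = 0.100   # e.g. 100 ms user-input parameter

    def __init__(self):
        self.ott_min = None
        self.ott_max = None

    def on_packet(self, tsval, local_now):
        """Update the OTT estimates from one received packet; return
        True if buffering delay exceeds the pause threshold."""
        ott = local_now - tsval          # OTTest for this packet
        if self.ott_min is None:         # first packet seen
            self.ott_min = self.ott_max = ott
        else:
            self.ott_min = min(self.ott_min, ott)
            self.ott_max = max(self.ott_max, ott)
        return (ott - self.ott_min) > self.PAUSE_THRESHOLD_S
```

When `on_packet` returns True, the monitor would emit the small (e.g. 50-byte) window advertisement described above; when it returns False again, the previous window size is restored.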
2.2 Remote destination (i.e. the custom TCP acts sender-based): the timestamp option is not necessary, but it is useful for knowing the one-way return delay, to better determine whether an RTO timeout was caused by reverse-path congestion. When the MSTCP originates packets with a sequence number lower than the last sent sequence number (packet-drop retransmissions), the MSTCP is about to re-enter slow start; the custom TCP will now spoof ACKs back to the MSTCP for each packet the MSTCP originates, for a period of e.g. 100 ms. This causes the congestion window to ramp quickly back up towards e.g. the maximum TCP window size. Subsequent fast retransmits can be triggered via the receiver's 3 received DUP ACKs (after which the custom TCP may again spoof the return ACKs).
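The retransmission-detection and ACK-spoofing idea of 2.2 can be sketched as a single outgoing-packet hook. This is a sketch under assumptions: the `state` record and the `spoof_ack_to_mstcp` injection callback are illustrative stand-ins for whatever interception mechanism the monitor software uses, and the 100 ms spoofing period is the example value from the text:

```python
def handle_outgoing(state, seq, now, spoof_ack_to_mstcp):
    """Inspect a packet the local MSTCP is sending. A sequence number
    at or below the last one sent means a retransmission, i.e. the
    MSTCP has hit (or is about to hit) RTO/slow start; respond by
    spoofing ACKs back to it for e.g. 100 ms to restore its CWND."""
    if seq <= state['last_sent_seq']:           # retransmission detected
        state['spoof_until'] = now + 0.100
    else:
        state['last_sent_seq'] = seq
    if now < state['spoof_until']:
        # acknowledge this very packet so CWND ramps straight back up
        spoof_ack_to_mstcp(ack_number=seq + state['seg_len'])
```

Note the spoofed ACK covers only the bytes of the packet just seen; a fuller implementation would also have to suppress the MSTCP's own handling of genuine later ACKs for the same range.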
Our algorithm: 1. whenever a TCP packet is received, check the source address and port against the per-flow TCP table, creating a new per-flow TCP TCB if absent, with several parameters (there is no need to keep the earlier per-packet time/sequence-number table entries for all intercepted packets): - the local system time of the last packet received (pure ACK or regular data packet); - the advertised window size of the last receiver packet; - the ACK number of the last receiver packet, i.e. the next expected sequence number (requires per-flow inspection of incoming and outgoing packets; it must now be possible to immediately remove the per-flow TCP table entry on FIN/FIN ACK rather than just waiting 120 seconds). 2. if e.g. 300 ms expires without the next packet being received, then: - the software only needs to detect the next expected sequence number not arriving within the 300 ms space since the last previously received packet, to generate 3 DUP ACKs with the acknowledgment number set to the non-arriving next expected sequence number, at the same time carrying a window update of 1800 bytes within the 3 DUP ACKs (equivalent to the sender "pause" + 1 packet); here the 3 DUP ACKs should be expected to be acknowledged back by the remote; keep sending the same 3 DUP ACKs with the 1800-byte window update, incremented by 1800 bytes each time, if e.g. another 100 ms passes without a return ACK being received; but if any return ACK or any regular data packet is received (regardless of the OTT time), then send the 3-DUP-ACK window update that restores the previous window size. - (This ensures that scenario A, which causes the pending remote MSTCP RTO timeout with re-entry into slow start, is avoided, replacing the pending RTO with the DUP ACK fast retransmit/recovery event. Where no packet was sent at all, it really does not matter if the 3 DUP ACKs are sent unnecessarily with the ACK number = the next expected sequence number. In scenario B, care is taken to keep sending the same 3 DUP ACKs every 100 ms until an ACK acknowledgment or a next regular data packet is received (i.e. the bottleneck is now not dropping every packet the remote sends), after which the sending of the 3 DUP ACKs restoring the advertised window size is maintained every 100 ms until the ACK acknowledgment has been received.)
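The per-flow TCB table of step 1 can be sketched as a plain dictionary keyed by the remote (address, port) pair, created on first sight of a flow and removed immediately on FIN/FIN ACK instead of after a 120-second idle timeout. The field names here are illustrative, not from the patent:

```python
flows = {}   # (src_addr, src_port) -> per-flow TCB dict

def on_intercepted_packet(src_addr, src_port, is_fin, now,
                          advertised_window, next_expected_seq):
    """Maintain the per-flow TCB for one intercepted packet."""
    key = (src_addr, src_port)
    if is_fin:
        flows.pop(key, None)        # immediate FIN/FIN ACK removal
        return None
    tcb = flows.get(key)
    if tcb is None:                 # new flow: create its TCB
        tcb = flows[key] = {'pausing': False, 'prev_window': 0}
    # no per-packet history is kept: only the latest values are recorded
    tcb['last_rx_time'] = now
    tcb['prev_window'] = advertised_window
    tcb['next_expected_seq'] = next_expected_seq
    return tcb
```

A real interceptor would fill `advertised_window` and `next_expected_seq` from the outgoing local MSTCP packets, as the text describes, rather than receive them as arguments.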
As an alternative to sending the 3 DUP ACKs for the next-expected-sequence-number segment, the ACK number field in the 3 DUP ACKs can instead be set to the next expected sequence number - 1 (at the expense of only 1 additional retransmitted byte), in which case it is definitely necessary to set the sequence number field using the rotating next expected sequence number - 100, -99, -98 ... -1. See http://www.cs.rutgers.edu/~muthu/wtcp.pdf, where it is suggested that TCP will in this case transmit starting from the smallest unacknowledged packet or the first unsent packet in the current congestion window. - Where closer conformance to the specification is desired, the software can still remain "passive", not altering any received or sent packet; the remote MSTCP now simply never re-enters slow start on RTO. - For a standalone PC software version, no probe or timestamp feature is needed at all (paragraph 2): the window updates can simply be repeated every 100 ms (instead of every 3 x OTTest(min) as in paragraph 4) until any pure ACK or regular data packet is received (regardless of reception time). Here, when a flow suffers packet drops, it is known that the MSTCPs of the other flows crossing the same bottleneck where the packets are dropped will hit RTO at about the same time as our own MSTCP → the remote sender's CWND can be safely restored. 1. the objective is to make the remote behave like a "mirror image" of the sender-based scheme as far as possible: the sender-based scheme "pauses" when the ACK of a regular data packet is late, but allows one regular data packet per pause interval to be sent as a probe; when the MSTCP retransmission timeout occurs (detected by a sequence number <= the last recorded sent sequence number), it then "spoofs" the ACKs to the MSTCP ahead of the RTO timeout to bring CWND back up to the previous level before the RTO. For now a basic mirrored, simplified receiver-based version should be obtained first, to be subsequently improved (e.g. the SACK gap-packet feature may be useful). 2. the regular-data-packet probe method is straightforward enough, using the main send-time/sequence-number event list and the retransmission event list. The timestamp option needs to be ensured as negotiated during the SYN/SYN ACK, by modifying intercepted SYN/SYN ACK packets and/or PC registry settings. 3. [No longer required in the simplified algorithm:] when the arriving OTTest > the current recorded OTTest(min) + 300 ms, this indicates congestion buffer delays (OTTest(min) being our latest best uncongested OTT estimate from the remote sender to us) → send a window update of 1800 bytes, to allow one regular 1500-byte Ethernet packet and also several small pure ACKs to be received. 4. [No longer required in the simplified algorithm:] keep sending the same 1800-byte window update, incremented by 1800 bytes, each time an OTTest(min) elapses without a regular data packet or pure ACK arriving whose arrival OTTest is within the current recorded OTTest(min) + 300 ms (so that for every OTTest(min) elapsed, the remote can send a single new regular data packet as a probe).
If at any time a packet's arrival OTTest is <= the current recorded OTTest(min) + 300 ms, then immediately send the window update restoring the previous receiver window size, i.e. the remote now resumes the previous regular sending rate.] (NOTE: this attempts to prevent packet drops by regulating the rates so that the remote never needs to slow start again, but over the external Internet it does not really work well, as it is hard to know the OTTest just before packets are dropped; hence paragraph 4 above should be replaced by the following paragraph 4, which now simply concentrates on restoring the remote sending rates as fast as possible in the event of packet loss, i.e. it no longer cares if dropped packets cause slow start at the remote IF it can restore the remote sending rates immediately, similar to the sender-based "spoofing" upon detecting the retransmitted packet.) 4. "pending" retransmissions by the remote sender are detected by the software whenever an arriving sequence number is > the next expected sequence number and 300 ms has now elapsed without the absent/gap sequence-number packet being received (i.e. it can now safely be assumed that the gap packet has been lost, and the remote sender will otherwise have to retransmit with slow start pending expiry of the RFC minimum RTO floor of one second) → but the MSTCP will by itself be ready to generate 3 DUP ACKs upon receiving 3 out-of-order-sequence-number packets, causing the remote to fast retransmit without re-entering slow start (if the remote sender happens to have only 2 out-of-order sequence numbers to transmit and nothing more, this will not break things, since the remote can simply be allowed to slow start, the remote not being sending much at this time) →
the software only needs to detect the next expected sequence number not arriving within the 300 ms space since the last previously received packet, to generate 3 DUP ACKs with the ACK number set to the non-arriving next expected sequence number, at the same time carrying the window update of 1800 bytes within the 3 DUP ACKs (equivalent to the sender "pause" + 1 packet); the 3 DUP ACKs should be expected to be acknowledged back by the remote; keep sending the same 3 DUP ACKs with the 1800-byte window update, incremented by 1800 bytes each time, if e.g. 3 x OTTest(min) passes without a return ACK being received; but if any return ACK or any regular data packet is received at all (regardless of the OTT time), then send the 3-DUP-ACK window update that restores the previous window size. (Here the packet drop is merely detected early in order to update the receiver window size, equivalent to the sender-based "pause" + 1 packet.) 5. the actual DUP ACKs that cause the remote to fast retransmit are all handled by the MSTCP itself. The software only needs to detect 2 aggregate DUP ACKs intercepted from the MSTCP (3 in total, including the preceding regular acknowledgment) to then immediately restore the remote's CWND using optimistic ACK/divisional ACK/DUP ACK techniques, see http://arstechnica.com/reviews/2q00/networking/networking-3.html and http://www.usenix.org/events/usits99/summaries/ (this is similar to the receiver-based spoofing of ACKs towards the MSTCP, sending 2 additional DUP ACKs).
NOTE: in scenario B, care is taken to keep sending the same 3 DUP ACKs every 100 ms until an ACK acknowledgment or a next regular data packet is received (i.e. the bottleneck is now not dropping every packet the remote sends); after which the sending of the 3 DUP ACKs restoring the advertised window size is maintained every 100 ms until the ACK acknowledgment is received. In this case: - the MSTCP should always acknowledge any out-of-order ACK (i.e. an ACK acknowledging segments yet to be sent); otherwise the sequence number field would need to be included in the 3 DUP ACKs, with the ACK number fields all set to the same next expected sequence number (note: a DUP-sequence-number packet always gets acknowledged per the RFCs). - it may be desirable to use the previously analysed rotating method, using 100 prior sequence number fields in the DUP ACKs (i.e. "recorded" next expected ACK - 100 onwards), with the ACK number fields all set to the same next expected sequence number, so that the DUP ACKs will now each have a different sequence number field set to one of the recorded next expected sequence number - 100 onwards (no two DUP ACKs will have the same Seq number). NOTE: also, 3 presumed DUP ACKs for a segment that may not even have been sent do not necessarily trigger the remote MSTCP into halving CWND and setting SSTHRESH to 1/2 of the present CWND unnecessarily: the packet may have been sent but dropped, in which case the remote will definitely fast retransmit, halving CWND; or it may be as yet unsent, in which case the remote may or may not fast retransmit, unnecessarily halving CWND, so there is at most a slight unnecessary performance harm.
Methods using inter-packet arrival delay as an indication of congestion. In any of the methods and sub-component methods described earlier, congestion indications or packet drops can instead be detected/inferred by the modified TCP / modified Monitor Software / modified proxy / modified port sender ... etc., by observing the inter-packet arrival delay: in particular, when the "elapsed time interval" between immediately succeeding packets, measured from the last packet received from the remote sender source TCP or the remote receiver TCP (whether a pure ACK or a regular data packet ... etc.), exceeds a certain user-input interval (or one derived from some algorithm, which can be based on RTTest, OTTest, RTTest(min), OTTest(min) ... etc.). A TCP connection is symmetric, each end capable of sending and receiving at the same time, and the data segments/data packets sent from one end together with their corresponding return response ACKs from the other end [referred to hereinafter as sub-flow A] can be treated independently of the data segments/data packets sent from the other end together with their corresponding return response ACKs [referred to hereinafter as sub-flow B]: in this way the modified TCP / modified Monitor Software / modified proxy / modified port sender ...
etc., when observing the above inter-packet arrival delays, "discerns" and observes the inter-packet arrivals of sub-flow A and/or sub-flow B completely independently. Thus, when the data segments/data packets sent from one end, i.e. sub-flow A, are dropped along the forward path to the other end, so that their corresponding return response ACKs do not return from the other end along the return path, the unrelated data segments/data packets of sub-flow B arriving along the return path (if any) will not cause this end to wrongly assume that the "elapsed time interval" for the independent sub-flow A has not expired. The modified TCP / modified Monitor Software / modified proxy / modified port sender ... etc., at one end, when acting as the sender, observes only its own stream of corresponding return response ACKs of sub-flow A for inter-packet arrival delays and expiry of the "elapsed time interval", ignoring the segments/packets sent from the other end's independent sub-flow. When acting as the receiver, it observes only the incoming segments/packets of the other end's sub-flow B for inter-packet arrival delays and expiry of the "elapsed time interval", ignoring its own independent sub-flow A's (if any) corresponding arrival stream of returned response ACKs.
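The per-sub-flow bookkeeping described above can be sketched as a small monitor object, one instance per sub-flow, so a stall in one direction is never masked by traffic in the other. The class name and interface are illustrative assumptions, not from the patent.

```python
class SubflowMonitor:
    """Tracks inter-packet arrival times for ONE sub-flow (A or B) only.

    Separate instances are kept for sub-flow A (our sent data and its
    return ACKs) and sub-flow B (the peer's sent data), so expiry of the
    'elapsed time interval' is judged per sub-flow, never mixed."""

    def __init__(self, elapsed_interval: float):
        self.elapsed_interval = elapsed_interval  # seconds
        self.last_arrival = None                  # time of last arrival

    def on_packet(self, now: float):
        """Record an arrival belonging to this sub-flow only."""
        self.last_arrival = now

    def stalled(self, now: float) -> bool:
        """True once the elapsed time interval expires with no arrival."""
        if self.last_arrival is None:
            return False
        return (now - self.last_arrival) > self.elapsed_interval
```

Feeding sub-flow B packets into sub-flow A's monitor is exactly the mistake the text warns against; keeping two monitors avoids it.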
The task should be simple enough: one end, when acting as the sender, need only monitor its own corresponding incoming return response ACKs of the packets it sent for "inter-packet interval" delays and expiry of the "elapsed time interval", whereas when acting as the receiver it need only monitor the data segments/data packets sent from the other end. Moreover, if the packets sent from the other end's independent sub-flow continue arriving while the "elapsed time interval" for the corresponding return response ACKs of this end's own sent packets has expired, this provides an additional definitive indication/definitive inference that the unidirectional route from the other end to this end is "UP" and that the unidirectional route from this end to the other end is "DOWN", so that the end can react accordingly. This has the advantage of allowing the "elapsed time interval" to be specified much smaller than RTTest, OTTest, RTTest(min) or OTTest(min) ... etc., giving a much faster response time in detecting/pre-empting congestion and/or packet drop events and/or physical transmission errors (even uncongested RTT, OTT, etc., can amount to several hundred milliseconds over the Internet and cannot be determined, nor can their maximum bound be determined, in advance, whereas the above elapsed time interval since the last received packet can be chosen as small as, for example, 50 ms instead of several hundred milliseconds). During, for example, ftp/http website downloads, regular data packets are transmitted continuously unless interrupted by an RTO packet retransmission timeout, which re-enters slow start with CWND reset to one segment size.
Assuming the lowest-bandwidth link of the route traversed by the packets is the first-mile link of the sending source TCP, for example a 500 Kbps DSL line, the transmission delay for an individual packet to leave the DSL transmission medium completely will not be a major factor here, being small, for example 24 ms for an Ethernet-sized packet of 1,500 bytes (1500 * 8 / 500000 = 24 ms). With a 56 Kbps dial-up modem as the last mile, the transmission delay for a typical 500-byte packet will be approximately 71 ms (500 * 8 / 56000 = 71 ms). On today's Internet, the lowest possible bandwidth link along the route traversed by a packet will, in the worst case, be 56 Kbps. The default packet size is usually about 500 bytes, as commonly negotiated by TCP during connection establishment. The "inter-packet arrival" method (and/or the "synchronization" packet method, see later sections) may start with "elapsed time interval" and "synchronization" interval values set on the assumption of a 56 Kbps lowest-bandwidth link along the route and the largest negotiated packet size, then continuously monitor the latest observed minimum inter-packet arrival interval between regular data packets (or between the ACKs for the data packets sent) to dynamically adjust the "elapsed time interval" and "synchronization" interval settings: for example, if the latest minimum "inter-packet arrival" interval is now only 20 ms, the "elapsed time interval" value can now be set to, for example, 80 ms and the "synchronization" interval value to, for example, 40 ms ... etc., or derived from the contemplated algorithms.
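The dynamic adjustment just described (a 20 ms observed minimum gap mapping to an 80 ms elapsed interval and a 40 ms synchronization interval) can be written as a one-line rule. The multiplier factors of 4 and 2 are assumptions back-derived from the text's single worked example; the patent leaves the exact algorithm open.

```python
def tune_intervals(min_gap_ms: float, elapsed_factor: int = 4,
                   sync_factor: int = 2):
    """Derive (elapsed_time_interval, sync_interval) in ms from the last
    observed minimum inter-packet arrival gap.  Factors are illustrative:
    the text maps a 20 ms minimum gap to 80 ms and 40 ms respectively."""
    return min_gap_ms * elapsed_factor, min_gap_ms * sync_factor
```

This keeps the invariant the text requires later: the synchronization interval stays below the elapsed time interval.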
When data packets are sent back-to-back from the sending source TCP and received at the receiver TCP, the inter-packet arrival spacings should show the same spacings, centred around 24 ms or 71 ms respectively: the total of the per-packet transmission delays incurred at each node along the traversed route where the nodes use store-and-forward switching (with cut-through switching the per-node store-and-forward transmission delay would largely disappear). This holds even if the traversed links introduce various delays and/or buffering delays, since these affect the data packets evenly, and they still arrive at the receiver with spacings centred around roughly 24 ms or 71 ms respectively, assuming of course that buffering delays are not suddenly added all at once, e.g. an extra 200 ms between one packet and the next (i.e., additional buffering delays accumulate gradually across each successive packet), and that no packets are dropped/lost along the route; a drop would add an "infinite" delay between the dropped packet and the packet sent immediately before it. One can thus detect/infer congestion and/or packet loss and/or physical transmission error events by observing when the inter-packet delay suddenly exceeds a certain value, for example 100 ms: that is, 100 ms have now elapsed since the last packet was received without receiving the immediately following packet, i.e. the packet with the next expected correct Sequence Number. However, even if other subsequent packets are received within these 100 ms and only this particular packet fails to arrive immediately thereafter, this "gap" can similarly be treated as a congestion and/or packet drop and/or physical transmission error event and handled in the same or a slightly different way.
The total of the per-packet transmission delays incurred at each node along the traversed route where the nodes use store-and-forward switching (with cut-through switching the per-node store-and-forward transmission delay would largely disappear) can vary from a few milliseconds, if the nodes along the route have high-bandwidth links (even with store-and-forward rather than cut-through switching), to several tens or even a few hundred milliseconds if the traversed links are of low bandwidth. For example, with 500 Kbps in the first mile, then a 10 Mbps link, then a 100 Mbps link, then a 10 Mbps link, and finally the receiver's 500 Kbps DSL last-mile link, the total transmission delays incurred by an individual 1500-byte packet at each successive stage of the links, with all nodes implementing store-and-forward switching and assuming no congestion buffering delay at any of the traversed nodes, will be approximately 24 ms + 1.2 ms + 0.12 ms + 1.2 ms + 24 ms = 50.52 ms; that is, when finally received at the destination, the inter-packet arrival interval between immediately succeeding packets will be centred around 50.52 ms. Likewise, with a 56 Kbps first-mile modem link, then a 10 Mbps link, then a 100 Mbps link, then a 10 Mbps link, and finally the receiver's last-mile modem link of
56 Kbps, the total transmission delays incurred by an individual 500-byte packet at each successive stage, with all nodes implementing store-and-forward switching and again assuming no congestion buffering delay at any of the nodes, will be approximately 71 ms + 0.4 ms + 0.04 ms + 0.4 ms + 71 ms = 142.84 ms; that is, when finally received at the destination, the inter-packet arrival interval between immediately succeeding packets will be centred around 142.84 ms. Any congestion buffering delay increases the time a packet actually takes to travel from source to destination, and may cause a packet sent much later (i.e., not the packet immediately following the reference packet, but one sent, for example, several seconds to tens of seconds afterwards) to take, for example, 300 ms longer than the reference packet to actually arrive at the destination receiver, owing to the cumulative congestion buffering delays at the traversed nodes. BUT as between two successively sent packets, the "extra" incremental cumulative congestion buffering delay incurred by the immediately following packet over its immediately preceding packet may be only, for example, 3 ms, i.e. several orders of magnitude smaller than the 300 ms seen between two distant packets sent several seconds apart (assuming here that the congestion level is increasing; the same reasoning applies where the congestion level is decreasing). These additional "extra" congestion buffering delays between an immediately succeeding packet and its immediately preceding packet will be small, and will only increase gradually across each subsequent pair of immediately succeeding packets.
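The per-hop serialization arithmetic used in both worked examples above follows directly from packet size and link rate; a short helper reproduces the 50.52 ms figure. Function names are illustrative; store-and-forward nodes are assumed (each hop re-serializes the whole packet), and congestion buffering is ignored, exactly as in the text.

```python
def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    """Time for one packet to fully leave a link's transmission medium."""
    return packet_bytes * 8 * 1000 / link_bps

def path_delay_ms(packet_bytes: int, links_bps) -> float:
    """Sum of store-and-forward serialization delays along a path,
    assuming no congestion buffering at any traversed node."""
    return sum(serialization_delay_ms(packet_bytes, b) for b in links_bps)

# 1500-byte packet over 500 Kbps / 10 Mbps / 100 Mbps / 10 Mbps / 500 Kbps:
# 24 + 1.2 + 0.12 + 1.2 + 24 = 50.52 ms, matching the example above.
```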
This possible small additional amount of congestion buffering delay between any subsequent pair of immediately succeeding packets, even though small and evened out wherever the congestion level stabilises/smooths across other subsequent adjacent pairs, should/can nevertheless be factored in when choosing/deriving the elapsed-time-interval value used to detect/infer congestion and/or packet drop and/or physical transmission error events when the immediately following packet is not received from the sender source TCP. On very rare occasions, however, the congestion level may build up suddenly (it is not impossible) to, for example, 200 ms of buffering delay within a short period of time, for example on a 100 Mbps link where the incoming link is 100 Mbps and the outgoing link is only 10 Mbps ... etc., in which case this scenario can conveniently be included when setting the elapsed time interval, so as to detect/infer this very sudden and very rare congestion-buffering-delay event in addition to congestion and/or packet drop and/or physical transmission error events. It is noted that, as between any further subsequent pair of immediately succeeding sent packets, this very rare and sudden congestion build-up can no longer cause the "elapsed time interval" to expire, since the sudden build-up is evened out, stabilised and smoothed across the other subsequent adjacent pairs. It is also noted that a TCP connection is fully duplex, i.e. each of the two ends of the connection can be sending and receiving, acting as sender source TCP and receiver TCP at the same time.
Even if only one end of the connection is doing almost all or all of the sending of regular data packets, as in FTP file downloads / http web page downloads ... etc., the receiving endpoint TCP will always be sending return acknowledgments in response to the regular data packets back to the endpoint TCP doing the sending. The "elapsed time interval" methods summarised in the preceding paragraphs therefore apply equally to the endpoint TCP doing almost all or all of the sending of regular data packets: upon expiry of the "elapsed time interval" without receiving pure ACK packets and/or piggybacked ACK packets from the other endpoint TCP receiving the downloads, the sending endpoint TCP can now infer detection of congestion and/or packet drop and/or physical transmission error and/or a "very rare" and "very sudden" congestion-build-up event, and react accordingly. However, where the receiver-end TCP uses delayed acknowledgment (an ACK generated on every second packet or on expiry of 200 ms, whichever occurs first) and this delayed ACK option is in effect for a particular per-flow TCP connection, the chosen or algorithmically derived "elapsed time interval" value should take into account the possible additional 200 ms delay introduced by the delayed ACK mechanism: for example, in delayed-ACK cases the "elapsed time interval" should have 200 ms added to it, or optionally, instead of adding 200 ms to the "elapsed time interval", this worst-case 200 ms delayed-ACK event can be included among the various events inferable/detectable on expiry of the "elapsed time interval".
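The delayed-ACK allowance just described reduces to a simple widening rule. This is a sketch under the text's assumptions (200 ms delayed-ACK timer per RFC 1122); the function name is hypothetical.

```python
def effective_elapsed_interval(base_ms: float, delayed_ack_active: bool,
                               delayed_ack_ms: float = 200.0) -> float:
    """Widen the elapsed-time-interval when the peer uses delayed ACKs
    (one ACK per second segment or on a 200 ms timer), so the delayed-ACK
    timer cannot be mistaken for congestion or packet loss."""
    return base_ms + delayed_ack_ms if delayed_ack_active else base_ms
```

The alternative the text mentions, treating the 200 ms delayed ACK as one more inferable event rather than widening the interval, would leave `base_ms` unchanged and handle the event at expiry time instead.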
This event will be rare, occurring for example when there is little activity at the sender source TCP sending packets to the receiver-end TCP, so the worst-case delayed-ACK scenario will not much affect performance. On detecting/inferring the above events when the "elapsed time interval" expires without receiving the next packet (NOTE: this requires no RTT or OTT information at all, nor, optionally, RTO calculations based on historical RTT values; instead the actual packet retransmission timeout can be triggered at, for example, certain user-input values or values derived from algorithms based on, for example, historical inter-packet interval values etc., and these now-redundant requirements can optionally be removed from the modified TCPs), the modified TCP / modified Monitor Software / modified proxy / modified IP sender / modified protection software ... etc., can then proceed with actual packet retransmissions coupled with CWND decrease / rate decrease, and/or modified decoupled CWND decrease / rate decrease alone without accompanying retransmission of actual packets, and/or the various modified "pause" methods with or without accompanying CWND decrease / rate decrease etc., as described in the earlier methods/sub-component methods in the body of the descriptions.
Once the above processes have been triggered by "inter-packet interval" delays expiring the "elapsed time interval", then upon a subsequent arrival of the next packet of the same sub-flow from the sender source TCP, the triggered processes can be terminated either immediately or optionally after a certain defined interval, with the CWND size / rate limit optionally restored to the values prior to the expiry of the "elapsed time interval", and/or any "pause" in progress optionally "unpaused" etc. The arrival of this packet now means that the route from the sender source TCP to the receiver TCP has not gone down completely, dropping each and every packet, due to congestion; optionally it may additionally be required that this arriving packet, if regular data, be the next in-order expected packet with the correct next expected Sequence Number, and/or if a pure ACK packet, have its Sequence Number field = the last valid Sequence Number received from the sender source TCP at the receiver TCP (or the latest largest valid Acknowledgment Number sent from the receiver TCP to the sender source TCP - 1). Similarly, the modified TCP / modified Monitor Software / modified proxy / modified IP sender / modified protection software ... etc., may OPTIONALLY and/or additionally also proceed to have the other-end TCP perform actual packet retransmissions coupled with CWND decrease / rate decrease, and/or modified decoupled CWND decrease / rate decrease alone without accompanying retransmission of actual packets, and/or the various modified "pause" methods with or without accompanying CWND decrease / rate decrease ... etc., as described in the earlier methods/sub-component methods in the body of the descriptions. Or, the modified TCP / modified Monitor Software / modified proxy / modified IP sender / modified protection software ...
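The trigger-then-restore behaviour above is essentially a two-state machine: expiry of the elapsed time interval saves and reduces CWND (or pauses), and arrival of any further packet of the same sub-flow restores the pre-expiry values. The sketch below is illustrative; the reduction to 1 segment stands in for whichever of the text's coupled/decoupled decrease or "pause" variants is chosen.

```python
class PauseController:
    """Minimal sketch of the trigger/restore cycle described above."""

    def __init__(self, cwnd: int):
        self.cwnd = cwnd          # congestion window, in segments
        self.saved_cwnd = None    # value to restore on 'unpause'
        self.paused = False

    def on_interval_expired(self):
        """Elapsed time interval expired with no sub-flow arrival:
        save current CWND and reduce (illustrative reduction to 1)."""
        if not self.paused:
            self.saved_cwnd = self.cwnd
            self.cwnd = 1
            self.paused = True

    def on_subflow_packet(self):
        """A packet of the same sub-flow arrived: the path is not fully
        down, so restore the pre-expiry CWND / 'unpause'."""
        if self.paused:
            self.cwnd = self.saved_cwnd
            self.paused = False
```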
etc., can OPTIONALLY AND/OR ADDITIONALLY then also proceed ONLY by making the other-end TCP do so (without making the local TCP do so at all; this feature will be useful, for example, when the other-end TCP, running normal unmodified TCP, is doing all or almost all of the sending of regular data packets): making it perform actual packet retransmissions coupled with CWND decrease / rate decrease, and/or modified decoupled CWND decrease / rate decrease alone without accompanying retransmission of actual packets, and/or the various modified "pause" methods with or without accompanying CWND decrease / rate decrease ... etc., as described in the earlier methods/sub-component methods in the body of the descriptions. Once the above processes have been triggered by the expired "elapsed time interval", then upon arrival of a packet of the same sub-flow from the other-end TCP, the earlier triggered processes can be terminated either immediately or optionally after a certain defined interval, with the CWND size / rate limit optionally restored to the values prior to the expiry of the "elapsed time interval", and/or any "pause" in progress optionally "unpaused" ... etc. It is not readily possible for remote TCPs / remote applications / remote processes to alter the other-end TCP's CWND size / internal transmission rates directly via some protocol command, unless the other-end TCP is a modified TCP, or an existing TCP already specifically designed to allow this mechanism. However, it is readily possible, even where the other-end TCP is an existing unmodified TCP not specifically modified to allow this mechanism, to cause the other-end TCP to "pause" and/or "unpause" and/or "pause" while allowing a defined maximum number of bytes/packets ... etc.,
to be transmitted, as summarised in the various earlier methods/sub-component methods in the body of the descriptions: for example, sending a receiver-window-size update packet of "0" bytes and/or "1600 bytes" ... etc., to effect various "pauses" at the other-end TCP, then sending a receiver-window-size update packet of the size prevailing before the "triggering" event to "unpause" / restore normal operation at the other-end TCP ... etc. (see also the earlier section on implementing TCP modifications to work over the external Internet). Independently of, and/or optionally in addition to, the various earlier methods, for example the "elapsed time interval" methods, the existing or previously described TCPs / Monitor Software / TCP proxy / IP sender / protection software ... etc., can be further modified to ensure that each of the two modified ends of a TCP connection automatically generates "synchronization" packets to the other modified end (or that only one modified end of a TCP connection automatically generates "synchronization" packets to the other modified or unmodified end), ensuring that, where required, a packet is always sent to the other end's TCP at least once every "synchronization" interval period (such as, for example, half the chosen "elapsed time interval" value, or the transmission delay of the lowest-bandwidth link on the traversed route for an individual packet to leave the transmission medium completely * a multiplier, whichever is greater; it is noted that the "elapsed time interval" value here must always be greater than the above "synchronization" value): for example, generating a "synchronization" packet and sending it to the other-end TCP whenever the "synchronization" interval expires without any packet of the same sub-flow having been sent to the other-end TCP.
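The "synchronization" interval rule above, send something at least once per interval, emitting a sync packet only when nothing else went out, can be sketched as a small scheduler. Class and method names are illustrative assumptions.

```python
class SyncSender:
    """Ensures at least one packet of this sub-flow reaches the peer every
    sync interval: if nothing has been sent within the interval, a
    'synchronization' packet is due (actual packet construction omitted)."""

    def __init__(self, sync_interval: float):
        self.sync_interval = sync_interval  # seconds; < elapsed interval
        self.last_sent = 0.0

    def on_send(self, now: float):
        """Any normal data/ACK send of this sub-flow resets the timer."""
        self.last_sent = now

    def maybe_sync(self, now: float) -> bool:
        """Return True if a sync packet should be generated now."""
        if now - self.last_sent >= self.sync_interval:
            self.last_sent = now
            return True
        return False
```

With the sync interval at half the elapsed time interval, as the text suggests, a healthy path delivers at least one packet per elapsed interval, so expiry becomes an unambiguous DOWN signal.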
In this way, if both ends are modified and each is sending "synchronization" packets to the other modified end, each modified end will immediately know/infer/detect when the unidirectional route from the other end to the local-end TCP suffers congestion and/or packet drops and/or physical transmission error and/or a very rare and very sudden congestion-build-up event (BUT not including the rare 200 ms delayed-ACK event here), namely when the sub-flow's "elapsed time interval" has expired and no packet of any type of the same sub-flow (including the sub-flow's generated "synchronization" packet type) has been received from the other-end TCP. Additionally, if only one of the two ends is modified and sends "synchronization" packets to the other unmodified-end TCP, for example in the form of out-of-window Duplicate-Sequence-Number packets eliciting return response ACKs back from the other unmodified-end TCP, the modified local end will only be able to immediately know/infer/detect that one of the forward or return routes between the modified local-end TCP and the other-end TCP is definitely suffering congestion and/or packet drops and/or physical transmission error and/or a very rare and very sudden congestion-build-up event (BUT not including the rare 200 ms delayed ACK here), without knowing which one. This definitive detection/further definitive inference that the unidirectional route from this end to the other end, and/or from the other end to this end, is definitely "UP" or definitely "DOWN" at the time would be useful in reacting better accordingly. It may or may not be of practical use, noting that where the unidirectional return route happens to be "DOWN", there is no way of knowing whether the unidirectional forward route is "UP" or "DOWN" at all.
It is also noted that any missing "gap" packet, lost/dropped but not causing the inter-packet arrival delays (of physically arriving packets) to expire the "elapsed time interval", for example because later out-of-order packets physically arrive within the "elapsed time interval", will normally be taken care of by the usual 3-DUP-ACK fast retransmission mechanism; alternatively, the inter-packet-arrival-delay "elapsed time interval" mechanism may instead strictly insist that any missing "gap" packet trigger the expiry of the "elapsed time interval" if it is not received within the "elapsed time interval" of the arrival time of its immediately preceding sent packet (as ordered by packet sequence number ...) ... etc. When the sub-flow's inter-packet-arrival-delay "elapsed time interval" has expired and no packet of any type of the same sub-flow (BUT excluding the sub-flow's generated "synchronization" packet type, or where applicable the sub-flow's corresponding return response ACKs) has arrived at the modified local-end TCP, this can either immediately trigger the modified local-end TCP (and/or optionally also "remotely" the other-end TCP) to perform actual packet retransmissions coupled with CWND decrease / rate decrease, and/or modified decoupled CWND decrease / rate decrease alone without accompanying retransmission of actual packets, and/or the various modified "pause" methods with or without accompanying CWND decrease / rate decrease etc., as described in the earlier methods/sub-component methods in the body of the description, or do so only after a certain additional period of, e.g., 250 ms (a user-input value or some algorithmically derived value based on factors such as RTTest, OTTest, RTTest(min), OTTest(max), etc.)
has passed since the last packet of any type of the same sub-flow (BUT excluding the sub-flow's generated "synchronization" packet type, or where applicable the sub-flow's corresponding return response ACKs) was received from the other modified-end TCP (and without any new subsequent intervening arrival packet of any type of the same sub-flow (BUT excluding the sub-flow's generated "synchronization" packet type, or where applicable the sub-flow's corresponding return response ACKs) being received from the other modified-end TCP during this, e.g., 250 ms) ... etc., and/or a full current effective window's worth of packets of the same sub-flow has been sent and none of the packets has yet been acknowledged. Where both ends implement the "inter-packet arrival" method and the "synchronization" packet method, the "synchronization" packets sent to the other modified-end TCP can simply take the form of a generated packet with the same source IP address and port number and the same destination IP address and port number as the particular per-flow TCP connection, together with suitable identifications uniquely identifying the packets as "synchronization" packets: such as, for example, a special fixed-length unique identification in the data field portion or an inserted "filler" field portion, for example containing the source IP address and port number and/or the destination IP address and port number, without requiring the receiving modified other-end TCP to generate return response acknowledgments (ACKs) ... etc.
Were only one end modified and the other end unmodified (BUT this is also applicable where both ends are modified), the "synchronization" packet, when sent by the modified end to the other unmodified end, will need to take the form of a packet that elicits a return response ACK from the unmodified receiving end: such as, for example, a generated packet with the same source IP address and port number and the same destination IP address and port number as the particular per-flow TCP connection, together with an out-of-window Duplicate Sequence Number field value that elicits a return response ACK from the unmodified receiving end (such as sending, for example, an out-of-order, out-of-window Sequence Number packet, to which a receiving TCP always generates a "do nothing" return ACK; see the Internet discussion forum topic "ACKing out of order packet", http://groups-beta.google.com/group/comp.protocols.tcp-ip, 1 Phil Karn March 2, 1988, 2 CERF March 2, 1988 ..., and the Google search term "ACKing the ACK"; it is also noted that sending an individual DUP ACK does not cause fast retransmission; or alternatively sending, for example, an out-of-order ACK, see the Google search terms "out of order ACK", "eliciting an ACK", "DUP Sequence Number ACK", "ACK for unsent data", "unexpected ACK" ... etc.). The return response ACK elicited from the other unmodified end simply has its ACK field value set to the Next Expected Sequence Number that the other unmodified end expects to receive from the modified end; upon receipt of this return response ACK, the modified end simply discards and ignores it, since the data segment with that next expected Sequence Number has not yet been sent.
In the very rare, "once in a blue moon" scenario where the data segment with the next expected Sequence Number was actually sent at the exact moment before the return response ACK is received, the modified end would merely fast-retransmit "unnecessarily", and only after receiving 3 return response DUP ACKs all with exactly the same ACK number, which again is very unlikely, since the data segment actually sent at the exact moment before the initial return response ACK was received, and/or subsequently sent data segments, will increase the unmodified end's Next Expected Sequence Number, causing the next return response ACK to carry a different, increased ACK number field value. The immediately preceding paragraphs mainly described scenarios where the TCPs at both ends implement the sending of "synchronization" packets to the other-end TCP. This enables each endpoint TCP to definitively determine/infer whether the unidirectional route from the other endpoint TCP to the local endpoint TCP is congested and/or dropping packets and/or suffering physical transmission errors and/or a very rare and very sudden congestion build-up (the 200 ms delayed-ACK mechanism cannot now be the cause, since the "synchronization" packet mechanism is implemented here), whenever the "elapsed time interval" expires without receiving any packet of the same sub-flow (including "synchronization" packets generated for the same sub-flow) from the other-end TCP. The most complete combination scenarios include the following (assuming both modified-end TCPs also implement the "synchronization" packet method): 1.
when the "elapsed time interval" at the local end's modified TCP has expired without receiving any packet of the same sub-flow (including "synchronization" type packets generated for the sub-flow) from the other end's modified TCP, it definitively knows/definitively infers that the unidirectional path from the other end's modified TCP to the local end's modified TCP is "DOWN"; the local end's modified TCP should now react immediately accordingly and/or cause the other end's modified TCP to react accordingly. 2. when the unidirectional path from the other end's modified TCP to the local end's modified TCP is "UP", that is, successive packets (and/or "synchronization" packets) are received from the other end's modified TCP without the "elapsed time interval" expiring, and if expected acknowledgments are not received (for data packets sent by the local end's modified TCP) back from the other end's modified TCP within certain criteria (such as the rates-decoupled delay interval, the coupled RTO packet-retransmission timeout interval, the decoupled ACK timeout interval that causes "pause" ... etc.), THEN the local end's modified TCP must now react immediately and/or cause the other end's modified TCP to react, with the definitive knowledge/inference that the unidirectional path from the local end's modified TCP to the other end's modified TCP is "DOWN". 
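The two inference rules above can be sketched as follows. This is a minimal illustrative sketch, not part of the original description; all class, method, and parameter names are assumptions, and times are supplied explicitly rather than read from a clock.

```python
class SyncLinkMonitor:
    """Sketch of the two inference rules:
    (1) no packet of the sub-flow (data or "synchronization") received within
        the elapsed-time-interval  => other-end -> local-end direction DOWN;
    (2) packets keep arriving (inbound UP) but expected ACKs for our own sent
        data stay missing past a timeout => local-end -> other-end direction DOWN.
    """
    def __init__(self, elapsed_interval_s, ack_timeout_s, now=0.0):
        self.elapsed_interval_s = elapsed_interval_s
        self.ack_timeout_s = ack_timeout_s
        self.last_rx = now          # last packet (incl. sync) from the other end
        self.oldest_unacked = None  # send time of oldest unacknowledged data

    def on_packet_from_peer(self, now):
        self.last_rx = now

    def on_data_sent(self, now):
        if self.oldest_unacked is None:
            self.oldest_unacked = now

    def on_ack(self):
        self.oldest_unacked = None

    def inbound_down(self, now):   # rule 1
        return now - self.last_rx > self.elapsed_interval_s

    def outbound_down(self, now):  # rule 2: inbound still UP, ACKs overdue
        return (not self.inbound_down(now)
                and self.oldest_unacked is not None
                and now - self.oldest_unacked > self.ack_timeout_s)
```

In a real stack the reactions (immediate "pause", CWND reduction, notifying the other end) would hang off these two predicates.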
Where only one end of a TCP connection implements the "synchronization" packet method, the above must be adapted by having the modified TCP at the end implementing the "synchronization" packet method send its "synchronization" packets to the other end's unmodified TCP in the form of a "packet" which conventionally elicits an acknowledgment response from the other end's unmodified TCP (such as sending, for example, an out-of-order sequence number packet not within the window, for which the receiving TCP always generates a "do nothing" return ACK; see the Internet discussion group topic "ACKing out of order packet" at http://groups-beta.google.com/group/comp.protocols.tcp-ip, 1. Phil Karn March 2, 1988, 2. CERF March 2, 1988 ..., and the Google search term "ACKing the ACK"; it is also pointed out that sending an individual DUP ACK will not cause fast retransmission; or alternatively sending, for example, an out-of-order ACK, see the Google search terms "out of order ACK", "eliciting an ACK", "DUP Sequence Number ACK", "ACK for unsent data", "unexpected ACK", etc.). The "synchronization" packet method must ensure that at least one "packet" is sent from the local end's modified TCP to the other end's TCP (whether modified or not) at intervals smaller than the "elapsed time interval" value (such as, for example, half of the "elapsed time interval" value, etc.).
Where both ends implement the "synchronization" packet method, both modified TCPs may preferably negotiate detection of each other's presence and agree the "synchronization" parameters, e.g. interval values ... etc., for example during the TCP connection establishment phase or immediately after ... etc. But where the other end's TCP is unmodified, upon not receiving any packet from the other end's unmodified TCP within the expiry of the "elapsed time interval", the local end's modified TCP can only definitively infer that one of the unidirectional paths, but not definitively which, either from the local end's modified TCP to the other end's unmodified TCP or from the other end's unmodified TCP to the local end's modified TCP, is "DOWN" (compare with the case where both ends are modified and implement the "synchronization" packet techniques). The various methods/sub-component methods illustrated in the earlier body of the description can be adapted by using the "elapsed time interval" method and/or the "synchronization" packet method: for example, instead of decoupled rates decrease upon the ACK (acknowledgment) timeout interval (i.e. instead of monitoring whether the acknowledgment for a sent Sequence Number segment is received within, for example, uncongested RTT * multiplier, and reacting accordingly), the "elapsed time interval" until the next received packet is monitored instead. This allows a much faster reaction time (the "elapsed time interval") than the possibly much larger uncongested RTT * multiplier. Where the timestamp option is selected, this will allow both directions' one-way trip times (i.e. OTTest and OTTest(min) etc.) to be derived instead of only RTTest and RTTest(min) etc., enabling better reactions. The SACK option will allow fewer unnecessary retransmissions of packets that have already been received out of order. 
"Synchronization" packets and/or the earlier periodic probe packet method can, if required, be sent independently in the form of a new TCP connection established alongside the per-flow TCP, with destination IP address and port and source IP address unchanged, but with the source port now designating a different unused port number. Note: the "inter-packet-arrival" method (and/or optionally the "synchronization" method) within each per-flow TCP can be made operational only upon certain criteria/events being met after per-flow TCP establishment, such as for example upon or after the initial SYN/SYN ACK, and/or only after a small number n of successive packets is received from the other end's TCP (modified or unmodified), and/or only after a small number n of successive packets is received from the other end's TCP which all arrive within the "elapsed time interval" of the immediately preceding packet. When the expired "synchronization" interval requires a "synchronization" packet to be sent, the local end's modified TCP may instead re-send/retransmit a previously sent, not yet acknowledged regular data packet to the other end's TCP (which likewise elicits a return acknowledgment response from the other end's TCP) in place of a pure "synchronization" packet. It is noted that the methods herein extend the embodiments/inventions to be applicable also where either the sender source or the receiver (or both) resides on the external Internet, BUT they may also apply where both reside within proprietary Internet subsets/WAN/LAN, as in the various methods described earlier in the body of the description. 
A user interface can be provided in the above-described modified TCPs/modified Monitor Software/modified TCP Proxy/modified IP Forwarder/modified Protection Software in the body of the description, to allow user input of various TCP parameter settings/registry entries (e.g. initial ssthresh, initial RTT, MTU, MSS, delayed-ACK option, SACK option, timestamp option ... etc.), user input of the proprietary LAN/WAN sub-network addresses (so that packet traffic with both source and destination within these sub-networks can be distinguished as "internal traffic" from traffic to/from the external Internet), and the ACK timeout interval and/or "elapsed time interval" and/or "pause interval" and/or "synchronization" interval between each of these sub-network addresses (for better performance, instead of using only, for example, the maximum ACK timeout value such as the maximum uncongested RTT between the most distant pair of nodes within the whole sub-network * multiplier), user input of the common TCP ports (so that packet traffic to/from these common ports can be handled differently) and/or additional used TCP ports and/or any source or destination ports to be excluded from such special handling (for example, some multimedia streams use TCP within specified port numbers instead of UDP) ... etc. Here are some example cases in various scenarios, in outline only, among the many possible combinations of the methods/sub-component methods described in the body of the description and/or the inter-packet-arrival methods and/or the "synchronization" packet method (where only one end of the TCP connection is modified; were both ends modified, the tasks would obviously become much easier once both ends detected the presence of the other's modification): 1. the local end's modified TCP acts as a sender source to the external Internet, and the TCP stack is modified directly. 
Upon a "trigger" event (such as, for example, expiry of the 300 ms "elapsed time interval", 3 DUP ACKs, actual RTO packet-retransmission timeout, etc.), among other possibilities this will require only that the TCP itself "pauses" (or does not pause at all) for a defined pause interval and/or allows a small number of packet transmissions during the pause to act as probes, then either resumes (or continues without pausing) without altering the CWND/rates limit, or reduces the CWND/rates limit by x%, e.g. 5%, 10%, 50% ... etc. It is noted here that if "pause" is implemented upon expiry of the e.g. 300 ms "inter-packet-arrival" interval, sender-based modifications have the advantage of knowing whether the expiry of the e.g. 300 ms "inter-packet-arrival" interval was due only to the local end's sender having no data packets to transmit to the other end, and thus will not need to "pause" unnecessarily and/or react unnecessarily (compare: where the local end acts as the receiver, it would have no way of knowing whether the expiry of the e.g. 300 ms "inter-packet-arrival" interval was due to "trigger" events or simply because the other end's sender temporarily has no further data packets to transmit). Inter-packet-arrival methods can be used instead of the "uncongested RTT * multiplier" methods as trigger events to react accordingly; additionally, if the "synchronization" packet method (here generated only from the local end's modified sending-source TCP, but eliciting responses such as return ACKs from the other end's unmodified TCP) and/or the timestamp option were incorporated, this would enable definitive detection/definitive inference of which directional link is definitely "DOWN" or definitely "UP". 2. the local end's modified TCP acts as the sender source to the external Internet, and the TCP stack cannot be directly modified. 
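The pause-on-trigger behaviour described above can be sketched as follows. This is a minimal illustrative sketch under assumed names and values (probe budget, reduction percentage), not the patent's literal mechanism.

```python
class PauseController:
    """On a trigger event, transmission is paused for pause_s seconds except
    for a small budget of probe packets; on resume, CWND is either left
    unchanged (reduce_pct=0) or cut by reduce_pct percent."""
    def __init__(self, cwnd, pause_s, probe_budget=2, reduce_pct=0.0):
        self.cwnd, self.pause_s = cwnd, pause_s
        self.probe_budget, self.reduce_pct = probe_budget, reduce_pct
        self.paused_until, self.probes_left = 0.0, 0

    def on_trigger(self, now):
        """Trigger event, e.g. elapsed-time-interval expiry or 3 DUP ACKs."""
        self.paused_until = now + self.pause_s
        self.probes_left = self.probe_budget

    def may_send(self, now):
        if now >= self.paused_until:
            return True
        if self.probes_left > 0:   # limited probing allowed during the pause
            self.probes_left -= 1
            return True
        return False

    def on_resume(self):
        self.cwnd = int(self.cwnd * (1.0 - self.reduce_pct / 100.0))
```

For example, with a 300 ms pause, one probe, and a 10% reduction, a flow with CWND 10000 resumes at CWND 9000.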
Modified Monitor Software/modified TCP Proxy/modified Protection Software ... etc., will here need to perform the tasks in place of the TCP stack itself. Upon a "trigger" event (such as, for example, expiry of the 300 ms "elapsed time interval", 3 DUP ACKs, actual RTO packet-retransmission timeout ... etc.), among other possibilities this will require only that the modified Monitor Software/modified TCP Proxy/modified Protection Software ... etc., "pauses" the intercepted outgoing TCP packets for a defined pause interval and/or allows a small number of packet transmissions during the pause to act as probes, then upon resumption, for example, itself generates a fixed number of ACKs towards the TCP as the intercepted outgoing TCP packets arrive (to quickly restore the TCP's CWND/rates limit, which may for example have been reset to 1 segment size upon re-entry into "slow start"), and/or even, for example, handles all actual RTO-timeout/3-DUP-ACK fast-retransmit packet retransmissions within the modified Monitor Software/modified TCP Proxy/modified Protection Software ... etc., itself (instead of the TCP, which would then never be required to retransmit any sent packets) by keeping copies of the current window's worth of transmitted data, suppressing all fast-retransmit DUP ACK packets by not forwarding these pure DUP ACKs to the TCP, and/or removing the ACK bit and recomputing the checksum of subsequent DUP ACK packets before forwarding them to the TCP, and/or generating ACKs to the TCP just before the TCP would otherwise experience the RTO timeout ... etc.) ... etc. 
It is noted here that if "pause" is implemented upon expiry of the e.g. 300 ms "inter-packet-arrival" interval, sender-based modifications have the advantage of knowing whether the expiry of the e.g. 300 ms "inter-packet-arrival" interval was due only to the local end's sender having no data packets to transmit to the other end, and thus will not need to "pause" unnecessarily and/or react unnecessarily (compare: where the local end acts as the receiver, it would have no way of knowing whether the expiry of the e.g. 300 ms "inter-packet-arrival" interval was due to "trigger" events or simply because the other end's sender temporarily has no further data packets to transmit). Inter-packet-arrival methods can be used instead of the "uncongested RTT * multiplier" methods as trigger events to react accordingly; additionally, if the "synchronization" packet method (here generated only from the local end's modified software, but eliciting responses such as return ACKs from the other end's unmodified TCP) and/or timestamp options were incorporated, this would allow definitive detection/definitive inference of which directional link is definitely "DOWN" or definitely "UP". 3. the local end's modified TCP acts as a receiver from the external-Internet sender source, and the TCP stack is modified directly. Inter-packet-arrival methods can be used instead of the "uncongested RTT * multiplier" methods as trigger events to react accordingly; additionally, if the "synchronization" packet method (here generated only from the local end's modified receiver TCP, but eliciting responses such as return "acknowledgment" ACKs from the other end's unmodified TCP) and/or timestamp options were incorporated, this would allow definitive detection/definitive inference of which directional link is definitely "DOWN" or definitely "UP". 
Additional techniques such as Divisional ACKs/DUP ACKs/Optimistic ACKs can be used to increase the unmodified other-end sending source's TCP transmission rates/CWND whenever required, and window size update packet techniques can be used to make the unmodified other-end sending-source TCP "pause" ... etc. 4. the local end's modified TCP acts as the receiver from the external-Internet sender source, and the TCP stack cannot be directly modified. Modified Monitor Software/modified TCP Proxy/modified Protection Software ... etc., will here need to perform the tasks in place of the TCP stack itself. Upon a "trigger" event (such as, for example, expiry of the 300 ms "elapsed time interval" of the particular sub-flow), among other possibilities this will require only that the modified Monitor Software/modified TCP Proxy/modified Protection Software ... etc., remotely causes the other end's sender TCP to "pause" the sending of the particular sub-flow's packets for a defined pause interval and/or to allow a small number of packet transmissions during the pause to act as probes, then upon resumption, for example, quickly sends a fixed number of DUP ACKs to the other end's sender TCP (to quickly restore the other end's TCP rates limit/CWND, which may for example have been reset to 1 segment size upon re-entry into "slow start"). Inter-packet-arrival methods can be used instead of the "uncongested RTT * multiplier" methods as trigger events to react accordingly; additionally, if the "synchronization" packet method (here generated only from the local end's modified receiver TCP, but eliciting responses such as return ACKs from the other end's unmodified TCP) and/or timestamp options were incorporated, this would enable definitive detection/definitive inference of which directional link is definitely "DOWN" or definitely "UP". 
Additional techniques such as Optimistic ACKs/DUP ACKs/Divisional ACKs can be used to increase the unmodified other-end sending source's TCP transmission rates/CWND whenever required, and window size update packet techniques can be used to make the unmodified other-end sending-source TCP "pause" ... etc. The TCP connection being symmetric, i.e. a local end can be both sender and receiver of data at the same time (even when not sending any real data there is always a return ACK generated towards the other end), the local end's modified TCP/modified Monitor Software/modified TCP Proxy/modified Protection Software ... etc., can of course act as both sender-based and receiver-based at the same time. Additionally, where both ends are modified, each end can again act as both sender-based and receiver-based at the same time, working jointly; but preferably and/or alternatively, once both ends have detected each other's presence of modification, they may agree to work each acting only as sender-based, or each only as receiver-based, or only one end acting as both receiver-based and sender-based with the other end's modified operations disabled. An example of the many possible ways to detect the modified presence is, for example, to send a packet to the other end with a special unique fixed-length identification pattern within the padding field or the fixed-length data portion.
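The Divisional ACK technique named above can be sketched as follows. This is an illustrative sketch only (the function name and signature are assumptions): one cumulative acknowledgment is split into several partial acknowledgments, so an unmodified sender that grows CWND per ACK received grows it several times faster.

```python
def divisional_acks(last_ack, seg_end, parts):
    """Split the acknowledgment of bytes (last_ack, seg_end] into `parts`
    ACK numbers, each covering a fraction of the received segment. A sender
    incrementing CWND once per ACK will then increment it `parts` times."""
    span = seg_end - last_ack
    # Integer division keeps every ACK number within the received range,
    # with the final ACK landing exactly on seg_end.
    return [last_ack + span * i // parts for i in range(1, parts + 1)]
```

For example, splitting the acknowledgment of bytes 1000..2000 into four parts yields ACK numbers 1250, 1500, 1750, 2000.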
Example methods derivable from the combination of the various methods and/or sub-component methods described in the body of the description (to allow measurements and/or estimates of the various One-way Trip Times OTT, estimated OTTest and estimated uncongested OTTest(min) ... etc., the timestamp option will need to be negotiated during the SYN/SYN ACK phase of TCP connection establishment; the one-way trip time OTT from the sending source to the receiver for a particular sent segment/packet can be derived by the sender from the timestamp field values of the various corresponding return ACKs. Obviously, the values of OTT, OTTest, OTTest(min), when made available to either the sending source or the receiver, will enable better and more efficient transmission controls, since RTT, RTTest, RTTest(min) inherently include the uncertainties introduced by the asymmetry of the return and forward paths). (A) Sender-based monitoring of the latest uncongested RTTest(min) and/or latest uncongested OTTest(min) ... etc., to detect the onset of packets starting to be buffered/packet loss, in proprietary networks such as proprietary LAN/WAN/Internet. In proprietary networks, all that is needed to enable the guaranteed-service capability is to have every server/PC ... etc., in the proprietary network (or only a substantial number of heavy traffic sources) install any of the above-described modified TCP upgrades or Monitor Software (or application software residing on the PCs/Servers ... etc., implementing the modifications directly within the applications, for example directly within RTSP streaming applications) ... etc. 
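One-way delay monitoring from timestamp echoes can be sketched as follows. A caveat worth making explicit: the two ends' clocks are not synchronized, so the absolute OTT is unknowable from timestamps alone; but the unknown clock offset is (assumed) constant, so it cancels when subtracting the minimum observed value, leaving the queueing delay above OTTest(min). Names are illustrative.

```python
class OneWayDelayEstimator:
    """Track queueing delay above the uncongested one-way trip time using
    TCP-timestamp-style values; the constant clock offset between the two
    ends cancels out in the (raw - raw_min) subtraction."""
    def __init__(self):
        self.raw_min = None  # smallest raw (offset-contaminated) OTT seen

    def on_ack(self, tsval_sent_ms, peer_ts_ms):
        """tsval_sent_ms: our timestamp carried in the data packet;
        peer_ts_ms: the peer's own clock value when it generated the ACK.
        Returns the estimated buffering delay (ms) above OTTest(min)."""
        raw = peer_ts_ms - tsval_sent_ms  # OTT + unknown clock offset
        if self.raw_min is None or raw < self.raw_min:
            self.raw_min = raw
        return raw - self.raw_min
```

A reading of 0 means the packet saw no more buffering than the best packet observed so far; positive readings indicate buffering delay accumulating along the forward path only, unpolluted by the return path.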
Where each of the inter-sub-network uncongested RTT values or uncongested OTT values within the proprietary network is previously known (it is noted that uncongested RTT values or uncongested OTT values may vary for data packets of different sizes, especially where intermediate links are of low bandwidth such as ISDN; the size of most TCP packets is pre-negotiated during the TCP connection establishment phase: commonly negotiated Maximum Segment Size MSS values are approximately 800 bytes, 1500 bytes ... etc.), each of the modified TCP upgrades or Monitor Software ... etc., here can simply throttle back the individual per-flow TCP transmission rates (through "pause" periods, or by percentage decreases of the CWND window size, etc.) when, for example, the particular source-destination uncongested RTT + period B of time, or the uncongested OTT + period B of time, has elapsed without a corresponding return ACK being received for the particular sent packets. The period B of time here corresponds to the cumulative total packet-buffering delay introduced and experienced by the packet as it is buffered at the various nodes along the traversed path: setting this value to a small period of e.g. 20 ms here will ensure that other time-critical VoIP/Video Conferencing UDP packets really do enjoy a very good guaranteed service level, since the UDP packets here would likely not encounter much more than 20 ms of cumulative total buffering delay along the various traversed nodes. Setting B = 0 here will ensure that TCP flows always attempt to immediately avoid any onset of packet-buffering delay, keeping the network free of buffering delays, or with only insignificant transient buffering delays during the occasional intervals when they occur. 
The percentage decrease for TCP rate throttling can be set to various fixed values, or to values algorithmically derived from various dynamic values, for example such as (B ms + e.g. T ms)/1000 ms: with B = 50 ms and T = 50 ms, the rates-decrease percentage here will be 10%, i.e. the TCP transmission rates will now be throttled back to 90% of the existing transmission rate; it can be seen that the bottleneck link's throughput level will now subsequently be maintained at approximately 90% of the bottleneck link's bandwidth capacity, assuming the flows traversing the bottleneck link do not thereafter increase or decrease their transmission rates at all. Another possible, non-exhaustive example of algorithmically derived values for the TCP rate-throttling percentage decrease can be simply, for example, B ms / the per-TCP-flow uncongested RTT value: with B = 50 ms and uncongested RTT = 400 ms, the rates-decrease percentage here will be 12.5%. The period of time T ms added earlier can also be added here, so that with the higher rates-decrease percentage, the flows traversing the bottleneck link (increasing their transmission rates as usual with TCP) will now take longer to again reach the 100%-or-more link throughput levels which then require buffering, which would then slightly impact other real-time-critical guaranteed-service UDP packets. The modified TCP upgrades or Monitor Software ... etc., may at any time effect the throttling of TCP flow rates by percentage decrease of CWND and/or by "pausing" in this manner ... etc., to achieve the desired required bottleneck-link throughputs (e.g. to subsequently cause 100%, 99%, 95%, 85% ... 
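The two worked examples above can be expressed directly. This is a sketch of the document's own arithmetic; the function names are assumptions.

```python
def decrease_pct_fixed(b_ms, t_ms):
    """(B + T)/1000 expressed as a percentage: B=50, T=50 gives 10%,
    i.e. throttle back to 90% of the existing transmission rate."""
    return (b_ms + t_ms) / 10.0  # (b+t)/1000 * 100

def decrease_pct_rtt(b_ms, uncongested_rtt_ms):
    """B / uncongested-RTT as a percentage: B=50, RTT=400 gives 12.5%."""
    return b_ms / uncongested_rtt_ms * 100.0
```

Note how the second form makes short-RTT flows (which can re-grow their rates quickly) throttle back harder than long-RTT flows.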
etc., of bottleneck-link bandwidth utilization, instead of exhibiting over-100% utilization levels with the attendant packet-buffering delays) following various specific trigger events (e.g. cumulative total buffering delay of B ms being encountered, etc.). Various algorithms and policies and procedures can additionally be devised to handle all kinds of "trigger events" in various different ways. It is noted here that the modified TCP upgrades or Monitor Software etc., do not necessarily require prior knowledge of the inter-sub-network uncongested RTTs nor the inter-sub-network uncongested OTTs between the various sub-networks within the proprietary network. Instead, here the modified TCP upgrades or Monitor Software etc., can keep track of the latest current smallest observed RTT value or latest smallest observed OTT value of the individual TCP flows, and treat this as the dynamic equivalent of the individual per-flow TCP's uncongested RTT or uncongested OTT. Common-sense upper and lower bounds may be imposed on these RTTest(min) or OTTest(min) values; for example, their maximum upper ceiling can be set to the known RTTmax value of the most distant location pairs within the proprietary network, etc. (A1) Receiver-based monitoring of the latest uncongested RTTest(min) and/or latest uncongested OTTest(min) etc., to detect the onset of packets starting to be buffered and/or packet loss, in proprietary networks such as proprietary LAN/WAN/Internet. 
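The running-minimum tracking described above, with the suggested upper ceiling, can be sketched as follows (names illustrative; the ceiling plays the role of the known RTTmax of the most distant node pair):

```python
class MinRttTracker:
    """Dynamic equivalent of the uncongested RTT: the latest smallest
    observed RTT of a flow, clamped to a known ceiling such as the RTTmax
    of the most distant location pair within the proprietary network."""
    def __init__(self, ceiling_ms):
        self.ceiling_ms = ceiling_ms
        self.rtt_min_ms = ceiling_ms  # start at the upper bound

    def observe(self, rtt_ms):
        # Any smaller sample tightens the estimate; it can never rise again
        # and never exceeds the common-sense ceiling.
        self.rtt_min_ms = min(self.rtt_min_ms, rtt_ms, self.ceiling_ms)
        return self.rtt_min_ms
```

A lower bound could be clamped in the same way if the minimum plausible RTT between the sub-networks is known.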
(This follows sufficiently directly from the earlier receiver-based methods/sub-component methods and the various methods/sub-component methods described in the sections herein and in the various parts of the body of the description, using Divisional ACKs/multiple DUP ACKs/Optimistic ACKs, and window size updates of various sizes to cause "pause", and eliciting "do nothing" ACK responses using the replicated-packet method, and 3 DUP ACKs to trigger fast retransmission pre-empting RTO timeout retransmissions, and ... etc.) (B) Sender-based monitoring of the latest uncongested RTTest(min) and/or latest uncongested OTTest(min) ... etc., to detect the onset of packets starting to be buffered and/or packet loss, in proprietary networks such as proprietary LAN/WAN/Internet and/or on the external Internet. The external Internet is subject to other existing unmodified TCP flows not under control as in a proprietary network. The examples in (A) above will need to be further modified to take this into consideration.
The "trigger events" can cause flow-throttling decreases by percentage decreases of CWND and/or "pause" ... etc.; this needs to be further modified here, for example by not increasing rates during s seconds (dynamically algorithmically derived, or specified) after throttling back to e.g. 100%/99%/95%/85% ... etc.; if the bottleneck link's throughput subsequently again reaches 100% or more, causing the onset of packet-buffering delay within those s seconds, then allow the transmission rates to start increasing/growing again only upon the "trigger events" (which may be packet drops/buffering-delay threshold exceeded ... etc.); if not, start allowing transmission-rate increases/growths after the s seconds have elapsed. Various algorithms and policies and procedures can additionally be devised to handle all kinds of "trigger events" in various different ways. Here on the external Internet, where the uncongested RTT and/or uncongested OTT will not readily be known in advance for newly established TCP flows, the latest current observed RTTest(min) or OTTest(min) will instead provide the equivalent dynamic estimate of the uncongested RTT and/or OTT values. Existing standard TCPs emphasize fair sharing and friendliness among competing TCP flows, but are inefficient in fully utilizing the available bandwidths for maximum throughput, as evidenced by the very long period required to regain the previously attained transmission rate/throughput after even a single packet-drop RTO timeout or after a 3-DUP-ACK fast retransmission, especially over long-distance fat pipes with high bandwidths and long RTT latencies (mainly due to TCP's conservative linear CWND increments in the Congestion Avoidance mode after reaching the SSThresh CWND size during Slow Start's exponential CWND growth). 
A new improved criterion for modified TCPs should now include high utilization of the available bandwidths and/or available buffers for maximum TCP throughput, not just inefficiently slow fair sharing. The rather fast reaction time (instead of the existing RFC's minimum floor value of 1 second for the dynamically derived RTO value) of the modified TCPs here in "pausing" and/or reducing CWND upon the various "trigger events" will minimize the packet-drop percentage; the "continuous pause" described earlier will further strictly reduce the magnitudes of the transmission-rate decreases (i.e. from e.g. 64 Kbytes per RTT to only 40 bytes, for example, per 300 ms). Modified TCPs here can be made more aggressive in CWND increment sizes (and/or with the equivalent "pause" interval and "continuous pause" interval settings, for example, set to smaller values) in many different ways. The CWND may be incremented, for example, by a specified integer multiple or a dynamically derived integral multiple of MSS per received ACK and/or per RTT, instead of the existing RFC's 1 MSS per received ACK and/or per RTT; the Ssthresh value can be initialized to a specified value and/or permanently set to a very large value, such as equal to the Maximum Window Size negotiated during the TCP connection phase ... etc. While rate reductions are effected upon the "trigger events" (such as coupled/decoupled RTO-timeout packet drops, 3-DUP-ACK fast retransmission, decoupled rate decreases upon ACKs returning outside the tightly set specified interval ... etc.), the modified TCPs can be devised with such rate reductions as to ensure that the bottleneck link's utilization will subsequently be maintained at high throughputs, e.g. 100%/99%/95%/85% ..., or even at various buffering-delay levels just above 100% congestion, etc. (assuming all TCPs traversing the path are all modified TCPs). 
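The more aggressive CWND growth described above can be sketched as follows. This is an illustrative sketch: the multiplier k and the function signature are assumptions, with k = 1 corresponding to standard RFC behaviour (1 MSS per ACK in slow start, ~1 MSS per RTT in congestion avoidance).

```python
def cwnd_after_ack(cwnd, mss, ssthresh, k=1):
    """CWND (bytes) after one ACK, with aggressiveness multiplier k."""
    if cwnd < ssthresh:
        # Slow-start region: add k*MSS per ACK (standard RFC: k = 1).
        return cwnd + k * mss
    # Congestion-avoidance region: add k*MSS*MSS/cwnd per ACK, which sums
    # to roughly k*MSS per RTT over a window's worth of ACKs.
    return cwnd + max(1, k * mss * mss // cwnd)
```

Setting ssthresh permanently to the negotiated Maximum Window Size, as the text suggests, would keep such a flow in the (faster) slow-start branch throughout.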
As an illustration among the many possibilities, the modified TCPs (at either the sender or receiver or both) will here be in possession of prior knowledge of the uncongested source-receiver-source RTT or the uncongested source-receiver OTT value, or the equivalent best dynamic estimates RTTest(min)/OTTest(min) of the above; when none of the traversed links exceeds its respective 100% available bandwidth (i.e. no packet buffering occurs at any of the traversed nodes), the RTT or OTT or RTTest(min) or OTTest(min) derived from, for example, the return ACKs will now be the same as the actual uncongested RTT or uncongested OTT value (with very small random variances introduced by node processing delays/source or receiver host processing delays ... etc., referred to below as V ms; this V ms variance value will usually be an order of magnitude smaller than the other system parameters described above, such as the specified or dynamically derived B ms ... etc. Where V ms unexpectedly, and very rarely, briefly becomes very large, for example because Windows operating systems are not real-time operating systems ..., this can be treated "exceptionally" in the same manner as if it arose from/were introduced/caused by buffering delays encountered at the nodes instead). As long as the RTT or OTT or RTTest(min) or OTTest(min) values derived from e.g. the return ACKs continue to show no buffering delay encountered along the traversed paths, the modified TCP may continue either allowing transmission-rate increases/growths conservatively as in the existing RFCs, or increasing/growing more aggressively. Upon certain indicated buffering-delay levels derived from the return ACKs being exceeded, that is, the value in milliseconds of [(returning RTT or OTT) - (RTTest(min) or OTTest(min))], which indicates the cumulative total buffering delays encountered at the various nodes along the traversed paths (referred to below as C ms): for example, upon 20 ms/50 ms/100 ms ... 
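The C ms quantity defined above is a one-line computation; the sketch below (names illustrative) clamps at zero so the small V ms jitter cannot produce a negative "delay":

```python
def buffering_delay_ms(measured_ms, est_min_ms):
    """C ms: cumulative buffering delay along the traversed path, per the
    text's [(returning RTT or OTT) - (RTTest(min) or OTTest(min))] formula.
    Clamped at 0 so sub-minimum jitter never reads as negative delay."""
    return max(0.0, measured_ms - est_min_ms)
```

For instance, a returning RTT of 450 ms against RTTest(min) = 400 ms indicates C = 50 ms of cumulative buffering, crossing the middle of the example 20/50/100 ms trigger thresholds.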
etc., of the C value being exceeded, the modified TCPs can now reduce, for example, the transmission rates such that the bottleneck link's utilization will subsequently be maintained at e.g. 100%/99%/95%/85% ... etc., assuming all the TCPs traversing the bottleneck links are all modified TCPs (now knowing the latest equivalent value of the per-TCP flows' current uncongested RTT or uncongested OTT, and the C value, the required CWND decrement percentage and/or the required "pause" intervals or sequences of "pauses" can now be determined to achieve the desired required end results). The modified TCP can now, for example, halt any further TCP flow-rate increases/growths for a period of s seconds (specified or dynamically derived by algorithm) as described for example above, to then respond accordingly as described for example above or in various additionally devised ways. This particular example has the effect of achieving high utilization throughputs, beyond the mere fair friendly sharing of the existing RFCs, and also helps keep the traversed paths' cumulative buffering delays maintained at a low level correlated to the C value: this holds in the absence of other strong dominant unmodified TCP flows, in which case the modified TCP flows here will initiate/may allow the initiation of rate increases/growths within the s seconds, and then, together with all the unmodified TCP flows, eventually cause the packet-drop event: after which the unmodified TCP flows will re-enter "Slow Start", which takes a very long time to regain the previously attained transmission rates, while the modified TCP flows can retain an arbitrarily high proportion of the previously attained transmission rates/throughputs (solving the existing responsiveness problems associated especially with long-distance long-RTT fat pipes). With the modified TCP rate decreases achieving e.g. 95% subsequent bottleneck utilization, new TCP flows (and/or other new UDP flows ... etc.) 
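One plausible way to determine the required CWND decrement from the known uncongested RTT and the measured C value is sketched below. This is not the patent's literal formula but an illustrative derivation under stated assumptions: while a standing queue of C ms exists, the flow's share of the bottleneck is approximately cwnd/(rtt_min + C), so scaling CWND by rtt_min/(rtt_min + C) drains the queue, and a further target-utilization factor leaves the desired headroom.

```python
def throttled_cwnd(cwnd, rtt_min_ms, c_ms, target_util=0.95):
    """Illustrative sketch: CWND after throttling so the bottleneck settles
    at target_util (e.g. 0.95 for 95%) once the C ms queue drains.
    cwnd in bytes; rtt_min_ms is the uncongested RTT estimate."""
    return max(1, int(cwnd * (rtt_min_ms / (rtt_min_ms + c_ms)) * target_util))
```

With cwnd = 64000 bytes, rtt_min = 400 ms, C = 50 ms and a 95% target, the throttled CWND comes out at roughly 54 Kbytes, i.e. about a 15.6% decrease.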
will always be able to immediately use the 5% of available bandwidth of the bottleneck links to begin their rate increases/growths without introducing packet buffering delays along the route; further, the bottleneck links will instantly be able to accommodate sudden new additional traffic bursts of X milliseconds' equivalent of available bandwidth without dropping packets (most Internet nodes commonly have between 300 ms and 500 ms of equivalent buffer size): this is consistent with the common wisdom of preserving the established throughputs of existing flows while allowing controlled incremental growth of new additional flows. Alternatively, the modified TCP could always allow rate increases/growths conservatively as in the linear growth of existing RFCs, or more aggressively (instead of throttling back upon C ms of detected cumulative total buffering delays ... etc.), and throttle only upon packet drop "events"; this would only be of interest for maximising TCP throughput and would not be good for other real-time critical UDP flows, BUT the traversed nodes can easily ensure very good guaranteed service performance for real-time critical UDP packets by simply reserving a guaranteed minimum percentage of the available physical bandwidth for priority forwarding of UDP packets ... etc. Website servers / server farms can advantageously implement the modified TCP implementations described above. Typical websites are often optimised to approximately 30 Kbytes - 60 Kbytes for accelerated downloads (for a 56K analog modem downloading at approximately 5 Kbytes/second continuously, uninterrupted by packet drops etc., this takes approximately 6 - 12 seconds).
Immediately after the SYNC/SYNC ACK/ACK TCP connection establishment phase, the modified TCP of the sending source server will have an initial first estimate of the uncongested RTT or uncongested OTT of the TCP flow, in the form of the current observed minimum source-receiver-source RTTest(min) value or source-receiver OTTest(min) value (whether or not it is representative of the actual current uncongested RTT or uncongested OTT value). The modified TCP of the sending source server can now optionally begin sending the very first data segments/packets immediately with a CWND window size of W segments; for example, with a Maximum Segment Size MSS of approximately 1600 bytes and W = 20, it would take only 2 * RTT for the whole 60 Kbytes of content to be received by the client web browser (assuming no packets are dropped or corrupted in transmission and the smallest link bandwidth along the route is the end user's 500 Kbits/second last-mile broadband). With W = 64 it could take only 1 RTT or 1 OTT for the client web browser to fully download the 60 Kbytes of website content (typical Internet RTTs are commonly around tens to several hundreds of milliseconds, including the buffering delays introduced along the route). Where the smallest link bandwidth along the route is the end user's 56 Kbps last-mile analog dial-up modem, the above time periods would still be at least 6 seconds or 12 seconds, since transmission over the last-mile link can only be at a maximum of around 5 Kbytes per second (assuming the 30 Kbytes or 60 Kbytes of segments/packets are first buffered at the end user's last mile, e.g. at AOL web proxy servers, before being forwarded onwards to the end user's web browser over dial-up).
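The arithmetic above (2 RTTs for ~60 Kbytes with W = 20, 1 RTT with W = 64) can be sketched as a small illustrative helper. This is not part of the described TCP modification itself; it simply assumes lossless transmission with the window doubling each RTT:

```python
def rtts_to_download(content_bytes, mss_bytes, initial_window_segments):
    """Number of RTT rounds to deliver `content_bytes`, assuming the
    congestion window starts at `initial_window_segments` segments and
    doubles every RTT (slow-start style), with no loss or corruption."""
    segments_needed = -(-content_bytes // mss_bytes)  # ceiling division
    window, sent, rtts = initial_window_segments, 0, 0
    while sent < segments_needed:
        sent += window
        window *= 2
        rtts += 1
    return rtts
```

For a 60 Kbyte page with an MSS of 1600 bytes, `rtts_to_download(60 * 1024, 1600, 20)` gives 2 RTTs and `rtts_to_download(60 * 1024, 1600, 64)` gives 1 RTT, matching the text.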
Even if, in the worst case, the initial CWND window value of 64 or 20 MSS segments/packets immediately caused buffer overflow and segments/packets were dropped at some bottleneck link, the modified TCP here can react accordingly very quickly (much faster than the existing RFCs' default minimum floor reaction time of 1 second) in the ways briefly described/illustrated above, for example rate reduction to ensure specified levels of subsequent bottleneck link utilization/throughput (in place of the existing RFCs' rate halving and the resulting extended periods of bandwidth under-utilization), and/or more aggressive controlled subsequent rate increases/growths, and/or congestion avoidance with more controlled buffering delay levels (for example "waiting a number of seconds" before allowing rate increases/growths ... etc., instead of the existing RFCs' "wait for packet drop"-only scheme) ... etc. It is noted that where the modified TCP, or the modified TCP for web servers, needs to be implemented in the form of Monitor Software / TCP Proxy ...
etc., (for example without direct access to the host TCP stack source code for modification), this will essentially require the TCP Monitor/Proxy Software residing on the sending source servers to "interfere with the ACKs" whenever the resident sending source server's TCP stack is required to be driven to more aggressive controlled increases in transmission rate / CWND window size, and/or to interfere with small or zero receiver window size update packets whenever the resident sending source server's TCP stack is required to temporarily halt transmissions or decrease transmission rates, and/or for the Monitor Software to effect equivalent transmission rate decreases by "pausing" / "continuously pausing" (and/or allowing 1 or a small number of packets to be sent during each pause interval) the onward forwarding of intercepted TCP-originated packets, and/or to keep a full window's worth of copies of all current data packets/segments sent by the resident host TCP stack so as to then perform all coupled or decoupled RTO retransmissions / 3 DUP ACK fast retransmissions, relieving the resident host TCP stack of all these responsibilities, and/or to keep multiple full windows' worth of copies of all current data packets/segments sent by the resident host TCP stack, thus allowing multiple windows' worth of segments/packets to be elicited from the resident host TCP stack within a single RTT when the Monitor Software performs "ACK interference" upon the resident host TCP stack to effect more aggressive controlled rate increases/growths, and/or when using Divisional ACK / multiple DUP ACK / Optimistic ACK techniques to do so, and/or examining incoming ACK packets from the network and/or examining their RTT/OTT so as to react accordingly, which includes modifying various fields (ACK Number, Sequence Number, Timestamp values, various flags, advertised window size, etc.) before forwarding them to the resident host TCP stack, or even discarding them,
and/or etc., as described in the various preceding methods/sub-component methods in the body of the description. It is noted here that the TCP Monitor/Proxy Software ... etc., could even keep the resident host's effective transmission window and/or CWND permanently set to a certain required size, or even to the negotiated Maximum Window size at all times, with the aforementioned combinations of techniques/methods/sub-component methods, allowing the transmission rates to be controlled solely by "pause" / "continuous pause" and/or by allowing 1 single or a fixed small number of packets to be sent during each pause interval to act as a "probe". (Immediately after the SYNC/SYNC ACK/ACK TCP connection establishment phase, the modified TCP of the sending source server may instead start sending the very first data segments/packets immediately with the existing RFC Slow Start CWND window size of 1 MSS segment, but this may now take many RTTs to finish transferring the content, of the order of tens of seconds to minutes, as in end users' typical daily experience.) (B1) Receiver-based monitoring of the latest uncongested RTTest(min) and/or the latest uncongested OTTest(min) ... etc., to detect the onset of packets beginning to be buffered and/or packet loss, in proprietary networks such as proprietary LAN/WAN/Internet and/or the external Internet.
(This follows sufficiently directly from the preceding receiver-based methods/sub-component methods and the various methods/sub-component methods described in the sections here and in various parts of the Description Body, using remote Divisional ACKs / multiple DUP ACKs / Optimistic ACKs, and/or window size updates of various sizes to cause "pause", and/or producing "do nothing" ACK responses using the replicated packet method, and/or 3 DUP ACKs to trigger fast retransmission so as to pre-empt RTO retransmissions, and ... etc. See the previous section on implementation of TCP modifications to work over the external Internet.)
As an example, with the Timestamp option negotiated during the TCP connection establishment phase, the modified receiver TCP or Monitor Software can now derive the equivalent estimate of the current one-way trip time of arriving packets over the source-receiver path, that is, the current latest observed OTTest(min). The total cumulative buffering delays, if any, encountered by any arriving packet can then be derived by subtracting OTTest(min) from the arriving packet's OTT (ignoring any usually very small random variance introduced by fluctuations in the nodes' packet forwarding/processing times). It is preferable that the Selective Acknowledgement option be enabled and the Delayed Acknowledgement option be disabled (for example via the host PC's TCP/IP registry entry settings, but these are not strict requirements at all). The modified TCP or Monitor Software, now armed with the equivalent estimate of the current buffering delay levels and the uncongested OTT of the uncongested source-receiver path, will be in a position to react accordingly (causing the sending source TCP to "pause" and/or "continuously pause" with 1 single packet send allowed per pause interval, and/or "not pause", and/or increasing the CWND size via Divisional ACKs / multiple DUP ACKs / Optimistic ACKs, and/or pre-empting RTO timeouts via fast retransmission from 3 early DUP ACKs, and/or ... etc.), as desired to achieve the specified maximum bandwidth utilization/throughput criteria while maintaining fair friendly sharing.
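The OTTest(min) bookkeeping just described can be sketched as a minimal receiver-side tracker. This is an illustrative sketch, not the patent's implementation: it assumes each arriving packet carries a sender timestamp (as with the TCP Timestamp option), and note that the sender and receiver clocks need not be synchronised, since any constant clock offset cancels in the subtraction of the running minimum:

```python
class OttMonitor:
    """Receiver-side tracker of the minimum observed one-way trip time
    (OTTest(min)) and of each arrival's cumulative buffering delay."""

    def __init__(self):
        self.ott_min_ms = None  # latest observed OTTest(min)

    def on_packet(self, sender_ts_ms, recv_ts_ms):
        """Returns the estimated total cumulative buffering delay (ms)
        encountered by this packet: OTT - OTTest(min)."""
        ott = recv_ts_ms - sender_ts_ms  # offset by clock skew, which cancels
        if self.ott_min_ms is None or ott < self.ott_min_ms:
            self.ott_min_ms = ott
        return ott - self.ott_min_ms
```

A packet whose OTT equals the running minimum yields a delay estimate of 0 (uncongested path); any excess over the minimum is attributed to queuing along the route.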
The example immediately above can be further simplified so as not to require any use of the Timestamp option at all (i.e. there is no need to derive or make use of the arrival OTT value, nor the OTTest(min) value, nor the derived total cumulative encountered buffering delay value whatsoever): the modified receiver TCP or Monitor Software can instead simply wait a specified interval of W milliseconds (for example 250 ms) for the next packet to arrive, measured from the arrival time of the immediately preceding received packet, and if it does not turn up within the W milliseconds then treat this as a "trigger event" (most likely the next packet was dropped through buffer overflow), thereupon immediately reacting accordingly (remotely causing the sending source TCP to "pause" and/or "continuously pause" with 1 single packet send allowed per pause interval, and/or "not pause", and/or increasing the CWND size via Divisional ACKs / multiple DUP ACKs / Optimistic ACKs, and/or pre-empting the RTO timeout via fast retransmission from the 3 early DUP ACKs, and/or etc.), as desired to achieve the specified maximum bandwidth utilization/throughput criteria while maintaining fair friendly sharing (but more aggressively than the example immediately above). It should be noted here that where a packet encounters buffering delays of for example 300 ms at each of 3 different nodes A/B/C, and subsequently an excess-flow congestion buffer overflow drop at a further node D (with, for example, a buffering capacity equivalent to 400 ms) along the route, a "pause" of for example 250 ms at the sending source TCP would not only reduce the buffer congestion level at node D to just 150 ms but would also similarly reduce the buffer levels at each of the nodes A/B/C to just 50 ms each.
Whereas an algorithmically derived or specified "pause" interval value of 450 ms would certainly fully purge all the buffered packets completely at each of the nodes A/B/C/D (i.e. all now totally uncongested, with no packets buffered whatsoever). However, the immediately preceding example, armed with knowledge of OTT and OTTest(min) and the derived cumulative encountered buffering delays, can thus react with a finer level of control based on knowledge of those values, compared with this present further-simplified example, which can mainly react only after buffer overflow packet drop events (it is further noted that even when all the buffers at all the nodes (assuming 400 ms equivalent intermediate buffering capacity each) are consistently getting ever closer to full but have not yet overflowed, the packet immediately following the immediately previously received packet will still be arriving within e.g. 50 ms / 100 ms / 200 ms / 250 ms ... etc., of its immediately preceding packet). It is preferred to keep track of the smallest past observed inter-arrival interval E(L) for a next packet of length L, for L = 1 up to the negotiated maximum segment size MSS, measured from the last received packet (of any length); this gives the knowledge / equivalent estimate of the serialisation delay for a single packet of length L to completely exit onto the smallest-bandwidth link's transmission medium along the route (for example, usually the end user's 56 Kbps last-mile telephone dial-up or 500 Kbps broadband; see also pages 192-195 of the Description Body). The serialisation delay E(L) is expected to be linearly proportional to the packet length L.
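The E(L) bookkeeping above can be sketched as follows. This is an illustrative sketch only; the class name and the linear-scaling fallback for unseen lengths (justified by the stated linear proportionality of E(L) to L) are assumptions, not part of the described method:

```python
class IntervalTable:
    """Tracks, per packet length L (bytes), the smallest observed interval
    between a packet of length L and the previous arrival of any length.
    E(L) approximates the serialisation delay of an L-byte packet on the
    smallest-bandwidth link, expected to be linear in L."""

    def __init__(self):
        self.e = {}  # length -> smallest observed interval (ms)

    def record(self, length, interval_ms):
        if length not in self.e or interval_ms < self.e[length]:
            self.e[length] = interval_ms

    def estimate(self, length):
        """E(L) for a length: recorded value if known, otherwise scaled
        linearly from the nearest recorded length (None if no data yet)."""
        if length in self.e:
            return self.e[length]
        if not self.e:
            return None
        nearest = min(self.e, key=lambda k: abs(k - length))
        return self.e[nearest] * length / nearest
```

For example, after recording intervals of 80 ms and then 70 ms for 500-byte packets, E(500) is kept as the minimum 70 ms, and E(600) would be estimated as 70 * 600 / 500 = 84 ms.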
We can now specify W milliseconds such that the modified TCP or Monitor Software will only "trigger" events to react accordingly when e.g. (W milliseconds + E(L) of a maximum negotiated segment size MSS length packet) have elapsed without the packet arriving, or react accordingly after e.g. just W milliseconds if E(L) of the maximum negotiated segment size MSS length is assumed to have already been taken into account in the derivation/specification of the value of W. As another further simplified example among many, a simplified modified-TCP draft design is described, implemented in Monitor Software using inter-packet arrival interval techniques (which could be further modified/adapted, and could also be implemented directly within TCP itself instead of Monitor Software), giving better performance over the external Internet, for example much faster web page downloads, ftp downloads etc.: 1. Whenever a TCP packet from another sender is received, check the Source Address and Port against the per-flow TCP table; IF NOT present, create a new per-flow TCP TCB with various parameters (THERE IS NO NEED TO MAINTAIN THE EARLIER SENT-TIME / SEQUENCE NUMBER TABLE ENTRIES FOR ALL INTERCEPTED PACKETS): - LOCAL SYSTEM TIME RECEIVED of the last packet (received from the remote sender, whether a pure ACK or a regular data packet), the advertised window size of the last receiver packet (sent by the local MSTCP to the remote sender), the ACK Number of the last receiver packet, i.e. the next expected Sequence Number from the remote sender (sent by the local MSTCP to the remote sender; this requires inspection of both incoming and outgoing packets per flow, and will now enable the per-flow TCP table entry to be removed immediately upon the FIN / FIN ACK rather than only after the usual 120 minutes of inactivity) ... etc.
(Optional) Upon completion of the SYNC/SYNC ACK, immediately set the remote sender's CWND to e.g. 64 Kbytes (user-specified or dynamically derived by algorithm; it could for example also be scaled smaller or larger depending on the end user's last-mile link bandwidth capacity). When set to for example 64K (which is the usual default maximum negotiated window size unless the window scaling option is selected), this may allow the contents of the remote external Internet website to be downloaded within just a single RTT, compared with the usual tens of seconds experienced. This is preferably done by e.g. 15 immediate DUP ACKs with e.g. ACK Number = the remote sender's initial Sequence Number + 1; Divisional ACKs may not work well since some TCPs increment CWND only by the number of bytes acknowledged, and the behaviour under Optimistic ACKs may not be identical across all TCPs either. Note: an alternative would be to wait for the first data packet received from the remote sender and then generate e.g. 15 DUP ACKs with the ACK Number set to the same Sequence Number just received from the remote sender (at a cost of just 1 byte of unnecessary retransmission), or to use Divisional ACKs. TCP uses a three-way handshake procedure to establish a connection. A connection is established by the initiating side sending a segment with the SYN (synchronise) flag set and the proposed initial sequence number in the sequence number field (seq = X). The remote end then returns a segment with both the SYN and ACK flags set, with the sequence number field set to its own assigned value for the reverse direction (seq = Y) and an acknowledgement field of X + 1 (ack = X + 1). On receiving this, the initiating side takes note of Y and returns a segment with just the ACK flag set and an acknowledgement field of Y + 1. 2.
If for example 300 ms elapse (user-specified or dynamically derived by algorithm) without the next packet being received, then: - the software only needs, upon the next expected sequence number not arriving within e.g. 300 ms of the last previously received packet, to generate 3 DUP ACKs with the ACK Number set to the next expected sequence number that has not arrived, at the same time carrying a window update of e.g. 1800 bytes within the 3 DUP ACKs (equivalent to "pausing" the sender + 1 packet); keep sending the 3 DUP ACKs' window update of 1800 bytes, incremented by 1800 bytes each time, if e.g. a further 100 ms elapse without receiving any pure ACK or regular data packet; BUT if any ACK or any regular data packet at all is received, THEN send the USUAL single window update (not the 3 DUP ACKs) restoring the previous window size (with the ACK Number field set to the "last recorded" "largest" ACK number sent from the local MSTCP to the remote, or -1), repeatedly every 100 ms until any ACK or next regular data packet is again received from the remote, THEN repeat the preceding e.g. 300 ms expiry detection loop from the very start of Step 2 above (optionally, first at this point before re-entering the loop, a fixed number of Divisional ACKs / DUP ACKs / Optimistic ACK techniques here could be used to set the sending source's CWND size e.g. to the maximum negotiated window size of 64 Kbytes / 32 Kbytes, or e.g. to increase the sending source's CWND size by 16 DUP ACKs etc.). It is noted here that 3 DUP ACKs could also be sent in place of the single window update packet, but after each further 100 ms elapses the single window update ACK packets would have to be replaced in their entirety by the 3 DUP ACK window update packets; of course, an alternative could also be any window update packet whatsoever, for example a DUP sequence number window update packet etc.
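The timeout/escalation loop of Step 2 can be sketched as a simplified single-flow state machine. This is an illustrative sketch under stated assumptions, not the patent's implementation: the function name `monitor_step`, the state dictionary, and the action tuples are all hypothetical, packet construction is omitted, and only the core timing logic (300 ms trigger, 1800-byte "pause" window escalated every 100 ms, restore on traffic resuming) is modelled:

```python
def monitor_step(state, now_ms, packet_arrived,
                 wait_ms=300, escalate_ms=100, base_win=1800):
    """One polling step of the simplified receiver-side loop.
    Returns one of: ('none', None)         - nothing to do
                    ('dup_acks', win)      - send 3 DUP ACKs advertising win
                    ('restore', None)      - re-advertise the full window."""
    if packet_arrived:
        state['last_rx'] = now_ms
        if state.get('paused'):
            state['paused'] = False
            return ('restore', None)  # traffic resumed: restore prior window
        return ('none', None)
    if not state.get('paused'):
        if now_ms - state['last_rx'] >= wait_ms:  # trigger event
            state.update(paused=True, win=base_win, last_dup=now_ms)
            return ('dup_acks', base_win)  # "pause" the sender (+1 packet)
    elif now_ms - state['last_dup'] >= escalate_ms:
        state['win'] += base_win  # escalate the advertised window each 100 ms
        state['last_dup'] = now_ms
        return ('dup_acks', state['win'])
    return ('none', None)
```

Driving it with a quiet period shows the escalation: no action at 100 ms, a 1800-byte DUP ACK burst at 300 ms, 3600 bytes at 400 ms, then a restore once a packet arrives.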
Several notes on some sub-component techniques that can be used: - starting at the first packet received after the SYNC/SYNC ACK TCP connection establishment, if the present observed RTT exceeds the current last recorded RTTest(min), or the present observed OTT exceeds the current last recorded OTTest(min), by more than a reasonable total cumulative buffering delay (for example as caused by a prolonged halt/gap in source packet generation), then ignore this occurrence and do not cause a "trigger event". - transmit rate decreases via a CWND size reduction percentage of for example [(present observed RTT - current last recorded RTTest(min), or present observed OTT - current last recorded OTTest(min)) + T ms] / present observed RTT or OTT; note here that T = 0 ms implies making the subsequent bottleneck link throughput 100% of the available bandwidth; and/or a pause interval set to [(present observed RTT - current last recorded RTTest(min), or present observed OTT - current last recorded OTTest(min)) + T ms]. - distinguish between the subnet addresses of the internal proprietary network and the external Internet, to activate the corresponding appropriate methods/algorithms. - inter-packet arrival techniques can be adapted for use, as can "Synchronisation Packets" techniques. - available bandwidth / link probing techniques, for example pathchar / pipechar / pathchirp etc., can be deployed at the endpoints to derive finer levels of knowledge of the route/nodes/traversed links, so as to react better accordingly. - the user-entered external Internet connection speed can be used to enable maximum window size negotiation, for example 5 Kbytes on telephone dial-up, though the ISP may still buffer 64 Kbytes and feed it down the user's 56 Kbps telephone dial-up at e.g. 5 Kbytes per second, which can be very convenient for example when the traversed route exhibits long RTTs or OTTs.
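The CWND reduction percentage and pause-interval expressions in the second note above translate directly into code. A minimal sketch (the function names are illustrative); T = 0 targets 100% subsequent bottleneck utilisation, while T > 0 over-corrects to drain buffered packets:

```python
def cwnd_reduction_fraction(rtt_obs_ms, rtt_min_ms, t_ms=0.0):
    """Fraction by which to reduce CWND:
    ((RTT_observed - RTTest(min)) + T) / RTT_observed.
    The difference term is the cumulative buffering delay encountered."""
    return ((rtt_obs_ms - rtt_min_ms) + t_ms) / rtt_obs_ms

def pause_interval_ms(rtt_obs_ms, rtt_min_ms, t_ms=0.0):
    """Equivalent 'pause' interval: (RTT_observed - RTTest(min)) + T."""
    return (rtt_obs_ms - rtt_min_ms) + t_ms
```

For instance, with an observed RTT of 200 ms against an RTTest(min) of 150 ms and T = 0, CWND would be reduced by 25% (or the flow "paused" for 50 ms), just enough to let the 50 ms of queued packets drain while keeping the bottleneck fully utilised.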
- the very fast reaction time to "pause" / reduce CWND minimises the percentage of packet drops; the "continuous pause" also very flexibly scales down the transmission rate decrease step sizes, i.e. from for example 64 Kbytes per RTT to just 40 bytes per e.g. 300 ms. - TCP is inherently inequitable towards high-RTT flows; this can be eliminated, for example using inter-packet arrival interval techniques. - several ACKs can be withheld, that is, their onward forwarding to the sending source slightly delayed, for the purpose of reducing the TCP's transmission rates/throughput. - being able to stay close to 100% utilisation/throughput of the bottleneck links' bandwidth capacity at all times, even after packet drops due to buffer overflow and/or packet drops due to physical transmission errors, allows modified TCP to approximately double the goodput/utilisation of bottleneck bandwidth compared with existing RFC TCPs, which utilise the link bandwidth capacity far less (as is very evident from the additive-increase multiplicative-decrease AIMD "sawtooth" utilisation/throughput graphs of the existing RFCs' TCP).
Further Notes and Additional Methods. The inter-packet arrival interval technique (for example 300 ms) can optionally be made active only when less than a full effective window's worth of packets has been received/sent; otherwise 300 ms could definitely elapse without new packets being received, for example when the OTT or RTT > e.g. 300 ms (for the return ACKs arriving back at the sender); one may also want to check whether (last received sequence number - last ACK number sent) is for example > or < or = the current effective window size. It may be desirable to keep sending the 3 + DupNum DUP ACKs for e.g. 500 ms after the SYNC/SYNC ACK/ACK (or after the first 1 or 2 regular data packets received) so that the remote server's retransmission timeout does not reset the CWND and/or SSthresh to 1 or 2 MSS. The sender TCP may or may not want to apply the algorithm during the initial 64 Kbytes of packet data transfer if for example (return ACK RTT for the first regular data packet sent - return ACK RTT for the sent SYNC) > C ms, e.g. 100 ms (owing to a very sudden increase in the congestion level of the traversed route). Refined specifications: first adjust the registry entries, much preferably enabling SACK and disabling Delayed Acknowledgement.
Command-line entry parameters: - WaitTimeStamp (ms) - inter-packet arrival interval threshold for pre-empting network congestion drops; - PauseTimeStamp (ms) - remote server pause interval upon congestion; - DupNum - the remote server, during the 3 DUP ACK fast retransmission phase, will additionally increase the CWND size for each additional DUP ACK received; this technique is used by sending DupNum further DUP ACKs to increase the CWND; - Offset - 0 or 1; it is not entirely certain whether the ACK Number field of the DUP ACKs will work if set just to the last recorded updated ACK number (that is, the last largest ACK number value sent by the receiver MSTCP to the remote server) or works only after subtracting 1 byte. 1. Procedure for processing outgoing TCP packets (packets from our MSTCP to the remote host). Create a new entry for the TCP connection for this packet if necessary. Some variables have to be recorded: - dwACKNumber (if the ACK flag is set) - the ACK field of the TCP header; - dwSEQNumber - the TCP header Sequence Number field; - dwTCPState - this TCB variable is for our own use in tracking the TCP connection state, whatever it may be. Monitor the SYNC/SYNC ACK/ACK to record the dwMaxRcv Window Size from the third ACK packet in the SYN/ACK sequence: the per-flow TCB is only to be created upon detection of a SYNC sent by our receiver MSTCP to the remote server (not created otherwise).
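The per-flow TCB just enumerated can be sketched as a simple record. This is an illustrative sketch only; the Python class and field names (rendering the text's dw* variables in snake_case, plus a last-received-time field from Step 1) are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class FlowTCB:
    """Per-flow TCB kept by the Monitor Software, mirroring the text's
    recorded variables (dwACKNumber, dwSEQNumber, dwTCPState, dwMaxRcv
    Window Size, and the local receive time of the last packet)."""
    dw_ack_number: int = 0     # ACK field of TCP header (when ACK flag set)
    dw_seq_number: int = 0     # TCP header Sequence Number field
    dw_tcp_state: int = 0      # private connection-state tracking variable
    dw_max_rcv_window: int = 0 # Window Size from the third handshake ACK
    last_rx_local_ms: int = 0  # local system time of last packet received
```

The entry is created only upon detecting our receiver MSTCP's SYNC to the remote server, and (per the earlier note) can be removed immediately on FIN / FIN ACK rather than after a long inactivity timeout.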
Immediately upon our sending the ACK response packet in the SYNC/SYNC ACK/ACK TCP connection establishment, even before receiving the first data packet (assuming this works to increase the remote server's CWND), generate 3 + DupNum DUP ACKs with ACK Number = dwACKNumber - Offset (dwACKNumber being the ACK number of the third ACK response packet in the SYNC/SYNC ACK/ACK TCP connection establishment sequence), and the dwMaxRcv Window Size and dwSEQNumber field values. Keep sending the 3 + DupNum DUP ACKs every WaitTimeStamp interval until the first data packet arrives (*** NOTE: Step 3 is only activated after the first data packet of the flow arrives; Step 2 is genuinely immediately active at all times). 2. Monitor incoming packets for the remote sender TCP's FIN or RST, and the local MSTCP's RST; the TCP flow then terminates immediately, or failing that after 16 seconds of total inactivity (i.e. no incoming/outgoing packets whatsoever of any kind) regardless of any ongoing process/loop activity. 3. Procedure for monitoring the TCP flows. (NOTE: while still within the sending loop of the 3 + DupNum DUP ACKs and/or window update packets, the Sequence Number and ACK Number fields should always reflect the instantaneous "largest" last-sent ACK number, so that smaller MSTCP retransmission ACK numbers are ignored, and likewise the "largest" last-sent Sequence Number of the local receiver MSTCP.) If the connection is established and WaitTimeStamp milliseconds expire without the next packet being received from the remote host by our MSTCP for any TCP flow, then send 3 + DupNum DUP ACKs one after another in rapid succession, advertising a window size of zero bytes and with ACK Numbers = the last updated (previously recorded) dwACKNumber less the Offset field value, and the dwSEQNumber Sequence Number field value.
Keep sending the above 3 + DupNum DUP ACKs every 100 ms until any ACK or regular data packet is again received from the remote host, or until PauseTimeStamp milliseconds have now passed without a next packet being received, whichever happens first (note: any undelivered remainder of the 3 + DupNum DUP ACKs should now stop immediately upon the next packet or upon PauseTimeStamp elapsing); THEN repeatedly keep sending purely a single window size update (with the Acknowledgement Number field set to dwACKNumber - Offset, NOT DUP ACKs, etc., and the dwSEQNumber field value) of size = dwMaxRcv Window Size every 50 ms until the next normal data packet (not a pure ACK) again arrives from the remote host, whereupon this re-enters the loop at the start of Step 3 above (that is, again waiting WaitTimeStamp without receiving a remote host packet before "pausing" the remote server ... etc.). Broadband networks (even over international backbone transport) have very low loss rates and very low congestion. Http flows (port 80 signature) should be enabled to, for example, send a complete 64 Kbytes of content in e.g. 1 RTT. Even if the SYNC/SYNC ACK/ACK phase encounters retransmission (RFC default 1 second ...), this only penalises the use of the initial 64 Kbytes CWND, since the flows through the bottleneck link have now probably halved their rates ... one may perhaps want to pace (drive the sending rates at 1 packet per R ms so that the 64 Kbytes are sent evenly spread over 1 second ...), and thereby, from the inter-return-ACK arrival elapsed times, e.g. 100 or 300 ms etc. (if a Sequence Number is sent and the corresponding expected return ACK does not arrive after the elapsed interval ... delayed acknowledgement should not be used, though adjustment can be made for delayed acknowledgement if it is used ...), then "pause immediately" upon the "detected" trigger events (usually packet drops ...)
within RTT + (for example 100 ms or 300 ms), instead of the RFC default of 1 second; this avoids sending packets unnecessarily when they are likely to be dropped; an initial CWND of 64 Kbytes would be a good choice, coping well with both 56K last-mile and broadband media physical line speeds. Additionally, from the minimum value of the recorded inter-return-ACK arrival intervals ... etc., the physical line speed of the last-mile medium (56K, broadband ... etc.) can usefully be derived unambiguously. The receiver may also wish to send the 3 + DupNum DUP ACKs (with the ACK Number field set to the last recorded largest outgoing ACK number sent) whenever it detects the local MSTCP, of its own usual accord, sending packets with an ACK Number field = < the last recorded largest Sequence Number received from the remote TCP (i.e. for example a "gap" in the received Sequence Numbers ... etc.), or upon the remote TCP's retransmission timeout occurring (for example the return ACKs or the sent 3 + DupNum DUP ACKs were lost ... etc.), to increase the remote CWND again (the remote CWND now having fallen back to 1 or 2 MSS after the timeout). A new alternative to the existing TCP Congestion Control would be: 1. The Sender TCP Window Size and the Receiver TCP Window Size are initialised to an "arbitrary" large value using a scale factor of 0-14, equal to e.g. 2^30 (1 Gigabyte), for example during the TCP connection negotiation using the Window Scale option (e.g. 64K + window scale) (scale factor 0 = no scale option required to be established; see RFC 1323). 2. The receiver TCP (or the Receiver Monitor Software etc.), upon the SYNC/SYNC ACK acknowledgement, then ACKs with a window size of e.g. 4 Kbytes / 16 Kbytes / 64 Kbytes or W1 Kbytes etc
.; upon receiving 4 Kbytes / 16 Kbytes / 64 Kbytes, or any specified amount W1 or fraction of W1 Kbytes, it then increases the Advertised Receiver Window Size to W2 Kbytes, for example N2 * (4 Kbytes / 16 Kbytes / 64 Kbytes or W1 Kbytes etc.), where N2 is a factor of e.g. 1.5 / 2.0 / 3.5 / 5.0 etc., or algorithmically derived, and so on for W3, W4 ... Wn etc., until the data communications are finished (the total being less than 2^30, that is, 1 Gbyte).
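The progressive window growth W1, W2 = N2*W1, ... Wn described above can be sketched as a small generator. An illustrative sketch only (the function name is an assumption); it simply enumerates the successively enlarged advertised receiver window sizes below the 2^30 cap:

```python
def window_schedule(w1_bytes, n2, limit=2 ** 30):
    """Yield the successive advertised receiver window sizes
    W1, W2 = N2*W1, W3 = N2*W2, ... while below the 2^30 (1 Gbyte)
    maximum reachable with window scaling (RFC 1323, scale 0-14)."""
    w = float(w1_bytes)
    while w < limit:
        yield int(w)
        w *= n2
```

For example, starting from W1 = 4 Kbytes with N2 = 2.0, the schedule runs 4096, 8192, 16384, ... with the receiver advancing to the next value each time the previously advertised amount has been received.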
It is noted that the Receiver-based Monitor Software etc., can modify the intercepted outgoing receiver MSTCP packets to alter the Advertised Receiver Window sizes (before forwarding the modified packet to the remote sender TCP), thereby achieving the new TCP congestion control method based solely on the continuously increased Advertised Receiver Window Size; and/or the Sender TCP (or Sender Monitor Software etc.), upon SYNC then SYNC ACK with a window size of e.g. 4 Kbytes / 16 Kbytes / 64 Kbytes or W1 Kbytes etc., upon receipt of the return ACKs acknowledging 4 Kbytes / 16 Kbytes / 64 Kbytes or any specified amount W1 or fraction of W1 Kbytes, then increases the Sender Window Size to W2 Kbytes, for example N2 * (4 Kbytes / 16 Kbytes / 64 Kbytes or W1 Kbytes etc.), where N2 is a factor of e.g. 1.5 / 2.0 / 3.5 / 5.0 etc., or algorithmically derived, and so on for W3, W4 ... Wn etc., until the data communications are finished (the total being less than 2^30, that is, 1 Gbyte; if exceeded, perhaps wrap the window size around at for example a sequence number wrap-around, or continue with a new TCP connection etc.). The Sender-based Monitor etc., can modify the intercepted incoming packets from the remote receiver to alter the Advertised Receiver Window sizes (before forwarding the modified packet to the Sender TCP), thereby achieving the new TCP congestion control method based solely on the continuously increased Advertised Receiver Window Size. It is also noted that TCP can be symmetric, either end being both Sender and Receiver, i.e. the above Method then needs to be implemented bidirectionally. The method will allow a more flexible and finer arbitrary variety of control/regulation of packet transmissions, while (if required) retaining all the existing TCP congestion/error control mechanisms (or corresponding similar mechanisms offered)
), such as slow start / linear increment of congestion / fast retransmission control of 3 DUP ACKs / standby interval etc for example instead of the previous method of sending 3 + DupNum of DUP ACKs (or divisional or technical ACKs) of SACK optimists etc.) to increase CWND (with for example accompanying detriment to the value of SSthresh in the initial fast retransmission, the semantics of TCP from end to end if the optimistic ACKs are used, etc.), the same purpose and more can be achieved better (for example increase the value of the window size announced for example by 3 + DupNum of DUP ACKs etc., without accompanying disadvantages) The CWND of the emitter must be micialized to the desired initial value 4Kbytes / 16Kbytes / 64Kbytes or WKbytes etc., or the receiver can send for example 3 + DupNum of the DUP ACKs or a series of these DUP ACKs at various times or optimistic ACKs etc to increase the CWND (the existing RFC 2414/3390 already allows an initial value of CWND of 4 Kbytes) , in which case there is no need to increase CWND). Existing servers on the Internet currently already adjust SSthresh to an arbitrary large value (for example = TCP window size value) that will allow the rapid exponential increase of the CWND value, however in the absence of the large SSthresh setting the receiver can send a large number of for example 3 + DUP ACK DupNum to cause the linear increase of CWND (for example, 1,000 ACK of DUP = 40 Kbytes = 320 Kbits which can all be sent under one second with broadband, to increase CWND to Mbytes by assuming SMSS of 1 Kbytes or to increase CWND to 16 Mbytes by scaling the window factor 16). It is noted that with the window factor scaled to for example 16, the minimum window size increase resolution will be 16 bytes, that is, it is not possible to increment by saying 5/8/15 etc bytes. 
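The cost/benefit arithmetic of the 1,000 DUP ACKs example above can be checked as follows (assuming a 40-byte header-only ACK, an SMSS of 1 Kbyte, and the text's simplification that each DUP ACK inflates CWND by one SMSS; function and constant names are ours):

```python
ACK_BYTES = 40   # header-only ACK segment, no data
SMSS = 1024      # assumed sender maximum segment size, 1 Kbyte

def dup_ack_cost_and_gain(n_dup_acks, smss=SMSS):
    """Upstream bits spent sending n DUP ACKs, versus the CWND inflation they
    cause under the one-SMSS-per-DUP-ACK simplification used in the text."""
    cost_bits = n_dup_acks * ACK_BYTES * 8
    cwnd_gain_bytes = n_dup_acks * smss
    return cost_bits, cwnd_gain_bytes

bits, gain = dup_ack_cost_and_gain(1000)
# 1,000 DUP ACKs cost 320,000 upstream bits (sendable well under a second on
# broadband) and grow CWND by roughly 1 Mbyte at SMSS = 1 Kbyte.
```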
With the continuously increased advertised receiver window size method, the receiver can "rate limit" the sender's packet injection rates without requiring the sender to send evenly spaced / uniformly inter-packet-delayed packets. It is noted that this method may be fully usable even without the window scale factor (for example a TCP window size of e.g. 64 Kbytes without the scaling option), since the allowable send window is "enlarged" with each returning ACK received; that is, the receiver can continuously increase / decrease / adjust the advertised receiver window size using knowledge of the triggering events of the network conditions (and/or knowledge of e.g. the last valid sequence number received / last valid acknowledgement number sent etc.) to continuously adjust rwnd (the sender's effective window size being the minimum of cwnd, rwnd, swnd), e.g. to rwnd values of 4 / 16 / 32 / 40 Kbytes etc. when network congestion is detected via the "triggering events", and enlarging rwnd, and thereby the sender's effective window size, to e.g. 48 / 56 / 64 Kbytes etc. when the network is detected to be uncongested / underutilized. It is noted that this method can be used on its own or in combination with any other methods, for example the "pause" methods. It is noted that the SYN packet method can likewise have its rwnd values continuously adjusted.
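The interception step behind this method, rewriting the 16-bit advertised-window field of an outgoing TCP segment and recomputing its checksum before forwarding, can be sketched with the incremental checksum update of RFC 1624 (an illustrative sketch only; a real monitor would sit at a raw-socket or driver shim, and the fixed header offsets assume no IP header prefix):

```python
import struct

def patch_rwnd(tcp_segment: bytes, new_rwnd: int) -> bytes:
    """Overwrite the advertised-window field (bytes 14-15 of the TCP header)
    and incrementally fix the TCP checksum (bytes 16-17) per RFC 1624:
    HC' = ~(~HC + ~m + m'), in one's-complement arithmetic."""
    old_rwnd, = struct.unpack_from('!H', tcp_segment, 14)
    old_sum,  = struct.unpack_from('!H', tcp_segment, 16)
    s = (~old_sum & 0xFFFF) + (~old_rwnd & 0xFFFF) + new_rwnd
    s = (s & 0xFFFF) + (s >> 16)        # fold carries back in
    s = (s & 0xFFFF) + (s >> 16)
    new_sum = ~s & 0xFFFF
    out = bytearray(tcp_segment)
    struct.pack_into('!H', out, 14, new_rwnd)
    struct.pack_into('!H', out, 16, new_sum)
    return bytes(out)
```

Because the update is incremental, the pseudo-header and payload need not be re-summed; patching the field back to its old value restores the original segment byte-for-byte.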
To implement the method at the receiver only, without any modification at the remote sender at all (in the initial CWND, SSthresh value settings), the receiver may choose to wait for e.g. a number of seconds, or a number of RTTs, or a number of packets to have elapsed / been received (without an intervening sender RTO timeout and/or receiver fast retransmit request; where these occur the receiver may choose to activate the method directly, just before the pending sender RTO timeout etc., thus avoiding the sender's RTO timeout) before activating the method; in this way the CWND is already sufficiently large, and therefore any fast retransmit request will leave a sufficiently high SSthresh (= CWND / 2 after all the packets already in flight before the 3 DUP ACKs fast retransmit request). Where required, or advantageous as in accessing http websites (where the complete contents are usually < 64 Kbytes), the receiver can, immediately after SYN / SYN ACK or immediately after 1 or 2 regular data packets received, then immediately increase CWND by optimistic ACKs (with the ACK number = the last valid sequence number received + e.g. 4 / 16 / 32 / 64 Kbytes etc.; this will not affect SSthresh), and at the same time establish a parallel TCP connection to the same remote IP address and same remote port number and same source IP address but a different specified source port number, where immediately after SYN / SYN ACK or immediately after 1 or 2 regular data packets received it optionally increases the sender's CWND with 3 + DupNum DUP ACKs so that the sender's CWND now = e.g. 4 / 16 / 32 / 64 Kbytes etc.
(or increases it only when the initial data packets of the original TCP were not received successfully); where the original connection successfully received all of the e.g. 4 / 16 / 32 / 64 Kbytes, the second TCP connection can then be terminated immediately by sending RST; otherwise (or simultaneously with the original TCP) any of the initial 4 / 16 / 32 / 64 Kbytes of packets / segments missing can be obtained from the second TCP connection (for example forwarded to the original receiver TCP socket by the modified software; the modified software can also record, if required, the entire packet flow in both directions, for example authentication packets if any in the original TCP connection during the first 4 / 16 / 32 / 64 Kbytes received, and inject the exact same sequence into the second parallel TCP connection during its first 4 / 16 / 32 / 64 Kbytes reception). It is noted that even if CWND is here initialized to e.g. a maximum of 64 Kbytes, the receiver can still regulate the sender's injection rates starting from e.g. 2 / 4 / 8 Kbytes etc., by initially sending an rwnd of 2 / 4 / 8 Kbytes and then incrementing and adjusting the rwnd (for example via window update packets or regular data packets) according to events.
It is noted that by waiting for e.g. the first regular data packet to be received (or more, or even immediately after receiving the sender TCP's SYN ACK), then increasing the sender's CWND via e.g. 3 + DupNum DUP ACKs with the ACK number field set to the highest valid sequence number received so far (instead of the usual last valid sequence number + 1; i.e. optionally retaining acknowledgement of the largest byte received throughout the length of the TCP session), and then using the continuously increased advertised receiver window size method (together with sufficiently large window scaling at both ends), the end-to-end TCP transmission rates are now successfully placed under total control with TCP semantics preserved (and with the "pause" method, both end TCPs can now transmit at full speed subject only to "pause" congestion control; that is, CWND, both ends' TCP window sizes, SSthresh etc. need not play any further part from some point in time onwards, once the TCP flow stabilizes; however it is preferred to use the continuous increase of rwnd starting from appropriately smaller values, building up to e.g. the full permissible physical rates or the transmission rate allowed by the current rwnd size, the flow thus growing until "stabilized"). Obviously, the sender's maximum transmission rates are dependent on at least min(swnd, cwnd, rwnd); unacknowledged sent segments decrease the swnd and acknowledged segments increase the swnd (if the swnd here is initially set to the same negotiated window size), and the continuous rwnd increase / decrease / adjust method takes this into account in its rwnd updates.
Also, now that the remote server TCP's transmission rates can be regulated by adjusting only rwnd (the remote server's cwnd, SSthresh and swnd can now always be kept at arbitrarily large or very large values), receiver-based software can dynamically regulate the remote sender's transmission rates by dynamic selection of the rwnd window update values; in this way it can rewrite all rwnd field values, in all intercepted receiver-MSTCP-generated packets destined for the remote server TCP, to the required rwnd values to regulate the sender's transmission rates (this will require recomputing the modified packets' checksums). The receiver-based software / TCP modifications (which can also be implemented as sender-based software / TCP modifications) can advantageously monitor arriving OTT values from the timestamp fields: while the OTT values remain the same as the latest OTTest(min) (or the same as the current known previous uncongested OTT) within small allowed variations (for example due to small variations in the sender OS / CPU stack processing times), the receiver-based software / TCP takes note of the final largest rwnd achieved; this gives the largest rwnd achieved so far during which the packets traversing the route encountered no buffering delays, or cumulative buffering delays of at most the same small allowed variation (and/or e.g. an additional B ms of allowed cumulative buffering delays of e.g. 0 ms / 50 ms / 100 ms etc.)
as before. Subsequently, whenever packets are dropped by congestion, the receiver-based software can advantageously / optionally set the rwnd update values (changing the rwnd field values in the intercepted packets) to this final recorded largest rwnd value as defined above; that is, on congestion drop events and/or fast retransmit events etc., the receiver continues regulating the sender's transmission rate in a sustained manner, so that the rate can be maintained at the greatest historical rates achieved by the flow under uncongested traversed-path conditions, thus maintaining near-ideal high utilization of the link bandwidths. Additionally, the receiver-based software / TCP can increase rwnd continuously (either emulating slow-start-like exponential rwnd growth and/or congestion avoidance linear growth) as long as the arriving OTT value does not exceed the latest OTTest(min) (or current uncongested OTT), that is, as long as there are no buffering delays along the route (and/or optionally decrease rwnd back down if the arriving OTT exceeds OTTest(min)); further, when the arriving OTT value then exceeds the latest known OTTest(min) or known uncongested OTT by e.g. the specified 10 ms / 50 ms / 100 ms etc. (for example due to other unmodified existing TCP flows increasing their rates even while packets are starting to be buffered, or buffered UDP traffic), the receiver-based software / TCP can now choose to allow the rwnd to increase again. It is noted that where all TCP flows along the route are these modified TCPs mentioned in the immediately preceding paragraph (the route can also conveniently allocate a guaranteed minimum portion of its bandwidths to TCP flows, and some portion to UDP etc.), these TCPs at all times will not cause buffering to be required; the route is kept almost completely uncongested / unbuffered at all times.
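The OTT-driven regulation loop above can be sketched as a small receiver-side state machine (class name, growth rule and the 5 ms tolerance are illustrative assumptions, not the patent's values):

```python
class RwndRegulator:
    """Receiver-side sketch: grow rwnd while one-way transit times (OTT) stay
    near the uncongested minimum OTTest(min), remember the largest rwnd
    achieved without buffering delay, and fall back to it on congestion."""
    def __init__(self, initial_rwnd=4096, max_rwnd=64 * 1024, tolerance_ms=5):
        self.rwnd = initial_rwnd
        self.max_rwnd = max_rwnd
        self.tolerance_ms = tolerance_ms
        self.ott_min_ms = None
        self.best_uncongested_rwnd = initial_rwnd

    def on_packet(self, ott_ms):
        if self.ott_min_ms is None or ott_ms < self.ott_min_ms:
            self.ott_min_ms = ott_ms                      # track OTTest(min)
        if ott_ms <= self.ott_min_ms + self.tolerance_ms:
            # No queueing delay seen: record this rwnd and keep growing.
            self.best_uncongested_rwnd = max(self.best_uncongested_rwnd, self.rwnd)
            self.rwnd = min(self.rwnd * 2, self.max_rwnd)  # slow-start-like growth
        return self.rwnd

    def on_congestion(self):
        # Drop / fast-retransmit event: clamp to the best rwnd seen uncongested.
        self.rwnd = self.best_uncongested_rwnd
        return self.rwnd
```

A flow whose OTT stays at the uncongested floor doubles its rwnd each update; the first congestion event returns it to the largest rwnd that never caused queueing, matching the "greatest historical uncongested rate" rule in the text.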
To ensure fair sharing allowing the growth of newly established modified TCPs when pre-existing modified TCPs have already jointly achieved full utilization of the traversed path's bandwidth, newly established TCPs can be allowed to grow their transmission rates or rwnd or cwnd until no more than e.g. an additional 100 ms of delay above OTTest(min) or RTTest(min) or their current known values, and all modified TCPs, upon experiencing e.g. an additional delay > 100 ms, will reduce their transmission rates or rwnd or cwnd etc. by a certain percentage, e.g. by 10% / 15% / 25% etc. (this favors pre-existing established flows but also allows the newly established TCPs to begin growing their transmission rates). It is noted here that there will be no congestion drops as long as all the links traversed have more than an equivalent of 100 ms of buffering capacity. Another scheme would be to allow continuous growth of the transmission rates or rwnd or cwnd etc. until packets begin to be buffered (indicated by additional delays of the latest OTT or RTT over OTTest(min) or RTTest(min)), after which the transmission rates or rwnd or cwnd are decreased back one step (thus oscillating, incrementing forward and decreasing back, around the 100% utilization level). It is also noted that the various schemes above can be implemented similarly easily as sender-based TCP. Simply allowing e.g. the transmission rates or rwnd or cwnd to grow until congestion drop events (after which the modified TCPs revert to the greatest transmission rates or rwnd or cwnd sizes achieved under totally uncongested conditions, or a percentage thereof, or simply a percentage of the present transmission rates or rwnd or cwnd sizes at the time of the congestion drop etc.) allows good coexistence with the present normal RFC TCP flows.
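The first sharing rule above reduces to a simple per-update decision (parameter names and the 5% growth step are illustrative; the 100 ms cap and 25% back-off are the text's example values):

```python
def adjust_rate(rate, base_ott_ms, current_ott_ms,
                delay_cap_ms=100, backoff=0.25, growth=1.05):
    """Sketch of the fairness rule: each flow keeps growing its transmission
    rate (or rwnd or cwnd) until queueing delay beyond the known uncongested
    OTT exceeds delay_cap_ms, then all flows back off by a fixed percentage
    (10% / 15% / 25% in the text)."""
    if current_ott_ms - base_ott_ms > delay_cap_ms:
        return rate * (1.0 - backoff)   # multiplicative decrease for everyone
    return rate * growth                # otherwise keep probing for bandwidth
```

Because every flow, old or new, backs off by the same percentage once the shared 100 ms delay budget is exceeded, established flows keep most of their share while newcomers still gain ground on each probing round.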
Where the "pause" method is incorporated, the "pause" interval can also be derived from the latest OTT or RTT value just before the congestion drops are detected and the OTTest(min) or RTTest(min) or current known uncongested OTT or RTT value; for example, if the last OTT just before the congestion drop event is 700 ms and the OTTest(min) is 200 ms, then the required "pause" interval can now be set to e.g. 500 ms (700 ms - 200 ms) to simply purge all the nodes' buffered packets, or to even more, e.g. 600 ms, or less, e.g. 400 ms, as required. An example receiver-based implementation, among the various possibilities (it is noted that the sender-based one will be similar but simpler), would simply be for the receiver to request the window scale option, for example scaling to a maximum of 256 Mbytes (the maximum possible scale is 1 Gigabyte, that is 2^14 * 64 Kbytes, i.e. the usual unscaled 16-bit window size left-shifted 14 times; here the 256 Mbytes maximum would be a window scale factor of 12, that is 2^12 * 64 Kbytes; see Google search term "window scale size", http://rdweb.cns.vt.edu/public/notes/win2k-tcpip.htm, http://support.microsoft.com/default.aspx?scid=kb:en-us:19947, http://www.netperf.org/netperf/training/netperf-talk/0207.html, http://www.ncsa.uiuc.edu/People/vwelch/net_perf/tcp_windows.html, http://ww.mokey.org/openbsd/archive/bugs/0007/msg00022.html, http://www.freesoft.org/CIE/RFC/1072/4.htm, http://www.networksorcery.com/enp/protocol/tcp/option003.htm, http://www.ehsco.com/reading/19990628ncwl.html, Google group search term "window scale size"), giving a minimum possible receiver window size resolution of 4 Kbytes (4 Kbytes incidentally corresponds to the experimental RFC's initial CWND value):
1. The remote server can choose to scale its own sender window size, but it can also simply allow the receiver to scale while choosing not to scale its own sender window size; this does not matter much (even if these negotiated window sizes are far too large for the last mile and/or first mile physical bandwidths, for example 56 Kbs / 500 Kbs etc.). NOTE: if the sender applies a similar window scale factor as the receiver, this can allow very simple ready use of this method, without any new software or modified TCP required, for example by simply setting the receiving PC's TCP window size registry value to e.g. 1 and the scale factor to e.g. 12, i.e. 2^12 (the minimum window size resolution now being approximately 4 Kbytes); in this manner the sender's effective transmission window will be limited at all times to approximately 4 Kbytes, since the receiver now advertises an rwnd of at most 4 Kbytes at all times (likewise, a receiver PC registry setting or socket buffer setting of a TCP window size value of 2 and a scale factor of 14 gives a resolution of approximately 16 Kbytes, hence 16 Kbytes * 2, that is 32 Kbytes).
2. The receiver then, where required, modifies all intercepted outgoing packets, ensuring each of their receiver window size fields at all times does not exceed a suitable upper ceiling value, e.g. 16 Kbytes for a 56 K dial-up last mile receiver, or e.g. 96 Kbytes for a 500 Kbs DSL last mile receiver etc. [This very elegant, simple arrangement will now have ensured very fast exponential sender CWND growth throughout the entire TCP session at all times, requiring only at most e.g. 6 RTTs instead of e.g. approximately 64 RTTs to reach a CWND of 64 K (it is noted that the sender's initial SSthresh is set very, very large, at the same value as the scaled receiver window size), but the sender's maximum effective transmission rates at all times will be limited by the upper ceiling value of the modified receiver window size received; the sender's sending rates at all times are always no more than those allowed by the receiver window size's upper ceiling, governed additionally by the sender's sliding window size and the "self-clocking" characteristics of the returning ACKs (it is noted that the rate of the returning ACKs reflects the smallest available bandwidth of the bottleneck link, usually the first or last mile media link). The onset of buffering delays along the route will slow the sender's BDP throughput, while mild congestion packet drops causing the receiver to request 3 DUP ACKs fast retransmits will now halve the sender's CWND and SSthresh value, which will both almost certainly continue to remain very, very large compared to the upper ceiling value of the receiver window size at all times; sustained congestion packet drops will cause the sender to undergo RTO retransmission timeout, whereupon the sender's CWND will again slow-start from e.g. 4 MSS but again grow exponentially fast. It can be seen that the sender CWNDs of all these TCP flows can now be limited yet remain, at almost all times, near the upper ceiling of their receivers' window sizes.]
3. Optionally, the receiver can regulate the sender's packet injection rates into the network by slowly incrementing the receiver window size fields of the outgoing packets; for example, immediately after TCP establishment the receiver can send a timed, evenly spaced series of e.g. 16 pure window update packets, each e.g. 62.5 ms apart over e.g. 1 second, advertising 4 Kbytes then 8 Kbytes then 12 Kbytes ... then 64 Kbytes (instead of advertising the 64 Kbytes upper ceiling window size at once, which would cause a packet burst), thus ensuring no large sudden burst of packets from the sender (it is noted that the returning ACKs, if any, during this series of window size updates will themselves increase the possible packet injection rates; the receiver may optionally reduce the window update size values taking this into consideration). The receiver can optionally modify the receiver window size field values of the packets at any time where appropriate. Similarly, these window size updates / modifications can be carried out in any desired manner of increments / decrements / adjustments at all times, possibly taking into account the latest outgoing returning ACK values sent etc. This can be useful for fetching http website contents optimally fast immediately after TCP connection establishment (i.e. after regulating the sender to send at e.g. the maximum possible physical last mile line rate of the receiver; it is noted that making the sender immediately send all the contents of e.g. 64 Kbytes within one RTT could be counterproductive).
4. Optionally, this can be implemented together with the "pause" method and/or the "inter-packet arrival" method and/or the various methods described in the previous paragraphs etc.
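The timed series of pure window update packets described in item 3 above can be generated as a simple schedule (assuming the text's example figures: 16 updates, 62.5 ms apart, stepping 4 Kbytes per update up to the 64 Kbytes ceiling; the function name is ours):

```python
def window_update_schedule(steps=16, interval_ms=62.5, step_bytes=4 * 1024):
    """Sketch of item 3: a series of evenly spaced pure window-update packets
    announcing 4 KB, 8 KB, ... 64 KB over one second, instead of advertising
    the 64 KB ceiling at once (which would trigger a packet burst).
    Returns (send_time_ms, advertised_rwnd_bytes) pairs."""
    return [(i * interval_ms, i * step_bytes) for i in range(1, steps + 1)]

schedule = window_update_schedule()
```

The last entry lands at 1000 ms advertising the full 64 Kbytes, so the sender's allowed rate ramps linearly over the first second rather than arriving as one burst.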
For example, where the uncongested RTT / OTT here is e.g. 50 ms, the "pause" method can here specify a timeout period of the uncongested RTT / OTT (or the latest estimated RTTest(min) / OTTest(min)) between the two ends plus e.g. 200 ms of buffering delays, and a "pause interval" within the timeout of e.g. 150 ms; the bottleneck link's bandwidth here can be constantly 100% utilized at all times, since the "pause" method here strives to keep the traversed route's cumulative buffer occupancy within a small interval at all times, i.e. the bottleneck link can always be 100% utilized. It is therefore noted that the sender's CWND mechanism here becomes redundant to the requirements of congestion control beyond some stage (except where the other component methods, such as the inter-packet arrival method and/or the 3 + DupNum DUP ACKs method of rapidly re-increasing the CWND size upon congestion triggering events while avoiding RTO timeout events etc., are not incorporated, in which case the CWND will continue to play a part only in probing the available network bandwidth during the very early stage exponential and/or linear growth, until achieving very large values). The connection's maximum transmission rate nonetheless remains at all times limited by the comparatively very small rwnd value, which the receiver advertises in scale-shifted format: instead of e.g. advertising a receiver rwnd value of 64 K, the TCP now advertises only 4 if the maximum scale factor of 14 is used, meaning the rwnd value of 4 left-shifted 14 places, i.e. the same as 64 K. It is noted that although both ends now allow / negotiate very large maximum scaled window sizes, the receiver TCP will still only be able to advertise its actual maximum physically available receiver window size; for example, if its maximum possible physical receiver buffering is 16 K, then the receiver window size field value advertised in
all packets generated by the receiver TCP, assuming a maximum scale factor of 14 is used, will show a maximum possible value of 1 at any time. Thereafter, upon the halving of CWND and/or the SSthresh values in 3 DUP ACKs fast retransmit / recovery, the halved CWND and/or the SSthresh values remain very large compared to rwnd: where the network stays uncongested, the sender can happily maintain transmission at maximum rates limited only by the segments / bytes available in the sliding window (dependent on the self-clocking characteristics of the returning ACKs) and/or the size of rwnd or cwnd. At maximum transmission rate, upon a 3 DUP ACKs fast retransmit request the sender will now be limited only by the segments / bytes available in the sliding window (the segments / bytes available in the sliding window will now be appropriately reduced by the proportion / number of unacknowledged sent packets in flight; but here, although CWND and SSthresh are both halved, they have no impact at all, since the halved CWND and the SSthresh will still be considerably greater than RWND or SWND); in this way the transmission rate is indeed now appropriately proportionally reduced. On RTO timeout (usually after the RFC's minimum ceiling time period of 1 second), the sender's transmission rate, governed by CWND slow-starting again from 1 to several SMSS, is reduced to a minimum; but in fact the sender will usually have sustained the same transmission rate right up to the RTO timeout, because the sender here will typically have sent a very large portion, or the full effective value, of the total window's segments / bytes before the RTO timeout. In this way many immediate, successive serial RTO timeouts would follow rapidly, caused by the series of following sent-but-unacknowledged segments / packets; and whatever the proportion / number of these congestion-dropped packets among all
the unacknowledged sent segments within the effective sliding window (even if all were dropped due to congestion), this will not reduce the sender's transmission rate before the RTO timeout event of e.g. 1 second; but the sender will have stopped all transmission during the period of e.g. 1 second before the RTO timeout. All this particular modified TCP flow's packets buffered at the intervening nodes will be purged by an equivalent amount of e.g. one second (or an equivalent amount of other flows' buffered packets), and most buffered packets of existing unmodified TCP flows will also most likely be purged by an equivalent amount of e.g. 1 second (or an equivalent amount of other flows' buffered packets), since the equivalent amount of e.g. 1 second far exceeds the nodes' usual equivalent buffering capacity of 200 ms - 500 ms; and any other flows, whether modified or not, may exhaust their RTO slightly later than the RFC minimum of one second (if their RTTs are unusually very large), helping to ensure total purging of all the buffered packets at the traversed nodes (since all flows will undergo RTO, though some may do so at slightly later times) [NOTE: this is synonymous with a large "pause" interval of 1 second].
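The "pause" interval derivation described earlier (last OTT seen just before the congestion drop, minus the known uncongested OTTest(min), optionally widened or narrowed) is simple enough to state directly (function and parameter names are ours):

```python
def pause_interval_ms(last_ott_ms, ott_min_ms, margin_ms=0):
    """Derive the 'pause' interval from the last OTT just before a congestion
    drop and the known uncongested OTTest(min): pausing for the difference
    lets the nodes' queues drain completely. margin_ms optionally purges more
    or less, e.g. +100 / -100 ms as in the text's 600 ms / 400 ms variants."""
    return max(0, last_ott_ms - ott_min_ms + margin_ms)
```

With the text's example figures, a last OTT of 700 ms against an OTTest(min) of 200 ms yields the 500 ms pause, and the margin reproduces the 600 ms and 400 ms alternatives.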
This method in its simplest mode only requires users to adjust their local PC's TCP registry parameters to use a large window scale factor, such as a scale factor of e.g. 12, while the usual 16-bit TCP window size value can be set as small or as large as required, e.g. 1 byte to 64 Kbytes: with a user PC scale factor of 12 (that is, a maximum possible scaled window size of 256 Mbytes) and a user PC TCP window size of only 1, and a remote server negotiated scale factor of e.g. 12 and remote server TCP window size of 64 Kbytes, the remote server's maximum transmission rate at any time will not exceed the user PC's window size of 4 Kbytes (1 * 2^12) per RTT (assuming any intermediate software does not intercept and modify the rwnd field values of the user PC's outgoing packets to be greater than 64 Kbytes). It is noted that the remote server's SSthresh value is usually initialized to be equal to the rwnd value negotiated during TCP connection establishment. To implement this method at the remote sender server, the remote server's TCP stack is only required to set its SSthresh values very large, for example to "infinity", and to use the window scale option in TCP connection negotiations (and/or to set its CWND value to its largest growth ever achieved, i.e. the CWND can be increased continuously, for example from the RFC initial value of 1 SMSS, but is never decreased). It is noted that use of the modified TCP can increase throughputs and reduce long ftp file transfer completion times, such as for data storage site backup applications over leased lines / DSL etc.
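The window-scale arithmetic behind these settings follows RFC 1323: the 16-bit window field is left-shifted by the negotiated scale factor (the function below is an illustrative helper, not part of any TCP stack):

```python
def effective_window(window_field_value, scale_factor):
    """RFC 1323 window scaling: the advertised 16-bit window field is
    left-shifted by the negotiated scale factor (0..14). With the settings
    described above (window field 1, scale factor 12) the sender is thereby
    limited to 4 Kbytes per RTT."""
    assert 0 <= scale_factor <= 14 and 0 <= window_field_value <= 0xFFFF
    return window_field_value << scale_factor
```

A field value of 1 at scale factor 12 yields 4096 bytes, and the absolute ceiling (65535 at factor 14) stays just under the 1 Gbyte limit mentioned earlier.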
This is because, with existing TCP, the sender always increases its transmission rates at all times, that is, the CWND increases monotonically until packets are dropped by congestion, whereupon the sender TCP aggressively reduces its transmission rate, i.e. resets the CWND to e.g. 1 SMSS and begins the very long slow climb back to the transmission rate or CWND size achieved just before the RTO (or just before receiving the 3 DUP ACKs fast retransmit requests, whereupon the sender's transmission rate, i.e. the CWND, is halved). Assuming the TCP flows do not have the 3 DUP ACKs fast retransmit mechanism enabled, the flow's transmission rate or throughput versus CWND graph here would show the well-known "saw-tooth" pattern, slowly rising linearly to the maximum and then suddenly dropping back to almost "0", repeatedly; that is, it is immediately apparent that up to half of the link's available physical bandwidth is wasted unused, whereas the modified TCP flow will exhibit a transmission rate or throughput or CWND graph of almost constant 100% utilization of the available physical link bandwidth, that is, possibly even doubling the throughput / halving the transfer completion time of the unmodified TCP flows. With the 3 DUP ACKs fast retransmit mechanism enabled, the TCP flow's graph will show a mixture of sudden drops to half the previous transmission rate and to near "0"; thus the modified TCP flows will show somewhere between 33% - 100% more throughput compared to unmodified TCP flows, possibly enabling the apparent physical bandwidths of the link to be effectively doubled instead, where the link may be leased lines / interconnected submarine optical cables / satellite / wireless media etc.
To recapitulate, the "large-scale sender window size" method of the immediately preceding paragraphs (even if neither end of the connection has any real need for such a large-scale window size) can be used immediately by PC users without requiring any software or any modification to the existing normal TCP; users can manually adjust their PC's TCP system parameters to enable a large-scale sender window size (for example, setting the TcpWindowSize and/or GlobalMaxTcpWindowSize values in the Windows 2000 TCP settings to greater than 64 Kbytes will automatically enable the window scale factor), setting Tcp1323Opts to 1 or 3 (1 enables the window scale factor without the timestamp option, 3 enables it with the timestamp option), with the window scale factor value between 1 and 14 (i.e. a multiplier of up to 2^14). The receiver TCP must allow the sender TCP to negotiate the window scale option, but the receiver TCP's own maximum window size should be kept relatively small, preferably only just large enough to fully utilize the bandwidth capacity of the "bottleneck link" of the route traversed by the IP packets (the bottleneck link is usually either the sender's first-mile medium, e.g. DSL, or the receiver's first mile, e.g. leased lines); for example, assuming the uncongested RTT between the two ends is e.g. 100 ms and remains roughly constant at this value, and the bottleneck link's bandwidth capacity is 2 Mbps, the receiver's maximum window size here should be kept/set relatively small at only e.g. 25.6 Kbytes. This ensures that the sender TCP's "effective window size" at no time exceeds 25.6 Kbytes, and the sender will thus never transmit at rates greater than 2 Mbps, even though the sender TCP's CWND can grow rapidly to reach/exceed the receiver's maximum window size of e.g. 25.6 Kbytes and is thereafter maintained at the very large values allowed by the sender's large-scale maximum window size value; this in turn ensures that packet loss/corruption events triggering fast retransmit will now almost never cause the sender TCP's halved CWND size, nor the halved SSthresh value, to sink below the receiver's maximum window size of e.g. 25.6 Kbytes. As for the much rarer packet loss events that cause RTO timeout retransmission, with the sender's CWND size reset to e.g. 1 SMSS, the sender TCP's CWND can very quickly re-attain and exceed the receiver's maximum window size of e.g. 25.6 Kbytes in only 5 RTTs of e.g. 100 ms, that is, in only 500 ms. The transmission rates graph / instantaneous throughput graph (as can be viewed using the IO traffic graph analysis facility of Ethereal, http://ethereal.com) will here show near-constant, close-to-100% utilization of the link bandwidth; that is, the graph will resemble a "square wave form" with its flat upper plateau close to the 100% link utilization level, in contrast to existing standard TCPs, which almost invariably exhibit "sawtooth" shapes whose tooth valleys dip much further below the 100% link utilization level. However, on the real-world public Internet, the RTTs between the two end points may vary by an order of magnitude over time (for example from tens of milliseconds to 200 ms) unless the end-to-end connection's RTT is guaranteed by the guaranteed RTT/bandwidth of the carrier's IP transit service level agreement; thus throttling the sender's transmission rates to the bottleneck link's bandwidth capacity via e.g. the receiver's maximum window size etc. will suffer order-of-magnitude throughput and/or "goodput" degradations during the times when these public-Internet RTTs are elongated. It is much better to set the receiver's maximum window size here to much larger values, able to accommodate these public-Internet RTT elongation scenarios; for example, where the receiver's maximum window size is now set to e.g. 8 * the earlier e.g. 25.6 Kbytes, the end-to-end throughputs and/or "goodputs" can then be kept close to 100% of the bottleneck link's bandwidth capacity at all times, assuming the RTTs never elongate beyond 8 times the uncongested RTT between the two ends. It should be noted that once the sender TCP's CWND stabilizes and no longer increases (for example, when CWND has reached the maximum sender window size), it is the ACK self-clocking feature that regulates how much the sender TCP can transmit (the TCP sliding window), that is, according to the arrival rates of the return ACKs; and the maximum rate of these return ACKs is in turn limited by the bandwidth capacity of the bottleneck link of the traversed route, that is, by how fast the sender's data can be forwarded along the bottleneck link, which is approximately equal to the bottleneck bandwidth in bytes per second (allowing for the overhead of the e.g. 40 bytes required for an IP packet header without data).
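The receiver-window sizing arithmetic above is simply the bandwidth-delay product. The following minimal sketch reproduces the example figures (the 25.6 Kbyte value corresponds to a 2.048 Mbps bottleneck); the function and parameter names are illustrative only, not taken from this specification:

```python
def receiver_max_window_bytes(bottleneck_bps, uncongested_rtt_s, rtt_headroom=1):
    """Size the receiver's maximum window to the bandwidth-delay product,
    optionally multiplied by a headroom factor for public-Internet RTT
    elongation (the text suggests e.g. 8)."""
    bdp_bytes = bottleneck_bps / 8 * uncongested_rtt_s
    return bdp_bytes * rtt_headroom

# 2.048 Mbps bottleneck, 100 ms uncongested RTT -> 25,600 bytes (25.6 Kbytes).
bdp = receiver_max_window_bytes(2_048_000, 0.100)
# With 8x headroom for RTT elongation -> 204,800 bytes.
roomy = receiver_max_window_bytes(2_048_000, 0.100, rtt_headroom=8)
```
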
When the sender TCP's CWND continues to increase exponentially in the "slow start" phase, CWND actually increases according to the number of return ACKs received during each successive RTT (not necessarily doubling exponentially during each successive RTT); that is, if the TCP's present CWND is 8 Kbytes and it sends out 8 Kbytes of data segments (assuming this is allowed by the sender's and receiver's maximum window sizes, i.e. the "effective window", with enough ACKs returned), with only 6 ACKs returning and 2 dropped in the next RTT, then CWND will now increase only to 14 Kbytes (not doubled to 16 Kbytes), which is still termed "slow start". Congestion will not arise so long as the now-increased CWND size (and thus the now-increased effective window, the increase not being matched by increases in the number of return ACKs received) remains below what would cause transmission rates exceeding what the bottleneck link's bandwidth capacity can forward. But if the transmission rates now become greater than the bottleneck link's bandwidth capacity, some transmitted packets will begin to be buffered at the bottleneck link (Internet nodes usually have approximately 200 - 400 ms equivalent of buffering capacity). At the stage when the sender's transmission rate corresponds exactly to the bottleneck link's bandwidth capacity, with CWND now "doubling" in size in the next RTT and assuming the RTT is around 100 ms, then in this next RTT an additional 100 ms equivalent of the bandwidth capacity's worth of packets needs to be buffered at the bottleneck node.
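The per-ACK growth accounting described above can be sketched as follows; this is an illustrative simplification (slow start adds one segment's worth of CWND per return ACK, so growth doubles only when every segment of the window is acknowledged):

```python
def slow_start_cwnd(cwnd_segments, acks_returned):
    """During slow start the sender adds one MSS to CWND per return ACK,
    so CWND grows by the number of ACKs actually received in the RTT."""
    return cwnd_segments + acks_returned

# The text's example with 1 Kbyte segments: an 8 Kbyte CWND sends 8 segments.
lossless = slow_start_cwnd(8, 8)  # all 8 ACKed: CWND doubles to 16
lossy = slow_start_cwnd(8, 6)     # only 6 ACKs return: CWND reaches just 14
```
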
Assuming the rates of the return ACKs over the successive RTTs are now at or around the maximum bottleneck link bandwidth capacity (i.e. the bottleneck link continues forwarding data at 100% link bandwidth utilization), then the sender's CWND will be incremented successively, by an amount equal to the bottleneck link's bandwidth capacity, in each succeeding successive RTT, each successive RTT being slightly longer than the immediately preceding RTT owing to the successive equivalent amounts of e.g. 100 ms of additional buffered packet traffic introduced by the incremented CWND (or increased effective window), until e.g. the 4th succeeding RTT, when the bottleneck node's buffers overflow, causing packets to be dropped. The sender will then fast retransmit the dropped packets on reception of 3 DUP ACKs from the receiver TCP, in which case even the halved CWND and the halved SSthresh value will almost invariably remain many times larger than the relatively small receiver maximum window size value; in this way the sender TCP will continue transmitting at the same rates, undiminished by these packet drop events, and with the ACKs returning at rates equal to the bottleneck link's bandwidth capacity, the sender's transmission rate will now continue at exactly the maximum rate equal to the bottleneck link's bandwidth capacity (assuming this is equal to or smaller than what the maximum receiver window size allows).
It is noted that the sender may also RTO timeout retransmit dropped packets, though only after the RFC minimum default time period of one second, if they were not already taken care of by the receiver's 3 DUP ACK fast retransmit request, but these events will be very, very rare; in which case the sender's CWND will still increase back very rapidly, exponentially, in only a few RTTs, to re-attain/exceed the relatively small receiver maximum window size value (aided by the large "arbitrary" SSthresh value). The sender's CWND here will grow "exponentially" to very large values (tending toward the arbitrarily large "maintained" SSthresh value) despite the periodic fast retransmits halving the CWND and SSthresh values. It is pointed out that once the sender TCP's CWND has reached/exceeded the receiver's maximum window size, it is predominantly the self-clocking rates of the return ACKs received thereafter, whose total rate is at most equal to the bottleneck link's bandwidth capacity at any time, that will dictate the sender TCP's transmission rates from then on.
Variations in the far-end TCP's responsiveness in generating the response ACKs can reduce the rates of the return ACKs to below the bottleneck link's bandwidth capacity, and buffering delays at the intervening nodes along the traversed route (RTT elongations) etc. can reduce the total rates of the return ACKs of all the TCP flows traversing the bottleneck link to below/less than 100% of the bottleneck link's bandwidth capacity (thus setting the receiver's maximum window size to be greater than the minimum size required to fully utilize 100% of the bottleneck link's bandwidth capacity, assuming uncongested RTTs throughout the TCP session, by enough to offset these variations, will enable 100% utilization of the bottleneck link's bandwidth at all times despite the variations). Here it can be seen that the sender's maximum window size and CWND values can be arbitrarily large at all times (maintained with the aid of the large "arbitrary" SSthresh value), and with a relatively small receiver maximum window size value, the end-to-end TCP connection using this "not required" but intentional large-scale sender window size and relatively small receiver maximum window method will here tend toward transmission rates pegged equal to the bottleneck link's bandwidth capacity; that is, the transmission rate or throughput graphs here will exhibit a "square wave form" at the 100% link utilization level. Conventional file transport technologies such as FTP dramatically reduce the data rate in response to any packet loss, and cannot sustain long-term throughput at the capacity of high-speed links. For example, a single FTP file transfer over an OC-3 link (155 Mbps) in a metropolitan area network stabilizes at 22 Mbps, assuming a packet loss rate of 0.1% and a delay of 10 ms.
It is possible to add simple code to simply check the inter-ACK arrival intervals of the return ACKs received by the sender from the receiver TCP (an interval greater than e.g. 300 ms may also be caused by physical errors, not necessarily congestion drops; both are captured here), for the sender's local intercept software to generate 3 + DupNum DUP ACKs (with ACK number = the ACK number received from the receiver TCP, and/or the sequence number field equal to the last sequence number received from the receiver TCP) to the local MSTCP, to pre-empt the local MSTCP's timeout-induced transmission rate reductions; it is well known that even 0.1% physical-error corruption (without congestion) of the transmitted packets will severely limit throughput, by e.g. 80%, see http://www.asperasoft.com/technology-faspvftp.html. Configuration: 1. Simply incorporate interception of incoming/outgoing packets and the TCB of the TCP flows. 2. Record the local MSTCP's last largest sequence number field sent as the "last sent sequence number". 3. Record the last largest incoming ACK number field received as the "last received ACK number" (and the last received packet's sequence number), together with the "last packet received time", and a copy of this complete "last received packet". 4. If present time minus last packet received time > e.g. 300 ms, and last sent sequence number + 1 > last received ACK number, then send 3 copies of the "lastrcvpkt" (easiest: there is no need to compute the checksum for the generated packet, and any duplicate sequence number / duplicate data present in lastrcvpkt will simply be ignored by the local MSTCP, so long as the 3 DUP ACK fast retransmit is triggered). 5. On initialization of the software, edit the TCP registry (and/or optionally each individual application's own socket buffer size) to ensure every new TCP connection requests window scale factor 14 and 64K TCP window size (i.e. maximum 1 Gbyte), with SACK preferably enabled and delayed ACK preferably disabled. [References: Google search terms "tune large scale window size socket buffer" (or similar related terms), www.psc.edu/networking/perf_tune.html, publib.boulder.ibm.com/infocenter/pseries/topic/com.ibm.aix.doc/aixbman/prftungd/2365a83.htm, www.dslnuts.com/2kxp.shtml, http://www.ces.net/doc/2003/research/qos.html, forum.java.sun.com/thread.jspa?threadID=596030&messageID=3165552, netlab.caltech.edu/FAST/meetings/2002july/relatedWork.ppt, www.baby.org/research/tcp/debugging/firstpackets.html] This will work perfectly for bulk data transfer applications. Note: with both ends negotiating the large window scale factor and the large window size, the TCP flow will very quickly build CWND values up to e.g. 1,024 * MSS of 1,500 bytes, i.e. 1.5 Mbytes, within 10 RTTs, e.g. 2.5 seconds. On any fast retransmit request, whether generated by the software (e.g. to pre-empt an RTO timeout) or arriving remotely, halving CWND and setting SSthresh to CWND/2 will have no effect at all in reducing the "effective window", since the "effective window" at any time after ACK self-clocking is established is always either: 1. limited by the receiver's advertised receive window size at all times; the receiver has e.g. usually 16 Kbytes, and thus in all subsequent packets the receiver will advertise a receive window size of "1" (shifted 14 places by the scale factor = 16 Kbytes); the local sender's transmission rates at any time will always be the rates corresponding to this advertised "16K" receiver window size, effectively "rate regulated" by the inherent ACK self-clocking characteristics (as has become well appreciated in recent times); it is noted that the sender's CWND and window size can be arbitrarily large and play no further part in the congestion controls (once the attained CWND size is much larger than the receiver's maximum window size, it is subsequently the ACK self-clocking feature that pegs the maximum possible sending rates to the available bandwidth of the bottleneck link; but of course the receiver can continue to dynamically adjust its advertised window size to exert further control over the sender's transmission rates, or the intercept software residing at the sender's end can optionally dynamically modify the receiver window size field of the incoming packets to exert similar control over the sending MSTCP's transmission rates / "effective window"); or 2. the sender's maximum window size has been intentionally negotiated at arbitrarily large large-scale window size values (or merely, without large scaling, at 64K, 256K ... values), with the receiver's maximum window size only slightly "perturbed" during the negotiation to e.g. 4 times greater than what is really required/needed (such as e.g. 64K, 256K ... etc., instead of the usual required size of the default maximum 16K), so that the sender's CWND and SSthresh (which is usually set equal to the negotiated receiver's maximum window size) maintain very large values at almost all times despite the frequent halvings of fast retransmit (values far larger than the relatively small advertised receiver window size restricted by the receiver's actual system resources), thereby assuring a very efficient close-to-square-wave-form 100% utilization of the bottleneck link; it is the maximum possible self-clocking rate of the return ACKs, which arrive back at most at the bottleneck link's rate, that ensures this, since both the sender's window size and CWND are now almost invariably, at all times, many orders of magnitude greater than the particular sender window size value needed to ensure the sender TCP can transmit at rates fast enough to utilize 100% of the traversed bottleneck link's bandwidth capacity (this relates to the well-known bandwidth-delay product, i.e. the well-known window size = bandwidth * RTT equation); additionally, after CWND has quickly attained a size greater than the receiver's negotiated window size value (from above, e.g. 64K, 256K ... etc.), the sender TCP here will never subsequently increment the actual "effective window" beyond the receiver's negotiated maximum window size (e.g. the above 64K, 256K ... etc.) by means of window size growth during successive RTTs, and will thus subsequently only clock out / send additional packets on reception of the return ACK stream (the maximum rates of the return ACKs here always being bounded by the bottleneck link's bandwidth capacity).
It is pointed out that in both cases 1 and 2 above, the intercept software (or the TCP source code) can always modify the receiver window size field values in the incoming packets from the remote receiver to whatever smaller maximum values are required (either derived dynamically, e.g. from the latest minimum inter-ACK interval values or uncongested RTT/OTT estimates ... etc., or user-specified from prior knowledge of the traversed bottleneck link's bandwidth capacity), thereby ensuring the sender TCP's effective window size never exceeds the level needed to match the traversed bottleneck link's bandwidth capacity; there is now no need to rely on the receiver's system resource limitations to limit the receiver's dynamically advertised window size field value, and the sender's and receiver's maximum window sizes can both be jointly negotiated at the same arbitrarily large large-scale window size values. It is noted that it may be desired/necessary to further ensure that the sender's CWND will definitely build up to a sufficiently large or very large value upon establishment of the FTP TCP data transfer channel, since an immediate packet drop at this very early stage could cause the sender's SSthresh to be set to half the present very small initial CWND value; this can be achieved, for example, by the intercept software buffering a number of, e.g., the 10 very first initially sent data packets and performing the actual retransmissions to the remote receiver of any of these e.g. 10 packets that were not received (that is, it monitors the incoming return ACK numbers during this time to detect missing packets not received at the remote receiver TCP, and discards/modifies or does not forward these incoming packets to the local MSTCP, to prevent the local MSTCP from resetting its SSthresh value to half the very small initial CWND value present at this time). It is pointed out that where the sender TCP source code is available for direct modification, this will be very simple; for example, it is only necessary here to modify the source code so that the SSthresh value is now "permanently" set to a very large arbitrary value, and/or the sending TCP's maximum sender window size is now "permanently" set to a very large arbitrary value, etc. (there may be many ways of achieving the purpose). Also, all the methods/techniques can be modified accordingly to work as receiver-based controls (instead of sender-based controls). It is noted that one should further be able to use the preceding "square wave form" technique immediately and manually, without any software required, in a very basic way: 1. Manually adjust the two PCs' registries accordingly for large window scale, large window, SACK, no delayed ACK. 2. Perform a large FTP transfer between these 2 PCs. 3. The FTP throughput / transmission rates here should show a near-constant 100% bottleneck link utilization level "square wave form".
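The intercept-software bookkeeping of Configuration steps 2-4 earlier (record the last sent sequence number, the last received ACK number and packet, and inject 3 DUP ACKs after a 300 ms gap while sent data remains unacknowledged) can be sketched as follows. This is an illustrative sketch; all class, method, and parameter names are invented here, not taken from the specification:

```python
class FlowState:
    """Illustrative per-flow bookkeeping for the intercept software
    (Configuration steps 2-4); all names here are invented for the sketch."""

    def __init__(self):
        self.last_sent_seq = 0       # step 2: largest sequence number sent
        self.last_rcvd_ack = 0       # step 3: largest ACK number received
        self.last_rcvd_time = None   # step 3: arrival time of last packet
        self.last_rcvd_pkt = None    # step 3: copy of the last received packet

    def on_outgoing(self, seq):
        self.last_sent_seq = max(self.last_sent_seq, seq)

    def on_incoming(self, ack, pkt, now):
        self.last_rcvd_ack = max(self.last_rcvd_ack, ack)
        self.last_rcvd_pkt = pkt
        self.last_rcvd_time = now

    def dup_acks_to_inject(self, now, gap_s=0.300):
        """Step 4: if no return packet for > e.g. 300 ms while sent data is
        still unacknowledged, return 3 copies of the last received packet to
        feed to the local MSTCP, triggering its fast retransmit and thereby
        pre-empting an RTO timeout."""
        stalled = (self.last_rcvd_time is not None
                   and now - self.last_rcvd_time > gap_s)
        unacked = self.last_sent_seq + 1 > self.last_rcvd_ack
        return [self.last_rcvd_pkt] * 3 if stalled and unacked else []

flow = FlowState()
flow.on_outgoing(5000)
flow.on_incoming(3000, b"last-rcv-pkt", now=0.0)
injected = flow.dup_acks_to_inject(now=0.4)   # 400 ms of silence -> 3 DUP ACKs
```
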
In addition, it may be desirable to add a regular minimum inter-packet delay when sending data packets, equal to the last recorded minimum observed inter-return-ACK interval (in terms of e.g. bytes per second, which should correspond to the bottleneck link capacity; this value can additionally be derived/updated e.g. only from the immediately preceding specified time interval, such as derived/updated every e.g. 300 ms), buffering the packets if needed; this avoids "bursts" into router buffers, which can contribute to unnecessary packet drops due to momentary congestion rather than real congestion. It remains possible for this intercept software to cause congestion drops through the successive per-RTT increments of CWND (so long as the exponentially increased CWND remains =< the receiver-advertised window size, which e.g. permits doubling of the transmission rates, despite ACK self-clocking, while already utilizing 100% of the bottleneck link bandwidth; some users may also have set the actual physical receive buffer system resource size to be very large).
The existing "pause" technique must here be incorporated, based on the last recorded minimum inter-ACK interval (which corresponds to the capacity of the bottleneck link), for each return ACK arriving outside the "wait interval"; that is, the software simply does not forward the next pending intercepted packet to the remote receiver TCP, if the specified interval expires (e.g. 1.8 * the last recorded minimum return inter-ACK interval) without the next new incoming return ACK arriving after the previous one, for a period equal to e.g. the same last recorded minimum return inter-ACK interval, i.e. one minimum return inter-ACK interval; here the sender TCP can transmit at most 2 packets (each rate-regulated to the minimum return inter-ACK interval of e.g. 50 ms between sends) before the "pause" is activated by the first return ACK arriving outside 1.8 * the last recorded minimum return inter-ACK interval, e.g. 90 ms; the software does not itself cause congestion drops + allows possible incremental deployment over the external Internet + is TCP friendly + fully preserves the attained uncongested-level transmission rates even when other TCPs cause packet drops (no sawtooth is seen). It may be necessary/desired to additionally implement buffering to store the intercepted packets waiting to be forwarded to the remote receiver TCP, and/or various information about these buffered packets, e.g. time received into the buffer etc., and then to generate a 3 DUP ACK fast retransmit request to the local MSTCP (to pre-empt the local MSTCP's RTO timeout) if, for example, the wait of a particular packet buffered in the queue approaches e.g. the normal RFC default minimum RTO period of 1 second, and additionally to replace this particular buffered packet in the queue with any newer "fast retransmitted" copy of the packet.
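The "pause" rule above (hold the next pending packet back when a return ACK arrives outside 1.8 * the last recorded minimum inter-ACK interval) can be sketched minimally as follows; this illustrative function only detects the gaps that would trigger a pause, using the 50 ms / 90 ms figures from the text:

```python
def pause_triggers(ack_arrival_times_s, min_gap_s=0.050, late_factor=1.8):
    """Detect return ACKs arriving outside late_factor * the minimum
    recorded inter-ACK interval (e.g. 1.8 * 50 ms = 90 ms); each such gap
    would trigger holding the next pending packet back for one further
    min_gap_s before forwarding it."""
    triggers = []
    for prev, cur in zip(ack_arrival_times_s, ack_arrival_times_s[1:]):
        gap = cur - prev
        if gap > late_factor * min_gap_s:
            triggers.append(gap)
    return triggers

# ACKs at 0, 50 ms and 100 ms are on time; the next at 250 ms is late.
pauses = pause_triggers([0.0, 0.050, 0.100, 0.250])
```
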
It is pointed out that an alternative TCP congestion control mechanism need not use any of the existing standard RFC sliding window / AIMD ... etc. mechanisms, and/or may work in parallel, as intercept software (and/or direct TCP source code modifications), with the normal RFC sliding window / AIMD mechanism etc.; such a mechanism would incorporate the arrival inter-ACK interval "transmission rate regulation" technique of the immediately preceding paragraphs (pausing/skipping the forwarding of packets to the remote receiver when e.g. the next return ACK arrives outside the specified time period from the previous ACK's arrival), and would either increase/decrease the MSTCP's packet generation rates (making packets available for forwarding at faster increased / slower decreased rates), adjusted according to e.g. the latest inter-return-ACK interval value between the latest successive packets and/or the current RTT or OTT value of the particular packet (which should reveal the onset of congestion buffering along the traversed route, or, very well, its total absence), or use its very own AIMD mechanism in parallel with that of the existing normal RFC TCP (and/or in conjunction with the buffering of packets waiting to be forwarded to the remote receiver, and/or the generation of 3 DUP ACK fast retransmit requests to the local MSTCP to pre-empt the RTO timeouts of packets not yet forwarded, and/or newer retransmission packets replacing the earlier buffered versions of the packets in the queue, and/or per-packet send time / receive time event list information and/or per-packet RTT/OTT monitoring ... etc., to perform the return inter-ACK interval "transmission rate regulation / pause" techniques).
At periodic specified time intervals, the above scheme can ensure that two or a small number of packets are available for forwarding to the remote receiver one immediately after the other, in the fastest sequence permitted by the immediate first-hop link bandwidth, so as to ensure the best estimate of the traversed route's bottleneck link bandwidth capacity is continuously updated from the subsequent latest minimum recorded return inter-ACK interval value (e.g. even waiting until two or a small number of packets are available before forwarding them together ... etc.); it is pointed out that the actual bottleneck link bandwidth capacity can additionally be derived at the finer granularity of bytes per second, instead of packets of a certain size per second, and the transmission pause and/or transmission rate regulation techniques can be adapted to use this finer common granularity of derived bytes per second together with the actual size of the pending packet to be forwarded. The scheme here can use its own devised algorithm to increase/decrease the regulated transmission rate, different from the existing RFC sliding window congestion avoidance mechanism. The transmission rates here should exhibit the same "square wave form" of near-constant close-to-100% bottleneck link utilization, and at all times the transmission rates will oscillate within a very small band around the close-to-100% bottleneck link utilization levels.
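The bottleneck-capacity estimation described above (back-to-back packets, minimum recorded return inter-ACK interval, refined to bytes per second) can be sketched minimally as follows; the names and example numbers are illustrative, not from the specification:

```python
def bottleneck_bytes_per_sec(return_ack_gaps_s, bytes_per_packet=1500):
    """Packet-pair style estimate: packets sent back-to-back are spread out
    to the bottleneck's per-packet service time, so the minimum observed
    return inter-ACK gap approximates bytes_per_packet / capacity."""
    return bytes_per_packet / min(return_ack_gaps_s)

# 1,500-byte packets whose tightest ACK spacing is 6 ms -> 250,000 bytes/s
# (2 Mbps); the looser gaps reflect cross traffic and are ignored by min().
est = bottleneck_bytes_per_sec([0.009, 0.006, 0.012])
```
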
It is noted that the local intercept software may here generate a window size update packet, or modify the receiver window size field values in the incoming TCP packets from the remote receiver, to e.g. "0" or very small values as required, in order to temporarily "stall" the local MSTCP (or reduce the local MSTCP's packet sending rates) from generating/sending new packets, such as when the number of packets in the intercept software's send-buffer packet queue exceeds a certain count or total size. This prevents an excessively large packet queue from accumulating, which could result in eventual RTOs in the local MSTCP.
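The queue-threshold stall rule above can be sketched as a single decision function; the threshold and window values here are invented for illustration (a real implementation would also have to fix up the TCP checksum after rewriting the window field):

```python
def advertised_window_override(queue_bytes, high_water=64 * 1024,
                               original_window=16 * 1024):
    """Rewrite rule for the receiver window field of incoming packets:
    advertise a zero window to the local MSTCP while the intercept
    software's send queue is over the threshold, stalling new packet
    generation; otherwise pass the remote receiver's value through."""
    return 0 if queue_bytes > high_water else original_window

stall = advertised_window_override(queue_bytes=128 * 1024)   # queue too long
passthru = advertised_window_override(queue_bytes=1024)      # queue healthy
```
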
Simplified Quantifications of Large FTP Transfer Improvements. To achieve minimum throughput improvements of 50% (e.g. from 1 Mbps to 1.5 Mbps; there will be further large improvements from other factors), with constant periodic packet loss (and fast retransmit) setting the ceiling on the sender's instantaneous maximum transmission line rate: (1) assuming a constant periodic loss rate of one per 1,000 packets and an RTT of 200 ms, the maximum window size needs to be 200 packets (300 Kbytes) to transmit them all and regulate the rate to 1,000 packets per second; the SSthresh value commonly hovers around 1/2 * maximum window size (100 packets, or 150 Kbytes) owing to the halvings of the successive fast retransmits, so CWND needs to increment by 100 packets (150 Kbytes) to regain the maximum bandwidth transmission rate; 100 RTTs are required (20 seconds); the minimum link bandwidth needs to be 600 kb/s to transmit 1,000 packets in 20 seconds (1,000 * 1,500 * 8 / 20). (2) Assuming a constant periodic loss rate of 1 per 100 packets and an RTT of 200 ms, the maximum window size needs to be 20 packets (30 Kbytes) to transmit them all and regulate the rate to 100 packets per second; the SSthresh value commonly hovers around 1/2 * maximum window size (10 packets, or 15 Kbytes) owing to the halvings of the successive fast retransmits, so CWND needs to increment by 10 packets (15 Kbytes) to regain the maximum bandwidth transmission rate; 10 RTTs are required (2 seconds); the minimum link bandwidth needs to be 600 kb/s to transmit 100 packets in 2 seconds (100 * 1,500 * 8 / 2). These "square wave form" TCPs need to be TCP friendly: whether the TCP flows traversing the bottleneck link consist entirely of these "square wave form" flows, or of a mixture of these "square wave form" flows and existing normal RFC TCP flows, the total rates / total number of return ACKs of all these flows / the whole mixture of flows will still be limited to no more than the corresponding bandwidth capacity of the bottleneck link of the traversed route; these "square wave form" TCP flows can thus be incrementally deployed over the external Internet, maintaining/retaining their attained transmission rates despite packet drops caused by other existing normal RFC TCP flows and/or the "sawtooth" effect of the mixture of flows and/or packet drops from public Internet congestion and/or BER (bit error rate) packet corruption, for as long as all the "square wave form" TCP flows and/or the other existing normal RFC TCP flows can be kept TCP friendly (it is pointed out that new TCP flows can in any case almost always begin their transmission rate growth using the buffering capacity of the network nodes). With the modified TCP, if the link traffic begins to be buffered, so that the corresponding echoed RTTs exceed a specified multiplier value * the uncongested RTT (for the particular packet size, usually determined by the system MTU size or the MSS size) of the particular source-destination pair, the software can pause the TCP flow's transmissions for the specified "pause" interval; this ensures the buffers of all the traversed nodes are immediately purged of any of these buffered packets of the TCP (or equivalent) flows during this "pause" interval, and in this way there will never be any packet drops due to congestion. However, there is always the possibility of physical transmission errors causing the RTO timeout to reset CWND to 1 MSS (this is very rare and does not greatly affect the improved throughput performance), but one can also incorporate the "receiver-based" inter-packet arrival technique and the 3 DUP ACK fast retransmit, together with the "large-scale window size" method of the preceding paragraphs, to pre-empt the sender's RTO timeout events / prevent the halving of the sender's transmission rate or its reset to "0".
Therefore, the TCP flow here will not RTO timeout into dropping its transmission rates (resetting CWND to 1 MSS), and so will not produce the "sawtooth" transmission rate / throughput graph that invariably wastes half of the available physical transmission bandwidth; the equivalent reductions in transmission rates required to avoid congestion packet drops are now effected only through "pause" intervals; the transmission rate / throughput graph should now show close-to-100% utilization of the physical bandwidth at almost all times. An alternative modified TCP method to prevent the above "sawtooth" phenomenon is to set the sender TCP's maximum send window size, i.e. the TCP window size system parameter value (and/or several other related parameter values), so that the TCP's maximum possible bandwidth-delay product (maximum window size / RTT) will never exceed the physical link bandwidths; in this way there can be no packet drops due to congestion, assuming this TCP flow is the only flow using the link at the time.
When choosing the appropriate maximum TCP window size value, the finite period of time it takes for a packet of the maximum allowed size (determined by the MTU value or an MSS value) to be clocked out completely over the smallest-bandwidth link along the traversed route needs to be added to the uncongested RTT (measured with a packet of very small, negligible size) of the particular source-destination pair; this gives the minimum RTT value for use in the bandwidth-delay-product equation (in real life the actual RTT values will be higher, taking into account the variations introduced by various components, e.g. CPU ACK generation processing etc.); additionally, if the return ACK may possibly be piggybacked on a regular data packet (for example, if the receiver is also symmetrically sending data), then the finite time for a maximum-size return data packet to be clocked out completely over the lowest-bandwidth link along the return route again needs to be added to the above, to give the minimum RTT value for use in the bandwidth-delay-product equation. The Selective Acknowledgement option will improve performance here, and even the delayed acknowledgement option, if enabled, will have no real effect, assuming the data packet stream is continuous and assuming the bounded finite time taken for a maximum-allowed-size data packet to be clocked out over the lowest-bandwidth link along the forward/return route is negligible (i.e. the lowest-bandwidth link is still of large bandwidth capacity; for example, it takes 50 ms for a 1,500-byte data packet to be clocked out onto a 240 kbps onward link, whereas it takes approximately 215 ms for a 1,500-byte data packet to be clocked out onto a 56 kbps onward link; with a very-small-packet source-destination RTT of e.g. 50 ms, this output time dominates the calculation of the minimum RTT value for use in the maximum TCP window size calculations).
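The minimum-RTT arithmetic above can be sketched as follows, reproducing the 1,500-byte / 240 kbps = 50 ms figure from the text; the function and parameter names are illustrative only:

```python
def serialization_delay_s(packet_bytes, link_bps):
    """Time for a packet to be clocked out completely onto a link."""
    return packet_bytes * 8 / link_bps

def min_rtt_for_window_calc(base_rtt_s, mtu_bytes, forward_link_bps,
                            return_pkt_bytes=0, return_link_bps=None):
    """Minimum RTT for the window = bandwidth * RTT sizing equation:
    the uncongested small-packet RTT plus the forward serialization delay,
    plus the return packet's serialization delay when ACKs piggyback on
    full-size data packets."""
    rtt = base_rtt_s + serialization_delay_s(mtu_bytes, forward_link_bps)
    if return_link_bps:
        rtt += serialization_delay_s(return_pkt_bytes, return_link_bps)
    return rtt

# The text's example: 1,500 bytes onto a 240 kbps link takes 50 ms, so with
# a 50 ms small-packet RTT the sizing RTT is dominated by the output time.
d = serialization_delay_s(1500, 240_000)
rtt = min_rtt_for_window_calc(0.050, 1500, 240_000)
```
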
An Immediately Incrementally Deployable TCP Modification Over the External Internet. At present, normal RFC TCP data transfer throughput performs poorly over routes/networks with high congestion drop rates and/or high BER rates (physical transmission bit error rates), especially over "long fat network" (LFN) routes with high RTT values and very large bandwidths. The sawtooth transmission waveform of AIMD (additive increase multiplicative decrease) inherent in the normal RFC TCPs, constantly fluctuating between 0% and well above 100% of the bottleneck/physical link's bandwidth capacity, can itself also contribute to packet drops. At present, TCPs halve their Congestion Window CWND size, thereby halving their transmission rates, on packet loss events as notified by 3 DUP ACK Fast Retransmit requests or by RTO Retransmission Timeout. Currently, TCP cannot discern non-congestive causes of packet drops, such as BER effects, and treats all packet loss events as caused by route/network congestion. It is a well-documented, common phenomenon that a route with only 1% total loss rate will halve the TCP throughput. Typical loss rates are 5%-40% in Asia and 2%-10% in America, as can be seen at http://internettrafficreport.com.
Here is an improvement modification to the normal RFC TCP SACK which can completely eliminate all the disadvantages described above on routes/networks with high loss rates, which is immediately incrementally deployable over the external Internet and can also be friendly to existing TCP flows, based on the following general principles (or various combinations of the steps or sub-component steps/processes thereof): (1) On a packet drop event as notified by 3 DUP ACKs, the modified TCP here need only reduce its Congestion Window size CWND by the number of bytes corresponding to the total segments/packets reported lost/dropped (the ACK number field in the incoming DUP ACK packets that trigger Fast Retransmit, and/or in the subsequent multiple DUP ACKs that increment/inflate the reduced CWND size slightly, indicates the sequence number of the first lost packet, while the Selective Acknowledgment fields indicate the blocks of adjacent sequence numbers successfully received out of order; that is, the "missing gap" sequences between the ACK number and the lowest-sequence-numbered SACKed block, and the missing gap sequence numbers between the SACKed blocks themselves, give the sequence numbers of the dropped packets within the gaps, and thus the total number of bytes indicated to be dropped). Since the largest SACK number within the DUP ACK indicates the largest sequence number received successfully, this can optionally be used to increase the modified TCP's CWND size accordingly, as if it were the largest received ACK number:
the modified TCP now conforms to the largest received SACK number within the third DUP ACK that triggers Fast Retransmit and/or in the subsequent multiple DUP ACKs, but only for the purpose/effect of increasing the CWND size / "effective window" size, and certainly not for the purpose/effect of advancing the left edge of the modified TCP's sliding window in any way (that is, the end-to-end semantics of the TCP ACK number field are otherwise retained exactly as specified in existing standard TCP), thereby allowing the modified TCP to send/inject more segments/packets into the network as they are SACKed rather than only as they are ACKed (acknowledged), in the same way as an incoming ACK field increments the effective window size in existing TCP, BUT never with the effect of advancing the left edge of the sliding window (which could cause the "missing gap" sequence numbers to no longer remain within the current data window to be Fast Retransmitted / RTO-timeout retransmitted again; it is noted here that a subsequent increase in the received ACK number, if smaller than the earlier largest SACK number already used to increase the CWND / effective window size, should not have the effect of increasing the modified TCP's effective window size / CWND again, but will have the effect of advancing the left edge of the modified TCP's sliding window); and/or (2) On a packet drop event as notified by the third DUP ACK, the modified TCP flow here need only ensure that its total number of outstanding in-flight bytes transmitted into the network (that is, the total bytes of all sent packets, including encapsulations/headers, whether data-carrying packets or control packets carrying no data, transmitted into the network between the time the data packet with the same sequence number as the ACK number of the present third DUP ACK was sent and the arrival
time of this present third DUP ACK) will now be adjusted/reduced to the number computed as follows: the total number of in-flight bytes transmitted into the network during the RTT of this third DUP ACK that triggers Fast Retransmit (that is, the total bytes transmitted into the network between the send time of the packet with the same sequence number as the ACK number of the returning third DUP ACK and the reception time of this particular third DUP ACK), MULTIPLIED by minRTT / the RTT of this particular third DUP ACK (equivalently, divided by the queuing-inflation factor RTT/minRTT). The minRTT is the latest estimate of the current completely uncongested RTT between the TCP flow's endpoints, so that if all the flows traversing the congestion-drop node are modified TCP flows acting in unison, this particular node should subsequently become uncongested or nearly uncongested; minRTT here is simply the smallest RTT value recorded so far by the modified TCP flow, which serves as the best available estimate of the flow's current uncongested physical RTT (obviously, if the flow's actual uncongested physical RTT is known, or is provided in advance, then it must or can be used instead).
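The in-flight reduction target can be sketched in a few lines (a minimal illustration; the reduction target is taken to be the uncongested pipe's worth, i.e. in-flight bytes scaled by minRTT/RTT, consistent with the purge of buffered residence packets described in the text):

```python
def target_in_flight_bytes(in_flight_bytes, rtt_s, min_rtt_s):
    """On the third DUP ACK, shrink the outstanding in-flight bytes by the
    queuing-inflation factor: only minRTT/RTT of them fit in the
    uncongested pipe; the rest are sitting in node buffers."""
    return int(round(in_flight_bytes * (min_rtt_s / rtt_s)))
```

For example, if 100,000 bytes were in flight during an RTT of 200 ms while minRTT is 100 ms, the target becomes 50,000 bytes, which drains the queued half.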
The total number of in-flight bytes transmitted into the network during the RTT of this particular third DUP ACK that triggers Fast Retransmit, that is, the total in-flight bytes transmitted between the send time of the packet with the same sequence number as the returning third DUP ACK's ACK number and the reception time of this particular third DUP ACK, can be derived by maintaining a time-ordered event entry list (that is, ordered purely by transmission order into the network) consisting of triple fields: the sent packet's sequence number, its send time, and the total number of bytes of the packet including encapsulation/header. In this manner, the RTT value of a third DUP ACK packet with a particular acknowledgment number can be derived as: the arrival time of this present third DUP ACK, minus the send time of the data packet with the same sequence number as the present returning third DUP ACK. And the total transmitted in-flight bytes can be derived as the sum of all the total-byte fields of all entries between the event list entry with the same sequence number as the returning third DUP ACK and the latest entry in the list. The event list can be kept small by removing all entries with sequence numbers < the ACK number of the third DUP ACK. A simplified alternative, instead of calculating the total transmitted in-flight bytes, would be to approximate them as the largest transmitted sequence number minus the largest received ACK number, at the time of transmission/sending of the data packet with the same sequence number as the present third DUP ACK's ACK number; this gives the total number of in-flight data segment bytes only, that is, pure in-flight data segments excluding encapsulations/headers and control packets carrying no data.
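The event list just described can be sketched as follows (a minimal illustration under the assumption of one entry per sent packet; class and method names are my own, not from the specification):

```python
from collections import OrderedDict

class SendEventList:
    """Time-ordered record of sent packets: sequence number -> (send time,
    total bytes incl. headers). Used to derive the RTT of a (DUP) ACK and
    the total in-flight bytes transmitted during that RTT."""

    def __init__(self):
        self.events = OrderedDict()  # preserves transmission order

    def on_send(self, seq, send_time_s, total_bytes):
        # record every transmitted packet, encapsulation/header included
        self.events[seq] = (send_time_s, total_bytes)

    def rtt_of_ack(self, acked_seq, arrival_time_s):
        # RTT = arrival time of the (DUP) ACK minus the send time of the
        # packet whose sequence number equals its ACK number
        send_time_s, _ = self.events[acked_seq]
        return arrival_time_s - send_time_s

    def bytes_in_flight_since(self, acked_seq):
        # sum the byte fields from the ACKed packet's entry (inclusive)
        # through the newest entry in the list
        total, seen = 0, False
        for seq, (_, nbytes) in self.events.items():
            seen = seen or seq == acked_seq
            if seen:
                total += nbytes
        return total

    def trim_below(self, ack_num):
        # entries below the cumulative ACK can never be needed again
        for seq in [s for s in self.events if s < ack_num]:
            del self.events[seq]
```

A per-packet list like this stays small in practice because `trim_below` is called on every cumulative ACK advance.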
Among the several possible ways to implement the modifications in existing normal RFC TCP source code, to adjust/reduce the total number of outstanding in-flight bytes transmitted into the network on packet drop events notified by third DUP ACKs, are: immediately reduce the present "effective window" size via the Congestion Window, i.e. set the CWND size to the total number of in-flight bytes transmitted into the network during the RTT of this particular third DUP ACK that triggers Fast Retransmit (that is, the total bytes transmitted into the network between the send time of the packet with the same sequence number as the ACK number of the returning third DUP ACK that triggers Fast Retransmit and the reception time of this third DUP ACK) MULTIPLIED by [minRTT / the RTT of this particular third DUP ACK], rounded to the nearest byte. This will result in an appropriate number of subsequent returning ACKs no longer having the effect of "clocking out" new packets into the network, since the Congestion Window size CWND needs first to be incremented by an appropriate number of subsequent returning ACKs to regain its previous size before any newly arriving returning ACK can again "clock out" new packets into the network; the number of returning ACKs required here before new packets can again be "clocked out" would normally correspond to the number of returning ACKs required to acknowledge the same number of bytes as the CWND was reduced by.
Alternatively, instead of the above reduction procedure, the CWND here could simply be increased, on each arriving DUP ACK, by only minRTT/RTT * the number of sent segment bytes acknowledged by this arriving DUP ACK, rounded to the nearest byte with fractions carried forward (instead of the usual normal RFC TCP increase by the full number of sent segment bytes acknowledged by newly arriving ACKs); this continues for all the subsequent same-ACK-number multiple DUP ACKs, and for new ACKs, until the required reduction is achieved, whereupon the reduction process is complete. It is noted that some existing TCP implementations may increase CWND by 1 SMSS for each newly arriving ACK instead of by the number of sent segment bytes acknowledged by the arriving ACK, in which case the reduction process can instead be performed by incrementing CWND by 1 SMSS only once per RTT/minRTT arriving ACKs received (whether DUP ACKs or new ACKs), rounded to the nearest integer; for example, if RTT/minRTT = 2.5 then CWND can be increased by 2 SMSS for every 5 newly arriving ACKs. This has the effect of smoothing the in-flight byte reduction process, so that appropriately reduced transmission and reception of new packets continues throughout the in-flight byte reduction process.
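The smoothed 1-SMSS-per-(RTT/minRTT)-ACKs variant can be sketched as follows (a minimal illustration; the class name is my own, and RTT values are passed in milliseconds only for numerical convenience):

```python
class SmssSmoother:
    """During the in-flight purge, credit 1 SMSS of CWND growth only once
    per (RTT/minRTT) arriving ACKs, with fractions carried forward."""

    def __init__(self, rtt_ms, min_rtt_ms, smss=1460):
        self.ratio = rtt_ms / min_rtt_ms  # e.g. 2.5
        self.credit = 0.0                 # ACKs accumulated so far
        self.smss = smss

    def on_ack(self):
        """Returns the CWND increment (bytes) earned by this arriving ACK."""
        self.credit += 1.0
        grown = 0
        while self.credit >= self.ratio:
            self.credit -= self.ratio
            grown += self.smss
        return grown
```

With RTT/minRTT = 2.5 this yields exactly 2 SMSS of growth per 5 arriving ACKs, matching the example in the text.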
A congestion drop notification event caused by an RTO timeout retransmission can be: treated in the same way as the third DUP ACK or the subsequent same-ACK-number multiple DUP ACKs as described above, that is, it triggers the in-flight byte reduction process to purge the buffered resident packets but does not reset/reduce the CWND size; or treated in exactly the same manner as in the standard RFC specification, i.e. CWND is reset to 1 SMSS and slow-start exponential increase is re-entered — but it is noted here that since the Ssthresh value is never halved in the modified TCPs here, slow start will quickly grow back to the initial Ssthresh value (which will not have been reduced by successive Fast Retransmit events). In addition, a subsequent congestion drop notification event — for example further multiple DUP ACKs with the same unchanged ACK number, a third DUP ACK with a new higher ACK number, or even an RTO timeout retransmission (e.g. detected by the TCP retransmitting without third DUP ACKs triggering Fast Retransmit) — should allow any existing "in-flight byte reduction" process/procedure to complete if the new computation does not require greater reduction (that is, does not require the total in-flight bytes to become smaller); otherwise this new process/procedure may optionally be undertaken (alternatively this process/procedure may be allowed to start only once per RTT, based on a particular "marked" sequence number whose return then triggers a check of whether any congestion drop notification event occurred during that RTT).
Since the modified TCP here can derive the RTT of the particular returning ACK (or of the returning ACK immediately preceding the RTO timeout) causing the congestion drop notification event, the modified software may additionally discern whether that event was actually a "false" congestion drop notification, and react differently if so; that is, if the RTT associated with a particular drop notification event is the same as the latest estimated uncongested RTT between the endpoints (or the known / pre-provided value), or differs from it by no more than a specified variance amount within the limits of the equivalent of some small node buffering capacity in milliseconds, then this particular congestion drop notification can instead be correctly treated as arising from physical transmission errors/corruption/BER (bit error rates), and the modified software can simply retransmit the reported dropped segment/packet without entering the in-flight byte reduction process at all. It is noted here that, unlike the existing standard RFC TCP, the modified TCP here does not necessarily automatically reduce/halve/reset the CWND size on congestion drop notification events caused by a new third DUP ACK, the subsequent same-ACK-number multiple DUP ACKs following the new third DUP ACK, and/or RTO timeout retransmissions; the modified TCP here need only necessarily reduce the CWND size appropriately on congestion drop notification events so as to reduce the number of outstanding in-flight bytes to the appropriately derived values.
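This BER-versus-congestion discrimination is a one-line comparison (a minimal sketch; the 5 ms variance default is an arbitrary assumption standing in for the "small node buffering capacity in milliseconds" mentioned in the text):

```python
def classify_drop(loss_event_rtt_s, uncongested_rtt_s, variance_s=0.005):
    """If the RTT accompanying the loss notification is (near) the
    uncongested path RTT, queues are empty, so the drop is attributed to
    physical transmission errors (BER) rather than congestion."""
    if loss_event_rtt_s <= uncongested_rtt_s + variance_s:
        return "ber"         # retransmit only; skip in-flight reduction
    return "congestion"      # retransmit and purge buffered in-flight
```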
It is noted that any bottleneck link will continuously forward the sent packets towards the receiver TCP at the physical bottleneck line speed, regardless of the buffer occupancy levels at the bottleneck node and/or occurrences of congestion drops, at any time; thus the sum of all bytes acknowledged during the RTT periods associated with the returning ACKs received by the sender TCP will almost invariably equal the physical bandwidth of the bottleneck link at any time when the bottleneck bandwidth is fully utilized. It is also noted that the TCP congestion avoidance algorithm should strive to keep bandwidth utilization as close to 100% of the bottleneck link bandwidth as possible, rather than the gross under-utilization of normal RFC TCP caused by the CWND size reduction on congestion drop notification events.
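A worked instance of the invariant above (a trivial sketch; numbers are illustrative, not from the specification): at full utilization the receiver drains, and therefore acknowledges, exactly one bottleneck-rate-times-RTT worth of bytes per RTT.

```python
def acked_bytes_per_rtt(bottleneck_bps, rtt_s):
    """Bytes acknowledged per RTT at full utilisation: the bottleneck
    forwards at line rate no matter how full its buffers are."""
    return bottleneck_bps / 8.0 * rtt_s
```

For a 512 kbs bottleneck and a 200 ms RTT, about 12,800 bytes are acknowledged per RTT, whatever the queue occupancy.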
Several different algorithms for reduction rates / reduction amounts / in-flight byte reduction levels can be contemplated, and various other parameters can also be used, e.g. largest received ACK number and/or largest sent sequence number and/or CWND size and/or effective window size and/or RTT and/or minRTT etc. (such as, for example, permitting certain tolerated levels of buffer occupancy instead of completely purging all of the modified TCP flows' extra buffered in-flight bytes), at the time of the congestion drop notification events and/or across the history of such events; and/or (3) The physical bottleneck link of a TCP connection over the Internet is usually either the receiver TCP's last-mile transmission medium or the sender TCP's first-mile transmission medium; these are usually 56Kbs/128Kbs PSTN telephone dial-up or typical 256Kbs/512Kbs/1Mbs/2Mbs ADSL links. In these situations, regardless of how fast the sender TCP's transmission rates become (normal RFC TCPs inevitably probe the route's bandwidth by injecting more and more bytes in each subsequent RTT, whether doubling the CWND exponentially during slow start or incrementing the CWND linearly during congestion avoidance), the bottleneck link can only forward all the flows' traffic at the maximum line speed limited by its bandwidth; increasing sending rates beyond the current bottleneck line speed (the current bottleneck link may change from time to time depending on network traffic) will not result in any greater TCP flow throughput beyond the physical line speed of the bottleneck link. Thus the TCPs here can advantageously be modified so as not to send at a rate higher than the maximum possible physical line speed of the bottleneck link.
Doing so would only cause the packets/bytes sent during each RTT beyond the physical bottleneck line speed's worth to be inevitably buffered, or dropped, somewhere along the path between the TCP flow's two endpoints. Here is an example procedure, among several possible, to determine the physical bandwidth of the route's bottleneck link: - Successive RTT values can easily be derived, since existing normal RFC TCPs already perform calculations/derivations of successive RTT values based on a TCP packet "marked" with a particular sequence number for each successive RTT period. The throughput rate for each successive RTT can be derived by first recording or deriving the total number of in-flight bytes transmitted into the network during the RTT of this particular "marked"-sequence-number packet, i.e. the total in-flight bytes transmitted between the send time of the packet with the particular "marked" sequence number and the time of its returning ACK (or SACK), which can be derived by maintaining a time-ordered event entry list (that is, ordered purely by transmission order into the network) consisting of triple fields: the sent packet's sequence number, its send time, and the total number of bytes of the packet including encapsulation/header. In this way, the RTT value of the particular "marked" packet with a particular sequence number can be derived as the arrival time of the present returning ACK (or SACK) minus the send time of the data packet with the particular "marked" sequence number, and the total transmitted in-flight bytes can be derived as the sum of all the total-byte fields of all entries between the event list entry with that same "marked" sequence number and the latest entry in the list.
This event list can be kept small by removing all entries with sequence numbers < the largest received ACK number. A simplified alternative, instead of calculating the total transmitted in-flight bytes, would be to approximate them as: the largest transmitted sequence number + the number of data bytes of that largest-sequence-number packet - the largest ACK number received, at the time of arrival of the returning ACK; this gives the total number of in-flight data segment bytes only, that is, pure in-flight data segments excluding encapsulations/headers and control packets carrying no data. Alternatively, as an approximation and/or simplification of the total in-flight bytes transmitted between the send time of the packet with the particular "marked" sequence number and the time of its returning ACK (or SACK), the calculations/derivations of the throughput rate for each successive RTT may be based on: the particular "marked" packet's sequence number + the data payload size of the particular "marked" packet in bytes - the largest ACK number received at the time the particular "marked"-sequence-number packet is sent. The throughput rate for the RTT here can be computed as the above-derived total number of in-flight bytes transmitted into the network during the RTT period / this RTT value (in seconds). - Record the largest throughput rate value achieved over all RTTs, continuously updated, hereafter known as maxT. Also record the RTT value associated with the period in which the largest throughput rate maxT was achieved, as RTT_maxT, together with the total number of transmitted in-flight bytes associated with that period in which the largest throughput rate maxT was achieved, hereafter known as In_Flight_Bytes_maxT.
- Whenever the throughput rate in any RTT period <= maxT (that is, the throughput rate in this RTT period does not exceed maxT), and if [total number of in-flight bytes during this RTT period / In_Flight_Bytes_maxT] > [RTT value in milliseconds during this period / RTT_maxT in milliseconds], THEN the physical bandwidth capacity / line speed of the bottleneck link has now been derived/obtained. The rationale here is that if the in-flight bytes in this RTT period are, for example, twice those associated with the maxT period, while the RTT value for this period remains equal to (or less than twice) RTT_maxT, then the reason the throughput rate for this RTT does not exceed maxT is that maxT is already equal to the physical bandwidth capacity / line speed of the bottleneck link; thus despite many more in-flight bytes during this RTT period, with this RTT value not having increased disproportionately, the throughput rate in this RTT remains limited to the bottleneck line speed and cannot grow beyond maxT. The test formula may additionally include a mathematical variance tolerance value, for example: if [total number of in-flight bytes during this RTT period / In_Flight_Bytes_maxT] > [RTT value in milliseconds during this period / RTT_maxT in milliseconds] * variance tolerance (e.g. 1.05 / 1.10 etc.).
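The maxT bookkeeping and the detection test can be sketched as follows (a minimal illustration; note that the achieved throughput is passed in as its own measurement — e.g. from the acknowledged-bytes accounting described earlier — separately from the transmitted in-flight count, since the test compares transmitted load against achieved throughput; the class name is my own):

```python
class BottleneckEstimator:
    """Detects when the recorded peak per-RTT throughput maxT has reached
    the bottleneck link's line rate, per the test described in the text."""

    def __init__(self, tolerance=1.05):
        self.maxT = 0.0            # best throughput seen, bytes/s
        self.rtt_maxT = None       # RTT of the maxT period, seconds
        self.inflight_maxT = None  # transmitted bytes of the maxT period
        self.tolerance = tolerance
        self.line_rate = None      # set once the test fires

    def on_rtt_sample(self, throughput_Bps, inflight_bytes, rtt_s):
        if throughput_Bps > self.maxT:
            # new peak: update maxT and its associated period records
            self.maxT = throughput_Bps
            self.rtt_maxT, self.inflight_maxT = rtt_s, inflight_bytes
        elif (inflight_bytes / self.inflight_maxT >
              (rtt_s / self.rtt_maxT) * self.tolerance):
            # many more bytes in flight, yet throughput did not exceed
            # maxT: maxT must already equal the bottleneck line rate
            self.line_rate = self.maxT
        return self.line_rate
```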
Once the physical bandwidth capacity / line speed of the bottleneck link is derived/obtained (= maxT), the modified TCP need no longer continuously probe the route's bandwidth as aggressively as existing normal RFC TCPs do (slow start's exponential CWND increase / congestion avoidance's linear CWND increase per RTT), which invariably tends to cause unnecessary congestion packet drops and/or burst packet drops; here the modified TCP can subsequently limit any further increase in the CWND size (and/or optionally the effective window size) in any subsequent RTT period to no more than, for example, 5% of [the CWND size (and/or optionally effective window size) associated with maxT at the time maxT (now equal to the bottleneck line speed) was achieved * (the latest, i.e. most recent, RTT value in milliseconds / RTT_maxT in milliseconds)]. If, very improbably, the throughput rate in any subsequent RTT becomes greater than maxT, then maxT is updated and the bottleneck line speed determination process is repeated again. In this way, the modified TCP will not unnecessarily aggressively increase the CWND size and/or effective window size so as to cause congestion drops and/or burst packet drops, beyond what is necessarily required to keep the bottleneck link busy at its line speed. Alternatively, the modified TCP may optionally regulate its packet generation / packet sending onto the network by rate, i.e. the modified TCP only generates/sends packets at the bottleneck line speed maxT; for example, by setting the minimum inter-byte interval = 1/(maxT/8) seconds once maxT has become equal to the true bottleneck line speed, or otherwise optionally setting the minimum inter-byte interval = (1/(maxT/8)) * 2 (since CWND growth at this stage would at most be the exponential doubling of the previous RTT period's CWND).
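The two post-detection limits above — the capped per-RTT CWND growth and the minimum inter-byte interval — reduce to simple formulas (a minimal sketch; function names are my own, and maxT is taken in bits per second for the interval formula as in the text):

```python
def cwnd_growth_cap_bytes(cwnd_maxT_bytes, last_rtt_ms, rtt_maxT_ms,
                          fraction=0.05):
    """Per-RTT CWND growth limit once the line rate is known: e.g. 5% of
    the CWND that achieved maxT, scaled by the current RTT inflation."""
    return fraction * cwnd_maxT_bytes * (last_rtt_ms / rtt_maxT_ms)

def min_inter_byte_interval_s(maxT_bps):
    """Pacing interval per byte: 1 / (maxT/8) seconds, i.e. the line can
    carry maxT/8 bytes per second."""
    return 1.0 / (maxT_bps / 8.0)
```

For a 512 kbs line rate the inter-byte interval is 1/64,000 s, i.e. about 15.6 microseconds per byte.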
In addition, optionally, the modified TCP can ensure that the packet generation / packet sending rate is the corresponding maxT rate (once maxT has reached the true bottleneck line speed, or merely exceeds the previous maxT) at all times, instead of the packet generation / packet sending rate permitted / "clocked out" by the rate of the returning ACKs (or SACKs), subject to the purging of the extra in-flight bytes and/or the appropriate rate reductions for dropped packets as described for congestion drop notification events; that is, the optionally modified TCPs will generate/transmit packets at least at maxT rate, not limited by the return rate of the latest ACKs (SACKs), except when required to make appropriate rate reductions to purge/reduce the in-flight bytes and/or to reduce the rate corresponding to the number of dropped packets (e.g. reducing the packet generation/transmission rate, in equivalent bits per second, to for example maxT * minRTT / this period's RTT value, or to maxT - (number of bytes dropped during this RTT * 8), on congestion drop notification events (which may be the third DUP ACK and/or the subsequent same-ACK-number multiple DUP ACKs, and/or RTO timeout retransmissions)).
Implementation without directly changing the source code of existing TCPs: without directly modifying TCP source code, the invention as described in the immediately preceding paragraphs can be implemented as an independent TCP packet intercept software/agent, wherein the software keeps a copy of a sliding window's worth of all data segments sent forward, performs all RTO timeout retransmissions and/or fast retransmissions, and/or regulates the forward sending rate of the packets intercepted from/to the local TCP (according to the maxT value), together with the sending-rate adjustment processes on congestion drop notification events. This implementation is summarized here only to give an overview of the required steps, which may be improved/modified; additionally, any detailed, refined coding/algorithm step is purely for illustrative purposes and may be amended/modified: - The intercept software intercepts each of the packets coming from the local TCP / destined to the MSTCP. The software keeps a copy of all packets carrying a data payload as entries in a well-ordered list, in ascending sequence number order. - On notification by a third DUP ACK, the software performs the fast retransmission of the copied data-payload packet entry in the list with the same sequence number as the third DUP ACK and the subsequent same-ACK-number multiple DUP ACKs. The software keeps a running count of the cumulative number of same-ACK-number DUP ACKs as DupNum, and in addition fast-retransmits all dropped packets as indicated by the "gaps" in the Selective Acknowledgment fields. The software modifies the ACK number of each of the DUP ACKs, decrementing the packet's ACK number value to ACK - DupNum * a number such as 1,500, so that the local TCP never receives DUP ACKs with the same ACK number at all; the TCP therefore never reduces/halves the CWND size due to fast retransmission (which the software now takes care of). The software does not decrement any CWND size value (this parameter is not directly accessible to the software). The software incorporates the principles/processes/procedures as summarized in the general principles described above, or combinations/sub-components thereof. Additionally: - The software can even perform the RTO timeout retransmissions entirely by itself, instead of MSTCP (by incorporating RTO calculations from the RTT values of historical returning ACKs); the software can then itself ACK each individual packet immediately upon receiving the TCP's packets for sending, so that the TCP now never performs RTO timeout retransmissions itself. The software can further "delay" its ACKing of the packets received from the TCP, as a technique to control the TCP's packet generation / packet sending rate. - Instead of modifying the TCP CWND size / effective window size (not accessible to the software) — although this is not an essential required feature — the software can either simulate a "mirror CWND / mirror effective window" mechanism within the software itself, or alternatively achieve equivalent effects in other ways, such as in-flight byte reduction via rate regulation, to control/adjust other parameter values such as the largest received acknowledgment number and the largest sent sequence number, ensuring that their difference is of the required size, etc. The software can also implement several normal TCP techniques, such as checksum verification on each of the intercepted packets, detections and comparisons around the sequence number, and detections and comparisons around the timestamp, as defined in the existing normal RFCs, etc.
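The DUP-ACK-number rewriting performed by the intercept agent can be sketched as follows (a minimal illustration of the ACK - DupNum * 1,500 scheme described above; the class name is my own, not from the specification):

```python
class DupAckMasker:
    """Rewrites duplicate ACK numbers before passing them to the local
    TCP, so the local stack never counts three DUP ACKs and never halves
    its CWND; the agent performs fast retransmission itself from its own
    packet copies."""

    def __init__(self, step=1500):
        self.last_ack = None  # ACK number of the previous pure ACK
        self.dup_num = 0      # running count of same-number duplicates
        self.step = step

    def on_ack(self, ack_num):
        """Returns the ACK number to hand up to the local TCP stack."""
        if ack_num == self.last_ack:
            self.dup_num += 1
            return ack_num - self.dup_num * self.step  # masked duplicate
        self.last_ack, self.dup_num = ack_num, 0
        return ack_num  # new cumulative ACK passes through unchanged
```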
Here are some sketches of the software design, purely for illustrative purposes, which can be further amended/improved/modified or be of a completely different design: 1. pure intercept-and-forward; 2. + checksum verification added; 3. + fast-retransmit only the packet copy acknowledged by the DUP ACK, only once per same DUP acknowledgment number; 4. + fast-retransmit all the packet copies, only once per same DUP ACK number; 5. + fast-retransmit only all the packet copies within the larger acknowledged "gaps", only once per same-ACK-number DUP ACKs; 6. + fast-retransmit only all the packet copies within the larger acknowledged "gaps" > the largest received sequence number @ each DUP acknowledgment (the software need not repeatedly and unnecessarily fast-retransmit multiple times on each subsequent same-ACK-number DUP ACK and/or new higher-ACK-number DUP ACK; it can record/update the largest fast-retransmitted packet sequence number and the largest received sequence number, and not unnecessarily re-forward already fast-retransmitted packets upon receiving subsequent same-ACK-number DUP ACKs); 7. + inter-packet forwarding intervals (determined by user input of the pre-known receiving line speed); 8. + as in (7), using the latest estimated bottleneck line speed instead of user input; 9. + TCP-friendly algorithms that operate by controlling/adjusting the inter-packet sending interval value. Simple summary of the initial basic rate regulation modules. Specification of the first-stage rate regulation module to be added (this specification only performs smoothing of packet transmissions onto the network, nothing else): 1. Have the user enter the bottleneck link bandwidth in kbs, e.g. SAN.exe B (e.g. 512 kbs); this is usually the sender's/user's first-mile upstream bandwidth but may occasionally be the receiver's last mile (if the user does not know the receiver's last-mile bandwidth, the user enters the first-mile bandwidth; DSL subscribers' upload bandwidth is usually much smaller than their download bandwidth) [the software can subsequently provide the final estimated value of B, requiring no user input]. 2. Incorporate a simple rate regulation module that enforces the minimum inter-bytes sending interval, e.g. if a packet of size S1 is sent (say 1,000 bytes total length, encapsulation + header + payload), ensure 1,000 / (B/8) seconds elapse before the sending of the next packet of size S2 (say 750 bytes now) begins, and so forth; the total packet size S can be determined from the TCP header. 3. All packets to be sent, whether new MSTCP packets, fast retransmissions, RTO retransmissions etc., are first appended to the yet-to-be-sent packet buffer; this buffer is best kept well ordered, with "gapless" arriving packets from either MSTCP or the fast-retransmitting software appended/inserted in ascending sequence number order (i.e. so that fast retransmissions / RTO retransmissions of MSTCP packets are sent first, ahead of other data packets with larger sequence numbers). Pure ACKs with the same sequence numbers as data packets will need to be inserted in the order of their arrivals relative to each other.
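The minimum inter-packet sending interval described above can be sketched as a small scheduler (a minimal illustration; the class name is my own, and B is the user-entered bottleneck bandwidth in bits per second):

```python
class Pacer:
    """First-stage rate regulation: enforce the minimum inter-packet
    interval so that transmissions never exceed the bottleneck rate B.
    A packet of N bytes reserves N / (B/8) seconds of the link."""

    def __init__(self, line_rate_bps):
        self.rate = line_rate_bps
        self.next_send = 0.0  # earliest time the next packet may go out

    def schedule(self, now_s, packet_bytes):
        """Returns the time at which this packet may be sent, and books
        the packet's serialization time against the link."""
        t = max(now_s, self.next_send)
        self.next_send = t + packet_bytes * 8.0 / self.rate
        return t
```

With B = 512 kbs, a 1,000-byte packet sent at t = 0 reserves 1,000/(512,000/8) = 15.625 ms, so the following 750-byte packet cannot leave before t = 0.015625 s, matching the example in step 2.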
(Note: MSTCP here continues to perform all RTO retransmissions.) [Later specification enhancement: - it is useful to add a total-packet-length-in-bytes field to the packet entries in this yet-to-be-sent list, for easy counting of the total bytes transmitted in each RTT, based on the sequence number of the individual round-trip ("marked") packet and the next packet sequence number sent after completion of the round trip, and so on. This list, which is needed to implement the regulation, is distinct from the packet copy list; here in the first stage it must be ordered but need not be "gapless". - Whenever the yet-to-be-sent buffer > e.g. 10K bytes, send a "0" window update to MSTCP and modify the window size field of all incoming packets to "0". - "Mark" a packet sequence number (starting with the first packet after the SYN/SYN-ACK handshake), record its send time, and set this_RTT_total_bytes_sent = this "marked" packet's length, then immediately start the count of next_RTT_total_bytes_sent (which does not include this "marked" packet). When a returning packet's ACK number > the "marked" sequence number, record this RTT value (present system time minus send time) and record this_RTT_total_bytes_sent. Then select as the next "marked" sequence number the very latest packet sequence number sent (if a data packet, not a pure ACK, was sent before the previous "marked" sequence number returned; otherwise wait for the next data packet to be sent), etc., and so on. (Only the latest updated pair of RTT value and this_RTT_total_bytes_sent needs to be kept in the record.) - The software should increment the DupNum count only if the DUP ACK packet is a pure ACK, i.e. carries no data, or is a data-carrying packet with the SACK flag set (if the remote client is also sending data, the software may otherwise start receiving many packets with the same sequence number even when there is no drop).
And increment a separate DupNum_data variable (number of data-payload packets with the same sequence number) and modify all incoming packets of the same sequence number to - (DupNum + DupNum_data); DupNum_data is updated similarly to DupNum, and the DupNum processing now needs to distinguish between pure DUP ACK packets and those with data payload. The various component features of all the methods and principles described herein may further be made to work fully incorporated into any of the illustrated methods; various network-topology methods and/or various graph/traffic analysis methods and principles can additionally enable economies in link bandwidth. It is also noted that the figures used wherever present in the description are intended to denote only a particular case of possible values; for example in RTT * 1.5, the figure 1.5 can be replaced by another adjustment value (but always greater than 1.0) appropriate for the particular purpose and network, for example a perception period of 0.1 second / 0.25 second etc. Additionally, all the specific examples and illustrated figures are intended to communicate the fundamental ideas, concepts and their interactions, not limited to the actual figures and examples used. The embodiments described above illustrate only the principles of the invention. Those skilled in the art can make various modifications and changes that will be incorporated within and fall within the principles of the invention itself. Presented October 11, 2005 Some examples of simple implementations of Next-Generation TCP over the External Internet Expanding Drop-in Materials - the last RTT of the packet that triggers the fast retransmission on the 3rd DUP ACK or that triggers the RTO timeout is easily obtained from the existing Linux TCB maintained variable for the last measured round-trip time RTT.
- the minimum recorded RTT metric min(RTT) is readily available from the maintained variables of existing Westwood/FastTCP/Vegas TCBs; it should be easy enough to write a few lines of code to continuously update min(RTT) = minimum of [min(RTT), last measured round-trip time RTT]. Also, with receiver-based TCP modifications / receiver-based TCP rate controls, OTTs and min(OTT) can be used instead of RTT and min(RTT); the OTT-based variants can benefit from the sender timestamp option, or the receiver-based TCP can use the inter-packet-arrival technique instead of relying on needing to determine the OTT and min(OTT). References: http://www.cs.umd.edu/~shankar/417-Notes/5-note-transportCongControl.htm : RTT variable maintained by Linux TCB; http://www.scit.wlv.ac.uk/rfc/rfc29xx/RFC2988.html : RTO computation; Google search term "tcp rtt variables"; http://www.psc.edu/networking/perf_tune.html : Linux TCP RTT parameter setting; Google search: "tcp minimum recorded rtt" or "Linux TCP minimum recorded RTT variable". Note: TCP Westwood measures minimum RTT. Google search terms: "CWND size tracking", "CWND size estimate", "receiver-based CWND size tracking estimate", "RTT tracking", "RTT estimation", "receiver-based RTT tracking estimate", "OTT tracking", "OTT estimate", "receiver-based OTT tracking estimate", "total in-flight packet tracking", "total in-flight packet estimation", "receiver-based total in-flight packet tracking estimate" etc. Initial simple implementation ideas To verify, test using modified Linux; in its simplest form you only need to modify 1 line and insert a loop-delay code (to "pause" the Linux TCP executions): 1. In the Linux fast-retransmit module code, on the 3 DUP ACKs do not halve the CWND, that is, the CWND now remains unchanged (instead of CWND = CWND / 2) 2.
at the same time, in the same code section location, simply insert a few lines of code to "pause" the executions of the Linux TCP program (simulating "pause") for 0.3 seconds. [Only later; it is much preferable to allow the first DUP-ACK-indicated packet to be retransmitted freely, and only then set the global 300 ms "pause" countdown variable at this same location; Linux TCP in its "final packet transmission" code section then checks this variable for "pause" = 0 before allowing any kind of transmission at all (assuming Linux implements a final-transmission queue to retain the packets held by this "pause").] To write a few lines of code to drop packets and introduce latency delays before sending the packet, only to allow the constant periodic drop interval entered by the user and the number of consecutive drops (for example 0.125 and 1, ie drop 1 packet once every 8 generated packets [equivalent to a 12.5% packet loss ratio], or 0.125 and 3, that is drop 3 consecutive packets once every 8 generated packets [equivalent to a packet loss rate of 37.5%]) and an RTT latency (eg, 200 ms). The code need only forward packets based on the drop interval and the number of consecutive drops, and schedule all surviving packets to be sent for example 200 ms later than their local system receive time; these surviving packets scheduled to be sent forward need to be kept in a queue (each with its own individual scheduled send time ahead of the local system time) for onward forwarding over the network. This can be verified quickly on a 10 mbs LAN with the wireless router link set to 500 kbs (remember to set Ethernet to "half-duplex" mode), along with various simulated loss rates and latencies. In its simplest form, you only need to modify one line and insert a loop-delay code (to "pause" the Linux TCP runs): 1. in the Linux fast-retransmit module code, the 3 DUP ACKs do not halve the CWND, ie the CWND is now unchanged (instead of CWND = CWND / 2) 2.
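The deterministic drop pattern of the test utility described above (drop a run of consecutive packets once every N generated packets) can be sketched as follows (a minimal sketch; the function name is illustrative, and the latency-queue part is omitted):

```c
#include <assert.h>

/* Deterministic loss pattern: drop `consecutive` packets once every
 * `interval` generated packets. interval=8, consecutive=1 gives a
 * 12.5% loss ratio; interval=8, consecutive=3 gives 37.5%.
 * Returns 1 if the n-th generated packet (counting from 0) should be
 * forwarded onward, 0 if it should be dropped. */
int should_forward(unsigned n, unsigned interval, unsigned consecutive)
{
    return (n % interval) >= consecutive;
}
```

Surviving packets would then be enqueued with a scheduled send time of, eg, receive time + 200 ms, to simulate the added RTT latency.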
At the same time, and in the same code section location, simply insert a few lines of code to "pause" the executions of the Linux TCP program (simulating "pause") for 0.3 seconds. Large FTP file transfers from the SAN over the external Internet / high-latency high-loss LFN should now show about 100% utilization of the available bandwidth; for example Shunra software can be interposed to simulate for example 10% loss rates and/or 300 ms of latency, that is, to simulate high loss rates over long distance, or one can simply write code to drop packets and introduce latency delays before sending the packet; this can also easily be verified by NS2-type simulation. It is very clear now that the present achieved size of the sender's TCP CWND will not by itself cause congestion drops in any way anywhere, since the sender TCP will only inject new packets corresponding exactly to the rate of the returning ACKs; it is noted that it is the momentary accelerated increase in the CWND size (which momentarily injects more packets into the network than the returning-ACK rate, for example an exponential increase doubling the returning-ACK rate) which is the main cause of packet drops - once the existing achieved CWND size is large, it will not cause more new packets to be injected into the network than the returning-ACK rate; this can occur only during a momentary CWND size increase. It is really simple to modify a few lines of the Linux source code; in Windows you only need to first obtain the intercept software module to take over the fast-retransmission functions of the MSTCP.
To implement in Windows, it is necessary to intercept each incoming/outgoing packet and modify the acknowledgment number field of incoming DUP ACKs to the MSTCP so that it never gets notification/knowledge of any request for fast retransmission of lost packets (the intercept software now performs all the fast-retransmission functions, not the MSTCP); this intercept software module can also take over all the RTO timeout retransmission functions of the MSTCP (for example it can mirror the MSTCP's own algorithm to track the RTO timeout interval, or implement newly contemplated modified algorithms). With the intercept software module now taking over all fast retransmission on DUP ACKs from the existing MSTCP and the RTO timeout retransmission functions, the intercept software can now fully control the generation of new MSTCP packets / transmission rates by immediately forwarding / temporarily withholding the intercepted returning ACKs to the MSTCP, and/or adjusting the receiver window size field within the intercepted ACKs to "0" to stop the generation of MSTCP packets. In for example the Linux/FreeBSD/Windows source codes, you should be able to just amend/insert a few lines to have this next-generation FTP immediately shown working in a very basic way: 1. in the Linux 3-DUP-ACK fast-retransmit module, you only need to remove the lines of code that change CWND to CWND / 2 (that is, CWND now remains unchanged). All other lines of code do not need to be amended at all; for example SSthresh now remains set to CWND (that is, TCP now only increments additively by 1 segment per RTT instead of doubling exponentially). This in itself should now show close to 100% link utilization even on LFN / external Internet with high drop rates (ie, shown working in a very crude way here).
To help test, you may need to use Shunra-type software that can introduce a % of packet drops and/or simulate route latencies, interposing this software between the next-generation FTP and the network on the sending side, or code a similar simple utility. 2. [Optional but definitely necessary later] the next-generation FTP must actually "pause" for an appropriate interval on packet-drop events such as 3 DUP ACKs, to purge all its own "extra" sent in-flight packets that are being buffered (whereas all existing regular TCPs/FTPs drastically reduce their CWND, causing severe, well-documented, unnecessary performance problems). For example, in Linux, you only need to insert some code to keep a record of min(RTT) or min(OTT) - if the actual uncongested RTT or uncongested OTT is not known in advance, the smallest RTT observed on the flow - and on the 3rd DUP ACK to "pause" all injection of packets into the network for eg 0.3 seconds (which is the most common router buffer size in equivalent seconds) or some algorithmically derived period (...later) [it is noted that one can also, in lieu of the pause, merely adjust CWND to the appropriate, corresponding algorithmic values, such as reducing the CWND size by the factor {last RTT value (or OTT where appropriate) - recorded min(RTT) value (or min(OTT) where appropriate)} / min(RTT), or reducing the CWND size by the factor [{last RTT value (or OTT where appropriate) - recorded min(RTT) value (or min(OTT) where appropriate)} / last RTT value], that is, CWND is now set to CWND * [1 - {last RTT value (or OTT where appropriate) - recorded min(RTT) value (or min(OTT) where appropriate)} / last RTT value], or adjusting the CWND size to CWND * min(RTT) (or min(OTT) where appropriate) / last RTT value (or OTT where appropriate), etc., depending on the desired contemplated algorithm].
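The CWND adjustment formulas just listed can be made concrete as follows (a minimal sketch with illustrative names; note that the two last-RTT-based forms, CWND * [1 - (lastRTT - min(RTT))/lastRTT] and CWND * min(RTT)/lastRTT, are algebraically the same quantity):

```c
#include <assert.h>

/* CWND * [1 - (last_rtt - min_rtt)/last_rtt]: scale CWND down by the
 * fraction of the last RTT attributable to queuing delay.
 * Times are in milliseconds; CWND in segments. */
unsigned cwnd_scaled_by_queue_fraction(unsigned cwnd,
                                       double last_rtt, double min_rtt)
{
    return (unsigned)(cwnd * (1.0 - (last_rtt - min_rtt) / last_rtt));
}

/* CWND * min_rtt/last_rtt: algebraically identical to the above. */
unsigned cwnd_scaled_by_min_ratio(unsigned cwnd,
                                  double last_rtt, double min_rtt)
{
    return (unsigned)(cwnd * min_rtt / last_rtt);
}
```

For example, with CWND = 100 segments, last RTT = 400 ms and min(RTT) = 300 ms, both forms reduce CWND to 75 segments, ie by the 25% of the last RTT that was queuing delay.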
It is noted that min(RTT) is the most current estimate of the uncongested RTT of the recorded route. 3. [Optional but definitely necessary later] the available bandwidth of the bottleneck link along the flow path can be easily determined (fairly well documented, though not perfectly, compared with recently developed techniques); in this way, once this upper limit of the available bandwidth is known/determined, the next-generation TCP should subsequently not cause CWND increments (whether exponential doubling or linear increase); once the next-generation TCP transmits at this achieved upper-limit rate, further CWND increments would only unnecessarily cause packets to be dropped. Simple initial implementation ideas (refinement 1): To verify, test using modified Linux; in its simplest form, you only need to modify 1 line and insert a loop-delay code (to "pause" the Linux TCP executions): 1. In the Linux fast-retransmit module code, on the 3 DUP ACKs do not halve the CWND, that is, CWND now remains unchanged (instead of CWND = CWND / 2). 2. at the same time, and in the same code section location, simply insert a few lines of code to "pause" the executions of the Linux TCP program (simulating "pause") for 0.3 seconds. [Later; it is much preferable to allow the first packet to be retransmitted and only then set the global 300 ms countdown variable to "pause" at this same location; Linux TCP in its "final packet transmission" code section then checks this variable for "pause" = 0 before allowing any kind of transmission anywhere (assuming Linux implements the "final transmission" queue to retain the packets held by this "pause")].
[Only later; it is much preferable to allow the first packet to be retransmitted and only then set the global 300 ms countdown variable to "pause" at this same location; Linux TCP in its "final packet transmission" code section then checks this variable for "pause" = 0 before allowing any kind of transmission anywhere (assuming Linux implements the "final transmission" queue to retain the packets held by this "pause").] Only much later; this can be conveniently achieved/implemented (only as a suggestion): 1. In the Linux fast-retransmit module code, on the 3 DUP ACKs do not halve the CWND, that is, the CWND now remains unchanged (instead of CWND = CWND / 2). 2. At the same time, and in the same code section location, simply set the global 300 ms countdown variable to "pause" at this same location (exactly where the CWND has been left unchanged instead of CWND / 2); Linux TCP in its "final packet transmission" code section then checks this "pause" variable for = 0 before allowing any kind of transmission anywhere, except where the packet sequence number =< largest unacknowledged sequence number sent (which can easily be obtained from the existing TCP parameters); that is, it allows packets to be sent forward despite the "pause" variable being > 0 only if the packet is a retransmission of a previously sent sequence number, ie Linux TCP always allows all RTO and/or fast-retransmission packets to be sent forward immediately and freely despite any CWND or effective window size restrictions whatsoever (since the retransmission packets will not at all increase the existing in-flight packets, whereas it is noted that any new packet sent with sequence number > largest unacknowledged sequence number sent would increase the total existing in-flight packets).
Another implementation would simply never decrease the CWND at all on congestion-drop events, only counting down the "pause" variable (whether set for example to a 300 ms interval or a derived interval such as last RTT - min(RTT) ... etc.) and not allowing any CWND increments at all while the "pause" variable > 0; this implementation is more aggressive since it does not help to purge the extra in-flight packets that are being buffered [also CWND can simply be left unchanged instead of being set to "0" or to largest sent SeqNo - sent UNA SeqNo, along with both step 1 and step 2]. You can also introduce this non-increment provision while the "pause" variable > 0 into the previous, later implementation, so that returning ACKs that advance the left edge of the sliding window only cause new packets (that is, packets with sequence number > largest sequence number sent) to be injected at the same rate corresponding to the clocking rate of the returning ACKs, and do not cause "accelerator" increase of CWND / injection of new packets, linear or exponential, beyond the clocking rate of the returning ACKs. When the global "pause countdown" variable > 0, Linux TCP should not increase the CWND at all even if an incoming ACK advances the left edge of the sliding window; that is, Linux TCP can inject new packets into the network at the same rate as the clocking rate of the returning ACKs, but does not "exponentially double" or "linearly increase" beyond the clocking rates of the returning ACKs (easily implemented by modifying all the CWND-increment code lines to first check whether the "pause" countdown > 0, and if so skip the increment). Alternatively, the Linux modification may simply require: 1.
Not changing/decreasing the CWND value at all on congestion-drop events, and also not increasing the CWND at all during the resulting "pause interval", for example the 300 ms triggered by the congestion-drop event (or an algorithmically derived interval such as last RTT - min(RTT) ... or max[last RTT - min(RTT), for example 300 ms] ... etc.); on congestion-drop events, the modified Linux TCP does not inject new "accelerator" packets into the network (ie, with sequence number > largest sequence number sent) beyond the clocking rate of the returning ACKs during the triggered "pause interval" [ie, CWND will not be incremented by returning ACKs that advance the left edge of the sliding window, even if CWND < maximum sender/receiver window size] and/or optionally 2. Always allowing retransmission packets (ie packets with sequence number =< largest sequence number sent) to be sent forward freely past any sliding-window mechanism. More refined for step 1 ... just set a CWND "pause" countdown of for example 300 ms, setting CWND to (largest sent sequence number - sent UNA sequence number), and restore the CWND after the countdown; in this way the Linux fast-retransmit module can "clear" the missing gap packets indicated by the multiple SACK fields of the subsequent same-sequence-number DUP ACKs, since each subsequently arriving DUP ACK of the same sequence number increments the CWND towards largest sent sequence number - sent UNA sequence number + 1 [whereas if CWND were set to "0" it might prevent the missing retransmission packets from being forwarded]; the modifications of step 1 by themselves should also work quite well without the need for step 2, but with the modifications of step 1 and step 2 together, it does not matter much even if the CWND is set to "0".
Setting CWND to largest sent sequence number - sent UNA sequence number has the same effect as setting it to "0" in preventing new additional "accelerator" packets from being injected into the networks, but allows the retransmission packets (with sequence number =< largest sequence number sent) to be sent forward freely.
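The pause behaviour described above can be sketched as follows (a minimal user-space model, not kernel code; struct and function names are illustrative assumptions, and sequence numbers are treated as packet counters for simplicity):

```c
#include <assert.h>

/* On a congestion-drop event the CWND is not halved: it is frozen at
 * the in-flight amount (largest sent seq - UNA seq) for the duration of
 * the "pause" countdown, so no new "accelerator" packets enter the
 * network while retransmissions (seq < largest sent) pass freely. */
struct conn {
    unsigned cwnd, saved_cwnd;
    unsigned snd_nxt, snd_una;   /* next new seq to send; oldest unacked */
    double   pause_until;        /* local time when the pause expires    */
};

void on_congestion_drop(struct conn *c, double now, double pause_ms)
{
    c->saved_cwnd  = c->cwnd;                 /* do NOT halve            */
    c->cwnd        = c->snd_nxt - c->snd_una; /* freeze at in-flight amt */
    c->pause_until = now + pause_ms;
}

void on_pause_expiry(struct conn *c)
{
    c->cwnd = c->saved_cwnd;                  /* restore after countdown */
}

/* May this packet be sent at time `now`?  Retransmissions always may. */
int may_send(const struct conn *c, unsigned seq, double now)
{
    if (seq < c->snd_nxt)
        return 1;                   /* retransmission: always allowed   */
    return now >= c->pause_until &&
           seq + 1 - c->snd_una <= c->cwnd;   /* simplified window test */
}
```

During the pause, a new packet at snd_nxt fails the window test because CWND is frozen at exactly the in-flight amount, while any retransmission is forwarded immediately, which is the combined effect of steps 1 and 2 above.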
Modification to TCP source codes of existing RFCs and simplified test sketches: The test should be (in comparison with the unmodified Linux TCP server): the modified Linux TCP server [+ for example 2/5/20% simulated packet drops + for example an RTT latency of 100/250/500 ms] -> router -> existing Linux TCP client. The link between the router and the client can be 500 kbps, the router can have a buffer of 10 to 25 packets. The sender and receiver window sizes of for example 32/64/256 Kbytes.
Linux TCP Modification Specification Suggestions: (A simple technique achieves "transmission pause" by setting CWND = 0 for eg 300 ms intervals, for easy real-life Linux modification implementations). 1. Wherever the existing Linux TCP multiplicatively decreases the CWND (CWND = CWND / 2) on congestion-drop events (3 DUP ACKs that halve the CWND, and the RTO timeout that resets CWND to 0), instead leave the CWND in place and only set a 300 ms "pause" countdown setting CWND to (largest sent sequence number - sent UNA sequence number) and restore the CWND after the countdown, but also set the SSThresh to the original CWND value instead of to the halved CWND value; this is exactly equivalent to a "pause" of 0.3 seconds for easy implementation. [Step 2 here may be optional but preferable; it may be added after testing with only step 1.] 2. Allow any retransmission packet with sequence number =< largest sent sequence number forward, despite CWND / effective window sliding-window availability; in the sliding-window code sections where Linux TCP verifies whether to allow the packet to be sent immediately forward (ie depending on whether largest sent sequence number - sent UNA sequence number < effective window size), simply insert code to "bypass" this check if the packet sequence number =< largest sent sequence number (ie, a retransmission packet, which should not be prevented from being forwarded at all); this way the Linux TCP retransmit modules can always "clear" all the "missing gap packets" indicated by the 3rd DUP ACK / the multiple subsequent DUP ACKs, immediately.
[Remember to incorporate sequence number wraparound protections] Useful Windows Platform Notes Intercept Fast-Retransmit Module This module (which takes over all the fast-retransmission functions of the MSTCP and modifies the acknowledgment numbers of the incoming DUP ACKs so that the MSTCP never knows of any DUP ACK events) must retransmit all "missing gap packets" indicated by the SACK fields of the incoming same-sequence-number DUP ACKs, keep a list of all sequence numbers retransmitted during these multiple same-sequence-number DUP ACKs, and must not unnecessarily retransmit what has already been retransmitted during the same subsequent series of same-sequence-number DUP ACKs, except where a subsequent same-sequence-number DUP ACK now indicates reception of retransmitted sequence-number packets on this "retransmitted list"; in this case, the module must retransmit only the "previously retransmitted missing gap packets" (ie, already on the retransmitted list) with sequence number < the largest received retransmitted sequence number, indicated by the newly arriving same-sequence-number DUP ACK. Of course, subsequent to a new 3rd DUP ACK of increased sequence number (sequence number now different and increased), this module can again retransmit all the "missing gap packets" indicated by the SACK fields of the incoming same-sequence-number DUP ACKs. Obviously, versions subsequent to the version/algorithms described above preferably provide that: 1.
wherever the existing Linux TCP multiplicatively decreases the CWND (CWND = CWND / 2, or CWND = 1 on the RTO timeout interval) on congestion-drop events (3 DUP ACKs that halve the CWND, and the RTO timeout that resets CWND to 1), instead leave the CWND unchanged and only set a minimum of (last RTT of the packet that triggers the fast retransmission on the 3rd DUP ACK or that triggers the RTO timeout - min(RTT), 300 ms) "pause" countdown setting CWND to 1, and restore the CWND to the current largest sent sequence number - sent UNA sequence number after the "pause" countdown (which may be a different value from when the "pause" was first triggered); after the countdown, you must also set SSThresh to the largest sent sequence number - sent UNA sequence number value (as at the time when the "pause" was triggered) instead of to the halved or "1" CWND value; this is exactly equivalent to a "pause" of 0.3 seconds for easy implementation. Note: In this way, after the "pause" countdown, the modified Linux TCP does not cause sudden "burst" transmissions using the returning-ACK clocking accumulated during the congestion-drop-triggered "pause" interval, immediately congesting the link again; after the "pause" countdown it transmits only at the subsequent returning-ACK clocking rate (that is, not including any of the returning-ACK clocking signals accumulated during the "pause" interval). Perhaps even more preferable: 1.
Wherever the existing Linux TCP multiplicatively decreases the CWND (CWND = CWND / 2, or CWND = 1 on the RTO timeout) on congestion-drop events (3 DUP ACKs that halve the CWND, and the RTO timeout that resets CWND to 1), instead leave the CWND unchanged and only set a minimum of (last RTT of the packet that triggers the fast retransmission on the 3rd DUP ACK or that triggers the RTO timeout - min(RTT), 300 ms) "pause" countdown setting the CWND to largest sent sequence number - sent UNA sequence number [Note: setting CWND to this value, instead of 1, will allow all retransmission packets, ie with sequence number =< largest sent sequence number, to be sent forward immediately and freely regardless of sliding-window interval availability, but it is noted that after the "pause" countdown, the current largest sent sequence number - sent UNA sequence number will still be the same as in the case of CWND being set to "1" before the "pause" countdown] and restore the CWND to the current largest sent sequence number - sent UNA sequence number after the "pause" countdown (which may be a different value from when the "pause" was first triggered); after the countdown, also set SSThresh to the largest sent sequence number - sent UNA sequence number value (as at the moment when "pause" was triggered) instead of to the halved or "1" CWND value; this is exactly equivalent to a "pause" of 0.3 seconds for easy implementation.
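The pause interval min(last RTT of the triggering packet - min(RTT), 300 ms) used in the refinements above can be sketched as follows (illustrative names; the BER observation is taken from the refinement 2 discussion later in the text):

```c
#include <assert.h>

/* Pause interval = min(last RTT - min(RTT), 300 ms).  last RTT - min(RTT)
 * approximates the queuing delay along the path: if the drop was caused
 * by physical BER on an uncongested path, last RTT ~ min(RTT) and the
 * pause collapses toward 0 ms, so forwarding is not needlessly stopped;
 * under full buffer exhaustion it is capped at 300 ms (the typical
 * router buffer size in equivalent seconds). */
double pause_interval_ms(double last_rtt_ms, double min_rtt_ms)
{
    double queue_ms = last_rtt_ms - min_rtt_ms;
    if (queue_ms < 0.0)
        queue_ms = 0.0;           /* guard against measurement noise */
    return queue_ms < 300.0 ? queue_ms : 300.0;
}
```

For example, with min(RTT) = 200 ms: a triggering-packet RTT of 650 ms gives the full 300 ms pause, 250 ms gives a 50 ms pause, and 200 ms (a BER drop on an uncongested path) gives no pause at all.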
Modifications of the TCP source code of existing RFCs and simplified test sketches (refinement 1): This simplest initial TCP source code modification of step 1 alone should initially confirm close to 100% utilization of the available link bandwidth. The specific test setup should be (compared with for example the unmodified Linux/FreeBSD/Windows TCP server): the modified Linux TCP server -> (can be implemented using IPCHAINS) simulated 1-in-10 packet drops with 200 ms RTT latency (larger preferred) -> router -> existing Linux TCP client. The link between the router and the client can be 1 mbs (larger preferred), the router can have a 1 mbs * for example chosen pause value 0.3 / 8 = 40 Kbytes (ie 40 1-Kbyte packets) buffer size. The sender and receiver window sizes of 64 Kbytes (larger preferred). Simplest initial step 1 Linux TCP modification specification suggestions: (A simple technique achieving the "transmission pause" by setting CWND = 0 during for example the 300 ms interval, for easy real-life Linux modification implementations). 1. Wherever the existing Linux TCP decreases the CWND (CWND = CWND / 2, or CWND = 1 on the RTO timeout interval) on congestion-drop events (3 DUP ACKs that halve the CWND, and the RTO timeout that resets the CWND to 1), instead leave the CWND unchanged and only set the 300 ms "pause" countdown setting the CWND to 1 and restore the CWND to its original value after the countdown; you must also set the SSThresh to the original CWND value instead of to the halved or "1" CWND value; this is exactly equivalent to a "pause" of 0.3 seconds for easy implementation.
Note: this will stop all forward transmissions/retransmissions for example for 300 ms (to purge the buffers) on the RTO timeouts and third DUP ACKs, except for the first retransmission packet on the 3rd DUP ACK that triggers the fast-retransmission mechanism and on the RTO timeout intervals (this is always sent forward by the Linux TCP despite sliding-window interval availability). Also, any subsequent multiple fast-retransmission packets retained/held by this 300 ms "pause" will be sent forward immediately after the 300 ms countdown (only if CWND has not reached the maximum receive/send window size; since the CWND is not decreased at all, the CWND has probably already reached the maximum send/receive window size); in this way the subsequent multiple fast-retransmission packets held/stopped by this 300 ms "pause" will probably be sent forward only at the same rate as the clocking rate of the returning ACKs (though fortunately including any ACKs accumulated during the 300 ms pause period) when the 300 ms countdown completes; this simplest of modifications would already be a "phenomenal" commercial success with Google/Yahoo/Amazon/RealPlayer etc.
Modifications of the TCP source code of the existing RFCs and simplified test sketches (refinement 2): 1. Wherever the existing Linux TCP multiplicatively decreases the CWND (CWND = CWND / 2, or CWND = 1 on RTO timeout intervals) on congestion-drop events (3 DUP ACKs that halve the CWND, and the RTO timeout that resets the CWND to 1), instead leave the CWND unchanged and only set a minimum of (last RTT of the packet that triggers the fast retransmission on the 3rd DUP ACK or that triggers the RTO timeout - min(RTT), 300 ms) "pause" countdown setting the CWND to 1, and restore the CWND to its original value after the countdown; you must also set the SSThresh to the original CWND value instead of to the halved or "1" CWND value; this is exactly equivalent to a "pause" of 0.3 seconds for easy implementation. Note: in this way, if the packet-drop event is triggered by physical transmission/BER errors instead of the usual expected complete buffer exhaustion (typical buffer size is 300 ms) that causes the drops, the modified Linux TCP does not "pause" or unnecessarily stop any forward transmission at all; where the packet drops are caused by BER and the link is uncongested, the "pause" countdown will now correctly be set close to 0 ms instead of looping forever, "pausing" 300 ms at a time continuously. It is noted that the IPCHAINS method of simulating packet-drop events does not correspond to congestion or complete buffer exhaustion events at all. However, the previous modification specifications will still work, but the test should now instead be: the unmodified Linux TCP server with large FTPs, for example 5 in parallel, over router 1 with a 1 mbs link and/or congestive traffic generators (or even short 300 ms periods of congestive UDP burst generation every for example 1.5 seconds) (1 mbs link); the modified Linux TCP server (1 mbs link) -> router 1 (1 mbs link) -> existing Linux TCP client.
The link between the router and the client must be 1 mbs (larger preferred), the router must have a 1 mbs * for example chosen pause value 0.3 / 8 = 40 Kbytes (ie, 40 1-Kbyte packets) buffer size. The sender and receiver window sizes 64 Kbytes (larger preferred). Note: in this way any packet-drop events will always correspond strictly to complete buffer exhaustion scenarios, and the "pause" for 300 ms now makes good sense (or the "pause" interval of the triggering packet's RTT - min(RTT) if =< 300 ms, for example where very small intermediate-node buffer capacity is deployed). Finally; the previous test setup with IPCHAINS will work with only never decreasing the CWND size at all, without needing the "pause" at all; it exhibits 100% link utilization, but it is aggressive, not TCP friendly. 1. Wherever the existing Linux TCP decreases the CWND (CWND = CWND / 2, or CWND = 1 on the RTO timeout interval) on congestion-drop events (3 DUP ACKs that halve the CWND and the RTO timeout that resets the CWND to 1), instead leave the CWND in place with no change at all; you must also set the SSThresh to the unchanged CWND value instead of to the halved or "1" CWND value; this by itself ensures close to 100% link utilization despite the drop rates and RTT latencies.
Receiver-based, incrementally deployable, TCP-friendly external Internet TCP modifications The receiver TCP source code can be modified directly (or similarly an intercept monitor adapted to perform/work around to achieve the same), and it will still work with all the TCPs of the existing RFCs; Preliminary design (see also the several techniques and sub-component techniques described above) [note: it is now clear that the CWND size once achieved, however large, does not by itself cause congestion drops; it is the momentary accelerating increase in the CWND size, for example exponential or linear growth beyond the clocking rate of the returning ACKs, which is the main cause of packet drops due to congestion]. 1. The receiver TCP, when sending the 3 DUP ACKs, immediately follows through with an algorithmically derived/determined number of multiple same-sequence-number DUP ACKs (the sending rates of these multiple same-sequence-number DUP ACKs may also be algorithmically controlled to control the sender TCP's CWND size, and in this way the sending rates as desired); this way the receiver can control the sender's CWND size, for example so that it is not halved on the fast retransmission of the 3 DUP ACKs, or dictate clocked increments of the CWND size according to the receiver's detection of the route's congestion levels (uncongested / onset of buffering delay beyond certain preset values / packet drop by congestion etc.). This can be combined with several previous techniques such as large window sizes, inter-packet arrivals to detect early packet drops, adjusting the receiver window size (for example "0" to fully pause the sender's effective-window transmission rates; in this way the receiver window size now controls the sender's effective-window transmission rates instead of CWND) etc.
The receiver can also use the sender-CWND-size tracking method to help determine the generation rates of the multiple DUP ACKs; it may also include 1 byte of data in certain generated ACKs so that the sender will notify the receiver precisely which of the DUP ACKs were received at the sender TCP; or 2. The receiver TCP withholds the ACKs for a certain number of previously received sequence numbers; the sender TCP can thus be made to transmit only (i.e. with synchronized increments of the sender's CWND size) at the rate at which the receiver generates multiple ACKs of the same sequence number (derived algorithmically as desired), so the receiver can control the sender's rate; effectively the sender's TCP is now almost always in fast-retransmit mode. With sufficiently large negotiated receiver and sender window sizes, the multiple same-sequence-number DUP ACKs may allow Gigabytes to be transferred while the DUP ACKs' sequence number stays the same, or the sequence number can be raised to a larger successfully received sequence number at any time before actual window-size exhaustion, to "slide" the sender's window edge forward. (This can be combined with the techniques for keeping the sender's CWND size large enough at all times); and/or 3. The receiver TCP never generates 3 DUP ACKs and only allows the sender's RTO-timeout retransmissions (preferably with sufficiently large negotiated window-scale sizes to ensure continuous sender transmissions without being stalled by the withheld acknowledgements before the longer RTO timeout fires); since the sender resets its CWND to "0" or "1" on RTO timeout, the receiver needs to ensure rapid restoration of the sender's exponential CWND increase by a number of DUP ACKs sent after detecting the RTO-timeout retransmissions.
Notes: routers can conveniently set their buffers to a smaller size equivalent to e.g. 50 ms (see published Google-searchable reports on the improved efficiencies of such small buffer settings), and can also adapt the RED mechanism to drop, for example, the first buffered packet of any flow that already has packets resident in the buffer; this helps achieve real-time transmission / TCP traffic-input rates over Internet subsets. Also, TCPs can simply regulate their rates, "pausing" to immediately drain the onset of any buffering / reducing the CWND size appropriately to allow the onset of any buffering to be drained. The above receiver TCPs can independently use the SACK fields to carry blocks of sequence numbers received beyond the "clamped" same sequence number of the multiple-DUP-ACK signal; additional SACK fields can also be used to carry occasional subsequent missing "gap" packets (the RFCs allow 3 SACKed blocks, and SACKed sequence numbers will not be unnecessarily retransmitted by existing RFC TCPs). Receiver TCPs here can use the "SACK field blocks", generating the "clamped", "synchronized" same-sequence-number DUP ACKs (thereby controlling the sender's sliding-window advertised value to control the effective window size, and the number of multiple same-sequence-number DUP ACKs generated to control the sender's CWND), adjusting the receiver window sizes, tracking the sender's CWND size etc., all of which allow the receiver to control or "pause" the sender / its effective window size / its CWND size according to the receiver's monitoring of the onset of path congestion / buffer-exhaustion packet drops (distinguishable from BER packet drops while the path is decongested, since the OTT is then distinguishably beyond the min(OTT) recorded so far). Varied Notes. There are many different ways and many different combinations of the described sub-component methods possible, to implement the desired modifications in many different, perhaps even simpler, ways. For example, where all the TCPs in the network were modified similarly, it would be very easy for each sender TCP to just "pause" (or for the receiver-based TCP to cause the sender TCP to "pause"), e.g. for the interval of last RTT (or OTT where appropriate) minus the recorded min(RTT) (or min(OTT) where appropriate), to ensure PSTN-like transmission qualities throughout the entire network / Internet sub-network. Instead of the above "pause", modified TCPs can each reduce their CWND size by, for example, CWND * (last RTT - min(RTT)) / last RTT, or e.g. CWND * (last RTT - min(RTT)) / min(RTT) etc., depending on the desired algorithms contemplated, e.g. to ensure that the total number of in-flight packets is immediately reduced ASAP so that any excess in-flight packets (more than the link's available physical bandwidth can carry without causing the onset of buffering) that would cause or require buffering can be completely drained (or buffering reduced by certain levels), i.e. to ensure all subsequent outstanding in-flight packets will no longer require buffering along the path (or will require buffering reduced by certain levels). Where all the receiver TCPs in the network are modified in the manner described above, the receiver TCPs can have full control of the sender TCPs' transmission rates through their complete control of the series of same-sequence-number multiple DUP ACK generations, rates, spacings, temporary stops etc., according to the desired algorithm contemplated...
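One plausible reading of the CWND-reduction formula quoted above is that CWND * (last RTT - min(RTT)) / last RTT estimates the portion of the window currently sitting in buffers along the path, and the window is shrunk by that amount so the remainder matches what the decongested path can carry. A minimal sketch under that assumption (names and the interpretation itself are illustrative, not confirmed by the text):

```python
# Sketch of the buffered-backlog estimate and the corresponding CWND
# reduction. Times are in milliseconds, window sizes in bytes.

def queued_bytes(cwnd: float, last_rtt_ms: float, min_rtt_ms: float) -> float:
    """Estimated bytes of this flow's window held in path buffers:
    the fraction of the RTT attributable to queueing, applied to CWND."""
    return cwnd * (last_rtt_ms - min_rtt_ms) / last_rtt_ms

def reduced_cwnd(cwnd: float, last_rtt_ms: float, min_rtt_ms: float) -> float:
    """Shrink CWND by the estimated backlog, leaving cwnd * min_rtt/last_rtt."""
    return cwnd - queued_bytes(cwnd, last_rtt_ms, min_rtt_ms)
```

With CWND = 100,000 bytes, last RTT = 400 ms and min(RTT) = 100 ms, roughly 75,000 bytes are estimated to be queued, so the window would shrink to 25,000 bytes.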
for example multiplicative and/or linear increase of the multiple DUP ACK rates each RTT (or OTT) while the RTT (or OTT) remains no greater than the currently recorded min(RTT) (or the currently recorded min(OTT)) etc. Additionally, once the RTT (or OTT) becomes greater than the currently recorded min(RTT) (or the currently recorded min(OTT)), i.e. the onset of congestion is detected, the modified receiver-based TCP (or the intercepting/forwarding proxy software etc.) may "pause" for the algorithmically contemplated period; during this period the modified receiver-based TCPs may "freeze" the generation of additional extra DUP ACKs except as required to correspond to newly arriving sequence-number packets (i.e. 1 DUP ACK generated for each 1 newly arriving sequence-number packet); this will allow the reduction / draining / prevention of the sender's additional total in-flight packets being buffered along the path. Receiver-based TCP may include e.g. 1 byte of garbage data in a "marked, selected" DUP ACK, to assist the receiver in detecting/computing the RTT / OTT / total in-flight packets etc., using the ACK number and sequence number of the sender's subsequently received packets etc.
Presentation 21 November 2005. Various Next Generation TCP refinements and notes: 100% link-utilization data transfer, TCP friendly, incrementally deployable over the external Internet. At the highest level, CWND now never gets reduced at all. It is easy to use the Windows explorer "search in folder" to locate each and every occurrence of the CWND variable in all the sub-folders/files, to be thorough about the RTO timeout... even when congestion-induced, it must not reduce/reset CWND at all. The pseudocode of the RTO-timeout algorithm, modifying the existing RFC specifications, would be (for indications of "drops due to real congestion"):
Timeout: /* multiplicative decrease */
registered CWND = CWND (but if another RTO timeout occurs during a "pause" in progress, then registered CWND = registered CWND /* we do not want to erroneously reduce the CWND size */);
ssthresh = CWND (but if another RTO timeout occurs during a "pause" in progress, then SSThresh = registered CWND /* we do not want to erroneously reduce the SSThresh size */);
compute the "pause" interval and set CWND = "1 * MSS", restoring CWND = registered CWND after the "pause" countdown.
The pseudocode of the RTO-timeout algorithm, modifying the existing RFC specifications, would be (for indications of "drops not due to congestion"):
Timeout: /* multiplicative decrease */
ssthresh = ssthresh; CWND = CWND; /* both unchanged */
You just need to ensure that the modified RFC TCP complies with these simple rules of thumb: 1. Never reduce the CWND value at all, except to temporarily "pause" on indications of "real congestion" (restoring CWND to the registered CWND afterwards).
It is noted that on real-congestion indications (last RTT when the 3rd DUP ACK arrives, or when the RTO times out, minus min(RTT) > e.g. 200 ms) SSThresh needs to be set to the pre-existing CWND so that subsequent CWND increments are linear and additive. 2. On non-congestion indications (last RTT when the 3rd DUP ACK arrives, or when the RTO times out, minus min(RTT) < e.g. 200 ms), for both the fast-retransmit and the RTO-timeout modules, do not "pause" and do not allow the existing RFCs to change the CWND value or the SSThresh value at all. It is noted that any current "pause" in progress (which can only have been triggered by a "real congestion" indication) should be allowed to progress through its countdown (both for the fast-retransmit and the RTO-timeout modules). 3. If a current "pause" is already in progress, subsequent intervening "real congestion" indications will now completely end the current "pause" and begin a new "pause" (a matter of only setting/overwriting a new "pause" countdown value); taking care that both the fast-retransmit and the RTO-timeout modules now set registered CWND = registered CWND (instead of = CWND) and SSThresh = registered CWND (instead of CWND). Full specification of the very simple first basic working version; only very simple few-line FreeBSD/Linux TCP source-code modifications [initially the initialized min(RTT) needs to be set very large, e.g. = 30,000 ms, then continuously set min(RTT) = min(last RTT of the arriving ACK, min(RTT))].
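The congestion/non-congestion discrimination in the rules above, and the continuous min(RTT) update just mentioned, can be sketched as two small functions. This is an illustrative sketch; the function names are hypothetical, and the 200 ms and 30,000 ms figures are the example values from the text.

```python
# Sketch of the running min(RTT) update and the real-congestion test.
# Times are in milliseconds.

def update_min_rtt(min_rtt_ms: float, last_rtt_ms: float) -> float:
    """min(RTT) = min(last arriving ACK's RTT, min(RTT)); min_rtt_ms is
    initialized very large, e.g. 30,000 ms, per the text."""
    return min(last_rtt_ms, min_rtt_ms)

def is_congestion_drop(last_rtt_ms: float, min_rtt_ms: float,
                       threshold_ms: float = 200.0) -> bool:
    """A drop counts as 'real congestion' only when the triggering
    packet's RTT exceeds the recorded min(RTT) by more than the threshold."""
    return (last_rtt_ms - min_rtt_ms) > threshold_ms
```

So a drop seen at 250 ms RTT over a 100 ms min(RTT) would be classed as non-congestion (BER etc.), while one at 450 ms would trigger the "pause".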
1.1 On the 3rd DUP ACK: if the RTT of the arriving ACK that triggers the 3-DUP-ACK fast retransmit, minus the current recorded min(RTT), =< e.g. 200 ms (i.e. it is now known that this packet drop could not possibly have been caused by a "congestion event", so SSThresh should not be unnecessarily set to the CWND value), then do not change the CWND / SSThresh values at all (i.e. do not set CWND = CWND/2 nor SSThresh to CWND/2, as is currently done in the existing fast-retransmit RFCs). Otherwise, set SSThresh to be the same as the existing registered CWND size (instead of CWND/2 as in the existing fast-retransmit RFCs), keep a record of the existing CWND size, set CWND = "1 * MSS", and set a global "pause" countdown variable = minimum of (last RTT of the packet triggering the 3rd-DUP-ACK fast retransmit or the RTO timeout, minus min(RTT); 300 ms). [Note: setting CWND = 1 * MSS will cause the desired temporary pause/halt of all packet forwarding, except for the first fast-retransmitted packet, to allow packets buffered along the path to be drained before the TCP resumes sending.] End if. 1.2 After the "pause" countdown variable expires, CWND is restored to the previously registered CWND value (i.e. the sender can now resume normal sending after the "pause"). 2.1 On RTO timeout: if the RTT of the last returning ACK when the RTO timed out, minus the current recorded min(RTT), =< e.g. 200 ms (i.e. it is now known that this packet drop was possibly not caused by a "congestion event", so the CWND value should not be unnecessarily reset to 1 * MSS), then do not reset the CWND value to 1 * MSS nor change the CWND value at all (i.e. do not reset CWND at all, as is currently done on RTO timeouts in the existing RFCs).
Otherwise, a record of the existing CWND size should be kept, CWND = "1 * MSS" set, and a global "pause" countdown variable set = minimum of (last RTT of the packet when the RTO timed out, minus min(RTT); 300 ms). [Note: setting CWND = 1 * MSS will cause the desired pause/halt of all packet forwarding, except for the RTO-timeout retransmission packets, to allow packets buffered along the path to be drained before TCP resumes sending.] 2.2 After the "pause" countdown variable expires, CWND is restored to the previously registered CWND value (i.e. the sender can now resume normal sending after the "pause").
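Steps 1.1-2.2 above can be sketched as a small state machine: on a drop classified as real congestion (from either the 3rd DUP ACK or an RTO timeout), register the existing CWND, set SSThresh to it, collapse CWND to 1 * MSS for the "pause", and restore the registered CWND when the countdown expires; non-congestion drops change nothing. Class and variable names are illustrative, not from any real TCP stack.

```python
# Minimal sketch of the register/pause/restore rules of steps 1.1-2.2.
# Windows in bytes, times in milliseconds.

MSS = 1460  # bytes

class PausingTcp:
    def __init__(self, cwnd: int, ssthresh: int):
        self.cwnd = cwnd
        self.ssthresh = ssthresh
        self.registered_cwnd = cwnd
        self.pause_remaining_ms = 0.0

    def on_congestion_drop(self, last_rtt_ms: float, min_rtt_ms: float) -> None:
        """Steps 1.1 / 2.1, 'real congestion' branch."""
        if self.pause_remaining_ms <= 0:       # don't re-register mid-pause
            self.registered_cwnd = self.cwnd
        self.ssthresh = self.registered_cwnd   # not CWND/2 as in the RFCs
        self.cwnd = 1 * MSS                    # halt almost all forwarding
        self.pause_remaining_ms = min(last_rtt_ms - min_rtt_ms, 300.0)

    def on_non_congestion_drop(self) -> None:
        """Steps 1.1 / 2.1, non-congestion branch: change nothing."""
        pass

    def on_pause_expired(self) -> None:
        """Steps 1.2 / 2.2: resume at the pre-pause window."""
        self.cwnd = self.registered_cwnd
        self.pause_remaining_ms = 0.0
```

A real implementation would hook these transitions into the fast-retransmit and RTO paths of the kernel; this sketch only captures the window bookkeeping.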
Background materials: the last RTT of the packet that triggers the 3rd-DUP-ACK fast retransmit or the RTO timeout is readily available from the existing Linux TCB's maintained variable for the last measured round-trip time; the recorded minimum min(RTT) is only readily available from the maintained TCB variables of the existing Westwood / FastTCP / Vegas TCPs, but it should be easy enough to write a few lines of code to continuously update min(RTT) = minimum of [min(RTT), last measured round-trip RTT]. References: http://www.cs.umd.edu/~shankar/417-Notes/5-note-transportCongControl.htm; Linux TCB maintained RTT variables <http://www.scit.wlv.ac.uk/rfc/rfc29xx/RFC2988.html>; RTO computation, Google search term "TCP RTT variables" <http://www.psc.edu/networking/perf_tune.html>; Linux TCP RTT parameter tuning, Google searches: "Linux TCP minimum recorded RTT" or "Linux minimum recorded TCP RTT variable". Note: TCP Westwood measures the minimum RTT.
Notes: 1. The above "congestion notification trigger events" can alternatively be defined as when last RTT - min(RTT) >= a specified interval, e.g. 5 ms / 50 ms / 300 ms etc. (corresponding to the buffering delays experienced along the path above and beyond the pure decongested RTT or its estimate min(RTT)), instead of the packet-drop indication event. 2. Once the "pause", triggered by real-congestion drop indications, has counted down, the previous algorithms/schemes can be adapted so that CWND is now set to a value equal to the total outstanding in-flight packets at this instant of "pause"-countdown completion (i.e. equal to the largest sequence number sent last minus the largest returned ACK number last); this would prevent a sudden large burst of packets generated by the source TCP, since during the "pause" period many returning ACKs may have been received which could have substantially advanced the sliding-window edge. As one alternative example among many possible, CWND can initially, on the 3rd-DUP-ACK fast-retransmit request that triggers the "pause" countdown, be left unchanged (instead of set to "1 * MSS") or set to a value equal to the total outstanding in-flight packets at that instant, and a value equal to the instantaneous total of outstanding in-flight packets is restored when the "pause" has counted down [optionally less the total number of additional multiple same-sequence-number DUP ACKs (beyond the 3 initial DUP ACKs that trigger the fast retransmit) received before this instant of "pause"-countdown completion (i.e. equal to the largest sequence number sent last minus the largest returned ACK number last at this instant)]; the modified TCP can then withhold the new packets that would otherwise be clocked out into the network corresponding to each of the additional multiple same-sequence-number DUP ACKs received during the "pause" interval, and after the "pause" countdown can optionally restart with slowed transmission rates to drain the buffer occupancies along the path, if CWND is now restored to a value equal to the new instantaneous total of outstanding in-flight packets minus the total number of additional multiple same-sequence-number DUP ACKs received during the "pause", when the "pause" has counted down. Another possible example is for CWND initially, on the 3rd-DUP-ACK fast-retransmit request that triggers the "pause" countdown, to be set to "1 * MSS", and then restored to a value equal to the instantaneous total of in-flight packets minus the total number of additional multiple same-sequence-number DUP ACKs when the "pause" has counted down; in this way the modified TCP with the "pause" counting down does not burst the new packets but only begins emitting new packets into the network corresponding to the subsequent new returning ACK rates. 3. The global "pause" countdown variable of the previous algorithms/schemes, = minimum of (last RTT of the packet triggering the 3rd-DUP-ACK fast retransmit or the RTO timeout, minus min(RTT); 300 ms) above, can instead be set = minimum of (last RTT of the packet triggering the 3rd-DUP-ACK fast retransmit or the RTO timeout, minus min(RTT); 300 ms; max(RTT)), where max(RTT) is the largest RTT observed so far. The inclusion of this max(RTT) is to ensure that, even in some probably very rare circumstance where the nodes' buffer capacity is extremely small (for example in a LAN or even a WAN), the "pause" period will not be unnecessarily set too large, such as to the specified 300 ms value. Also, instead of the 300 ms of the previous examples, the value can be algorithmically and dynamically derived for each of the different paths. 4.
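Note 3's refined pause interval, and note 2's restore value (the instantaneous in-flight total less the extra same-sequence DUP ACKs absorbed during the pause), can be sketched as follows. Function names are illustrative; the in-flight quantity is treated in bytes, with one MSS subtracted per absorbed DUP ACK, which is one plausible reading of the text.

```python
# Sketch of note 3's max(RTT)-capped pause interval and note 2's
# restored-CWND computation. Times in milliseconds, windows in bytes.

MSS = 1460

def pause_interval_ms(last_rtt_ms: float, min_rtt_ms: float,
                      max_rtt_ms: float, cap_ms: float = 300.0) -> float:
    """min(last RTT - min(RTT), 300 ms, max(RTT)): the max(RTT) term keeps
    tiny-buffer LAN/WAN paths from pausing a needlessly long 300 ms."""
    return min(last_rtt_ms - min_rtt_ms, cap_ms, max_rtt_ms)

def restored_cwnd(highest_seq_sent: int, highest_ack_rcvd: int,
                  extra_dup_acks: int) -> int:
    """Restore CWND to the instantaneous in-flight total, less one MSS per
    extra same-sequence DUP ACK received during the pause, floored at 1 MSS."""
    in_flight = highest_seq_sent - highest_ack_rcvd
    return max(in_flight - extra_dup_acks * MSS, MSS)
```

On a LAN path whose largest observed RTT is 20 ms the pause collapses to 20 ms rather than 300 ms, as the note intends.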
A simple method to allow an extended network deployment phase designed for guaranteed service (or just a network free from congestion drops, and/or just a network with much smaller buffering delays) would be for all (or almost all) routers and switches at the nodes in the network to modify/update their software to immediately generate a total of 3 DUP ACKs toward the traversing TCP flows' sources, to signal the sources to reduce their transmission rates when the node begins to buffer the traversing TCP packets (i.e. the outbound link is now 100% utilized and the aggregate traversing TCP sources' packets begin to be buffered). The generation of the 3 DUP ACKs can alternatively be triggered, for example, when the outbound link reaches a specified utilization level, e.g. 95% / 98% etc., or some other specified trigger condition. It does not even matter whether the packet corresponding to the 3 pseudo DUP ACKs is actually received correctly at the destination, as the destination's subsequent ACKs to the source remedy this. The generated 3-DUP-ACK packet fields contain the minimum required source and destination addresses and the sequence number (which can easily be obtained by inspecting the packets currently being buffered, taking care that the ACK field of the 3 pseudo DUP ACKs is obtained/derived from the acknowledgement (ACK) number of the inspected buffered packets). Alternatively, the ACK-number field of the 3 pseudo DUP ACKs can be obtained or derived from, for example, a table maintained in the switches/routers of the latest and largest ACK number generated by the destination TCP for the particular unidirectional source/destination TCP flow, or the switches/routers can wait for a destination-to-source packet to arrive at the node and obtain/derive the ACK-number field of the 3 pseudo DUP ACKs by inspecting the ACK field of the returning packets.
Similarly to the previous schemes, the existing RED and ECN mechanisms can likewise have their algorithms modified as outlined above, enabling networks capable of guaranteed real-time service (no congestion drops and/or networks with much lower buffering delays). 5. Another variant implementation on Windows: first the module needs to take over all of MSTCP's fast-retransmit / RTO-timeout handling, i.e. MSTCP never sees any DUP ACK or RTO timeout; the module will simply acknowledge the newly intercepted packets from MSTCP (only later, and where required, sending a window-size update of "0" to MSTCP, or modifying the window-size field of incoming network packets to "0", to pause/reduce MSTCP's packet-generation rate upon congestion notifications, e.g. 3 DUP ACKs or RTO timeout). The module builds a list of sequence numbers / packet copies / system times of all sent packets (well sorted by sequence number) and performs the fast retransmissions / RTO retransmissions from this list. All entries in the list with sequence number < the current largest received ACK will be removed; all SACKed sequence numbers will also be removed. Remember that the "sequence number wraparound" and "timestamp wraparound" protections need to be incorporated in this module. As the module acknowledges all outgoing intercepted MSTCP packets, the Windows software now does not need to alter any incoming network packets to MSTCP at all... MSTCP will simply ignore the 3 received DUP ACKs since they are now already outside the sliding window (already acknowledged), and will never have packets reach the RTO timeout (already acknowledged); the module can now easily control MSTCP's packet-generation rates at any time, by changing the receiver-window-size fields etc.
The software can emulate MSTCP's own window-increase / congestion-control / AIMD mechanisms, by allowing at any moment a maximum number of in-flight packets equal to the emulated/tracked CWND size of the MSTCP; as a summary overview example (among many possible), this can be achieved e.g. by assuming, for each of the emulated/tracked returning ACKs, that the pseudo-mirror CWND size doubles every RTT when there has been no 3-DUP-ACK fast retransmit, but once one has occurred the emulated/tracked pseudo-mirror CWND size will now only be increased by 1 * MSS per RTT. The software will always allow an instantaneous maximum total of outstanding in-flight packets of no more than the emulated/tracked pseudo CWND size, and will regulate MSTCP's packet generation by sending a receiver-window-size update of "0" / modifying the receiver window size of incoming packets to "0", to "pause" MSTCP transmissions whenever the pseudo CWND size would be exceeded.
This Windows software can then keep track of or estimate the MSTCP CWND size at all times, by following the largest sequence number of the latest MSTCP packets sent and the largest ACK number of the latest incoming network packets (this difference gives the total outstanding in-flight packets, which corresponds very well to MSTCP's CWND value). The Windows software here only needs to make sure to stop the automatic acknowledging of packets toward MSTCP once the total number of in-flight packets >= the CWND estimate mentioned above (or alternatively the effective window size derived from the above CWND estimate and RWND and/or SWND). It is noted that, as of this date, the best method known to the applicant for carrying out the aforementioned invention is that which is clear from the present description of the invention.
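The seq/ack tracking just described can be sketched as a tiny tracker; names are illustrative, and a real intercept module would also have to handle 32-bit sequence-number wraparound, which is omitted here for brevity.

```python
# Sketch of the in-flight estimator: follow the largest sequence number
# sent and the largest ACK seen; their difference approximates MSTCP's
# CWND, and auto-ACKing stops once in-flight reaches the estimate.

class CwndTracker:
    def __init__(self):
        self.highest_seq_sent = 0
        self.highest_ack_seen = 0

    def on_outgoing(self, seq: int, length: int) -> None:
        """Record an intercepted outgoing segment (seq = first byte)."""
        self.highest_seq_sent = max(self.highest_seq_sent, seq + length)

    def on_incoming_ack(self, ack: int) -> None:
        """Record the cumulative ACK from an incoming network packet."""
        self.highest_ack_seen = max(self.highest_ack_seen, ack)

    def in_flight(self) -> int:
        """Outstanding bytes: a proxy for the sender's CWND."""
        return self.highest_seq_sent - self.highest_ack_seen

    def may_auto_ack(self, cwnd_estimate: int) -> bool:
        """Stop auto-acknowledging once in-flight >= the CWND estimate."""
        return self.in_flight() < cwnd_estimate
```

With two 1460-byte segments sent and the first acknowledged, in-flight is 1460 bytes, so auto-ACKing continues under a 2-MSS estimate but stops under a 1-MSS estimate.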

Claims (14)

CLAIMS Having described the invention as above, the content of the following claims is claimed as property: 1. Methods for improving the Transmission Control Protocol (TCP) and/or TCP-like protocols and/or other protocols, characterized in that they can be fully implemented directly through TCP/protocol software modifications alone, without requiring any other changes/reconfigurations of any other network components at all, and enable immediately ready networks capable of PSTN-quality guaranteed-service transmission without a single packet ever being dropped due to congestion; the methods avoid and/or prevent and/or recover from network congestion by a full or partial "pause"/"halt" in the sender's data transmissions when congestion events are detected, such as congestion packet drops and/or the round-trip time (RTT) of returning acknowledgements (ACKs) / one-way trip time (OTT) coming close to or exceeding a certain threshold value, e.g. the known decongested RTT/OTT value of the flow's path or its best last available estimate min(RTT)/min(OTT).
2. Methods for improving TCP and/or TCP-like protocols and/or other protocols, characterized in that they can be fully implemented directly through TCP/protocol software modifications alone, without requiring any other changes/reconfigurations of any other network components at all, and can enable immediately ready networks capable of PSTN-quality guaranteed-service transmission without a single packet ever being dropped due to congestion; the methods comprise any combination of features from (a) to (c): (a) making good use of the new realization/techniques that the congestion window CWND and/or "effective window" of the TCP sliding-window mechanism does not need to be reduced in size to avoid and/or prevent and/or recover from congestion; (b) congestion is avoided and/or prevented and/or recovered from by a full or partial "pause"/"halt" in the sender's data transmissions when congestion events are detected, such as congestion packet drops and/or the round-trip time RTT of returning ACKs / one-way trip time OTT coming close to or exceeding a certain threshold value, e.g. the known decongested RTT/OTT value of the flow's path or its best last available estimate min(RTT)/min(OTT); (c) instead of, or in exchange for, or in combination with (b) above, the congestion-window CWND value and/or "effective window" of the TCP sliding-window mechanism is reduced to an algorithmically derived value dependent at least in part on the last returned round-trip-time RTT / one-way-trip-time OTT value when congestion is detected, and/or the known decongested round-trip time RTT / one-way trip time OTT of the particular flow's path or its best last available estimate min(RTT)/min(OTT), and/or the largest last observed round-trip time max(RTT) / one-way trip time max(OTT).
3. Methods characterized in that they are for virtually congestion-free guaranteed-service-capable data communications in networks / the Internet / Internet subsets / proprietary Internet segments / WANs / LANs [hereinafter referred to as the network], with any combination/set of features (a) to (f): (a) where all packets/data units sent from a source within the network arriving at a destination within the network all arrive without a packet being dropped due to network congestion; (b) applying only to all packets/data units requiring guaranteed-service capability; (c) where packet/data-unit traffics are intercepted and processed before being forwarded onwards; (d) where originating source traffics are intercepted, processed and forwarded onwards, and/or packet/data-unit traffics are only intercepted, processed and forwarded onwards at the originating source; (e) where the existing TCP/IP stack at the sender and/or receiving destination is modified to achieve the same end-to-end performance results between any pair of source-destination nodes within the network, without requiring the use of existing QoS/MPLS techniques, without requiring any of the switch/router software within the network to be modified or to contribute to achieving the end-to-end performance results, and without requiring the provision of unlimited bandwidth at each and every inter-node link within the network; (f) where the traffics in the network mainly comprise TCP traffic, and other traffic types such as UDP/ICMP etc. do not exceed, or the applications generating other traffic types are arranged so as not to exceed, the full available bandwidth of any of the inter-node links within the network at any time; where, if other traffic types such as UDP/ICMP etc. exceed the full available bandwidth of any of the inter-node links within the network at any time, only the traffics of the source-destination node pairs traversing the inter-node links so affected within the network will not necessarily be capable of virtually congestion-free guaranteed service during that time, and/or all packets/data units sent from a source within the network arriving at a destination within the network will not necessarily all arrive, i.e. packets may be dropped due to the network congestion.
4. Methods according to any of the preceding claims 1-3, characterized in that the protocol improvements/modifications are made in the sender TCP.
5. Methods according to any of the preceding claims 1-3, characterized in that the protocol improvements/modifications are made in the receiver-side TCP.
6. Methods according to any of the preceding claims 1-3, characterized in that the protocol improvements/modifications are made at the network's switch/router nodes.
7. Methods characterized in that the protocol improvements/modifications are made in any combination of locations as specified in any of the preceding claims 4-6.
8. Methods characterized in that the protocol improvements/modifications are made in any combination of locations as specified in any of the preceding claims 4-6, in which the existing RED "Random Early Detection" and/or ECN "Explicit Congestion Notification" mechanisms are modified/adapted to give effect to that described in any of the preceding claims 1-7.
9. Methods according to any of the preceding claims 1-8, or independently, characterized in that the network's switches/routers are adjusted in their configurations or settings or operations, such as for example buffer-size settings, to give effect to that described in any of the preceding claims 1-8.
10. Methods according to any of the preceding claims 1-9, characterized in that: the RFCs of existing protocols are modified such that the sender's CWND value is never reduced or collapsed, except to temporarily effect the "pause"/"halt" of the sender's data transmissions upon detected congestion (for example by temporarily setting the sender's CWND = 1 * MSS during the "pause"/"halt", and after the "pause"/"halt" finishes then restoring the sender's CWND value to e.g. the CWND value existing before the "pause"/"halt" or some algorithmically derived value); the "pause"/"halt" interval can be set to e.g. an arbitrary 300 ms, or algorithmically derived such as the minimum of (last RTT of the returning ACK packet triggering the 3rd-DUP-ACK fast retransmit, or last RTT of the returning ACK packet when the RTO timed out, minus min(RTT); 300 ms; max(RTT)); and/or the RFCs of the existing protocols are modified such that SSThresh is now set instead to the CWND value existing before the congestion detection that triggers the "pause"/"halt", so that subsequent CWND increments will only be linear additive beyond that CWND value.
11. Methods according to claim 10 above, characterized in that if the congestion detection is due to non-congestion drops, e.g. physical transmission errors or BER, i.e. not due to congestion packet drops, then the "pause"/"halt" countdown interval will be set to "0" instead, i.e. no actual "pause"/"halt" of the data transmission will begin, noting also that any current "pause"/"halt" in progress will be allowed to progress normally through its countdown; a congestion detection can be attributed to non-congestion reasons if, for example, the RTT of the returning ACK when the 3rd DUP ACK triggers the fast retransmit, or the RTT of the last returning ACK when the RTO timed out, minus min(RTT) < e.g. 200 ms.
  12. Methods according to the preceding claims 10-11, characterized in that where a "pause"/"halt" is already in progress, a subsequent "real" congestion event indication will now extend the current "pause"/"halt" interval, simply by setting/overwriting the present "pause"/"halt" countdown to a new value such as, for example, minimum(last RTT of the returning ACK packet that triggers fast retransmit upon the 3rd DUP ACK, or last RTT of the returning ACK packet upon the RTO timeout; 300 ms; max(RTT)).
  13. Methods according to any of the preceding claims 1-12, characterized in that any, all, or almost all of the routers and switches at a node in the network whose software is to be modified are updated to immediately generate a total of 3 DUP ACKs to the traversing flows' sources, to signal the sources to reduce their transmission rates when the node begins buffering traversing TCP packets (i.e. the traversed forwarding link is now 100% utilized and the node starts buffering the aggregate traversing TCP sources' packets); the generation of the 3 DUP ACKs can alternatively be triggered, for example, when the forwarding link reaches a specified utilization level, e.g. 95%/98% ... etc., or some other specified trigger condition.
  14. Methods according to any of the preceding claims 1, 2, 1, 9-13, characterized in that: existing RED and ECN can similarly have their algorithms modified as summarized in the principles and schemes contained in any of the preceding claims, enabling real-time guaranteed-service capable networks (i.e. without drops due to congestion and/or with much smaller buffering delays).
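The sender-side "pause"/"halt" behaviour of claim 10 can be sketched as a minimal model (illustrative only; the class and method names such as `PausingSender` are hypothetical, not taken from the patent):

```python
# Minimal model of the claim-10 sender: on congestion detection CWND is
# not halved; sending is "paused" by temporarily setting CWND = 1 * MSS,
# and the pre-pause CWND is restored when the countdown expires. SSThresh
# is set to the pre-congestion CWND so later growth is linear additive.

MSS = 1460  # maximum segment size in bytes (typical Ethernet value)

class PausingSender:
    def __init__(self, cwnd):
        self.cwnd = cwnd            # congestion window, bytes
        self.ssthresh = None
        self._saved_cwnd = None
        self.pause_remaining_ms = 0

    @staticmethod
    def pause_interval_ms(last_rtt_ms, max_rtt_ms):
        # minimum(last RTT of the triggering ACK, 300 ms, max(RTT))
        return min(last_rtt_ms, 300, max_rtt_ms)

    def on_congestion_detected(self, last_rtt_ms, max_rtt_ms):
        self._saved_cwnd = self.cwnd
        self.ssthresh = self.cwnd   # growth beyond this point is additive
        self.cwnd = 1 * MSS         # "pause": effectively halts sending
        self.pause_remaining_ms = self.pause_interval_ms(last_rtt_ms, max_rtt_ms)

    def on_pause_expired(self):
        self.cwnd = self._saved_cwnd   # restore the pre-pause CWND
        self.pause_remaining_ms = 0
```

Note that, unlike standard TCP congestion avoidance, the window is restored in full after the pause rather than being cut, which is the distinguishing feature the claim recites.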
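Claim 11's classification of a drop as non-congestion (e.g. a physical-error/BER loss) when the triggering ACK's RTT barely exceeds the minimum observed RTT might be modelled as follows (the 200 ms threshold comes from the claim; the function name is illustrative):

```python
def pause_interval_for_drop(last_rtt_ms, min_rtt_ms, max_rtt_ms,
                            threshold_ms=200):
    """Return the pause countdown in ms for a detected drop (claim 11).

    If last RTT - min(RTT) < threshold (e.g. 200 ms), the bottleneck
    queue was near-empty, so the drop is attributed to non-congestion
    causes (physical transmission errors / BER) and no pause is started.
    """
    if last_rtt_ms - min_rtt_ms < threshold_ms:
        return 0  # non-congestion drop: countdown set to "0", no pause
    return min(last_rtt_ms, 300, max_rtt_ms)  # congestion drop: pause
```

The rationale is that a loss observed while RTT is near its floor cannot have been caused by a full queue, so halting transmission would only waste throughput.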
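The extension rule of claim 12 overwrites, rather than adds to, a countdown already in progress; a sketch under the same illustrative naming:

```python
class PauseCountdown:
    """Claim-12 countdown: a fresh 'real' congestion event overwrites the
    remaining pause interval instead of accumulating on top of it."""

    def __init__(self):
        self.remaining_ms = 0

    def on_real_congestion_event(self, last_rtt_ms, max_rtt_ms):
        # Overwrite the present countdown with the new interval:
        # minimum(last RTT of triggering ACK, 300 ms, max(RTT))
        self.remaining_ms = min(last_rtt_ms, 300, max_rtt_ms)

    def tick(self, elapsed_ms):
        self.remaining_ms = max(0, self.remaining_ms - elapsed_ms)
```

Overwriting bounds the total pause to a single fresh interval, so repeated congestion indications cannot stall the sender indefinitely.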
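The router-side trigger of claim 13 — spoofing a triple duplicate ACK toward a source once the forwarding link starts buffering, or once utilization crosses a configured level — can be summarized as (thresholds and names are illustrative):

```python
def dup_acks_to_generate(link_utilization, queued_packets, threshold=1.0):
    """Claim-13 trigger: emit 3 DUP ACKs toward a flow's source when the
    node begins buffering traversing TCP packets (link 100% utilized), or
    alternatively when utilization reaches a configured level (e.g. 0.95).
    """
    if queued_packets > 0 or link_utilization >= threshold:
        return 3  # spoofed triple duplicate ACK -> source fast-retransmits
    return 0
```

Three duplicate ACKs is the standard fast-retransmit trigger, so an unmodified TCP source reduces its rate without any end-host software change — which is the point of the claim.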
MX2007006395A 2004-11-29 2005-11-29 Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet nextgentcp (square wave form) tcp friendly san. MX2007006395A (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
GB0426176A GB0426176D0 (en) 2004-11-29 2004-11-29 Immediate ready implementation of virtually congestion free guaranteed service capable network
GB0501954A GB0501954D0 (en) 2005-01-31 2005-01-31 Immediate ready implementation of virtually congestion free guaranteed service capable network: inter-packets-intervals
GB0504782A GB0504782D0 (en) 2005-03-08 2005-03-08 Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet NextGenTCP
GB0509444A GB0509444D0 (en) 2005-03-08 2005-05-09 Immediate ready implementation of virtually congestion free guaranteed service capable network:external internet nextgentcp (square wave form)
GB0512221A GB0512221D0 (en) 2005-03-08 2005-06-15 Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet nextgen TCP (square wave form) TCP friendly
GB0520706A GB0520706D0 (en) 2005-03-08 2005-10-12 Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet nextgenTCP (square wave form) TCP friendly
PCT/IB2005/003580 WO2006056880A2 (en) 2004-11-29 2005-11-29 Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet nextgentcp (square wave form) tcp friendly san

Publications (1)

Publication Number Publication Date
MX2007006395A true MX2007006395A (en) 2007-10-17

Family

ID=39787888

Family Applications (1)

Application Number Title Priority Date Filing Date
MX2007006395A MX2007006395A (en) 2004-11-29 2005-11-29 Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet nextgentcp (square wave form) tcp friendly san.

Country Status (5)

Country Link
JP (1) JP2008536339A (en)
BR (1) BRPI0518691A2 (en)
EA (1) EA200701168A1 (en)
IL (1) IL183431A0 (en)
MX (1) MX2007006395A (en)

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
JP6409558B2 (en) 2014-12-19 2018-10-24 富士通株式会社 Communication device, relay device, and communication control method

Also Published As

Publication number Publication date
EA200701168A1 (en) 2007-12-28
IL183431A0 (en) 2007-09-20
BRPI0518691A2 (en) 2008-12-02
JP2008536339A (en) 2008-09-04

Similar Documents

Publication Publication Date Title
US20080037420A1 (en) Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet nextgentcp (square waveform) TCP friendly san
US8462624B2 (en) Congestion management over lossy network connections
Balakrishnan et al. How network asymmetry affects TCP
US20100020689A1 (en) Immediate ready implementation of virtually congestion free guaranteed service capable network : nextgentcp/ftp/udp intermediate buffer cyclical sack re-use
AU2005308530A1 (en) Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet NextGenTCP (square wave form) TCP friendly san
EP1955460B1 (en) Transmission control protocol (tcp) congestion control using transmission delay components
US20070008884A1 (en) Immediate ready implementation of virtually congestion free guarantedd service capable network
EP2148479A1 (en) Bulk data transfer
Ren et al. An explicit congestion control algorithm for named data networking
Natarajan et al. Non-renegable selective acknowledgments (NR-SACKs) for SCTP
US20090316579A1 (en) Immediate Ready Implementation of Virtually Congestion Free Guaranteed Service Capable Network: External Internet Nextgentcp Nextgenftp Nextgenudps
MX2007006395A (en) Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet nextgentcp (square wave form) tcp friendly san.
AU2014200413B2 (en) Bulk data transfer
Sathiaseelan et al. Reorder notifying TCP (RN‐TCP) with explicit packet drop notification (EPDN)
Wu et al. ACK delay control for improving TCP throughput over satellite links
Venkataraman et al. A priority-layered approach to transport for high bandwidth-delay product networks
Barakat et al. On ACK filtering on a slow reverse channel
Buntinas Congestion control schemes for tcp/ip networks
Mukherjee Analysis of error control and congestion control protocols
Sathiaseelan et al. Robust TCP (TCP-R) with explicit packet drop notification (EPDN) for satellite networks
Premalatha et al. Mitigating congestion in wireless networks by using TCP variants
Kadhum et al. A study of ecn effects on long-lived tcp connections using red and drop tail gateway mechanisms
Oluwatope et al. Available Bandwidth Based Congestion Avoidance Scheme for TCP: Modeling and Simulation

Legal Events

Date Code Title Description
FA Abandonment or withdrawal