EP1829321A2 - Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet NextGenTCP (square waveform) TCP friendly SAN - Google Patents

Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet NextGenTCP (square waveform) TCP friendly SAN

Info

Publication number
EP1829321A2
Authority
EP
European Patent Office
Prior art keywords
tcp
packet
packets
ack
sender
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP05806538A
Other languages
German (de)
English (en)
Inventor
Bob Tang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB0426176A external-priority patent/GB0426176D0/en
Priority claimed from GB0501954A external-priority patent/GB0501954D0/en
Priority claimed from GB0504782A external-priority patent/GB0504782D0/en
Priority claimed from GB0512221A external-priority patent/GB0512221D0/en
Priority claimed from GB0520706A external-priority patent/GB0520706D0/en
Application filed by Individual filed Critical Individual
Publication of EP1829321A2

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/11 Identifying congestion
    • H04L47/19 Flow control; Congestion control at layers above the network layer
    • H04L47/193 Flow control; Congestion control at layers above the network layer at the transport layer, e.g. TCP related
    • H04L47/25 Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions
    • H04L47/28 Flow control; Congestion control in relation to timing considerations
    • H04L47/283 Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/163 In-band adaptation of TCP data exchange; In-band control procedures
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W80/00 Wireless network protocols or protocol adaptations to wireless operation
    • H04W80/06 Transport layer protocols, e.g. TCP [Transport Control Protocol] over wireless

Definitions

  • the router will thus need to examine (and buffer/queue) each arriving data packet & expend CPU processing time to examine any of the above various fields (eg the QoS priority source IP addresses table to be checked against may alone amount to several tens of thousands of entries).
  • the router manufacturer's specified throughput capacity (for forwarding normal data packets) may not be achieved under heavy QoS data packets load, and some QoS packets will suffer severe delays or be dropped even though the total data packets load has not exceeded the link bandwidth or the router manufacturer's specified normal data packets throughput capacity.
  • the lack of interoperable standards means that the promised ability of some IP technologies to support these QoS value- added services is not yet fully realised.
  • the TCP/IP stack is modified so that :
  • simultaneous RTO rates decrease and packet retransmission upon RTO timeout events takes the form of complete 'pause' in packet/ data units forwarding and packet retransmission for the particular source - destination TCP flow which has RTO TimedOut, but allowing 1 or a defined number of packets/ data units of the particular TCP flow (which may be RTO packets/ data units ) to be forwarded onwards for each complete pause interval during the ' pause/ extended pause' period
  • simultaneous RTO rate decrease and packet retransmission interval for a source - destination nodes pair where acknowledgement for the corresponding packet/ data unit sent has still not been received back from destination receiving TCP/IP stack, before ' pause ' is effected, is set to be : .
  • TCP/IP modifications may be implemented incrementally by initial small minority of users and may not necessarily have any significant adverse performance effects for the modified ' pause ' TCP adopters, further the packets/ data units sent using the modified ' pause ' TCP/IP will only rarely ever be dropped by the switches/ routers along the route, and can be fine tuned/ made to not ever have a packet/ data unit be dropped .
  • once the modifications become adopted by the majority or universally, the existing Internet will attain virtually congestion free guaranteed service capability, and/or without packet drops along the route by the switches/ routers due to congestion buffer overflows.
  • timeout interval is set to same s seconds or less (which may be within audio-visual tolerance or http tolerance period )
  • any packet/ data unit sent from source's modified TCP/IP will not ever be dropped due to congestions buffer overflows at intervening switches/ routers and will all arrive in very worst case within time period equivalent to s seconds * number of nodes traversed, or sum of all intervening nodes' buffer size equivalents in seconds, whichever is greater ( preferably this is, or could be made to be, within the required defined tolerance period ).
  • the intervening nodes' switches/ routers buffer sizes are all at least equal or greater than the equivalent RTO Timeout or decoupled rates decrease timeout interval settings of the originating sender source's/ sources' modified TCP/IP stack.
  • the originating sender source TCP/IP stack will RTO Timeout or decoupled rates decrease timeout when the cumulative intervening nodes' buffer delays added up equal or more than the RTO Timeout interval or decoupled rates decrease ( in form of ' pause ' here ) Timeout interval of the originating sender source TCP/IP stack , and this RTO Timeout or decoupled rates decrease Timeout interval value/s could be set/ made to be within the required defined perception tolerance interval.
  • the originating sender source TCP/IP stack will alternate between ' pause ' and normal packets transmission phase each of equal durations -> ie the originating sender source TCP/IP stack would only be ' halving ' its transmit rates over time at worst, during ' pause ' it sends almost nothing but once resumed when pause ceases it sends at full rates permitted under sliding windows mechanism
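  • A minimal illustrative sketch (not part of the patent text; flow, oldest_unacked_age() and within_window() are hypothetical stand-ins for the sending stack's state) of the alternating ' pause ' / normal-transmission behaviour described above, in which a decoupled rates-decrease timeout triggers a complete ' pause ' while still letting a defined number of packets through per pause interval:

        import time

        PAUSE_INTERVAL = 0.3       # assumed 's seconds' timeout / pause duration
        PACKETS_PER_PAUSE = 1      # packets still allowed through during each 'pause'

        def sender_loop(flow, oldest_unacked_age, within_window):
            # Only the control flow matters here; the callbacks are assumed helpers.
            while flow.has_data():
                if oldest_unacked_age() >= PAUSE_INTERVAL:
                    # decoupled rates decrease: 'pause' instead of halving CWND,
                    # but still forward 1 (or a defined number of) packet/s,
                    # which may be the RTO retransmission packet/s
                    for _ in range(PACKETS_PER_PAUSE):
                        flow.send_one()
                    time.sleep(PAUSE_INTERVAL)          # 'pause' phase
                else:
                    # normal phase: send at the full rate permitted by the
                    # sliding window mechanism
                    while within_window() and flow.has_data():
                        flow.send_one()
                    time.sleep(0.001)                   # yield briefly while window is full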
  • External Internet Nodes (which could also be applicable to Internal network nodes )
  • the same decoupled ' pause ' / transmit rate decrement & actual packet retransmission timeouts mechanism (ACK Timeout & packet retransmission Timeout ) applied to guaranteed service Internet subset/ WAN/ LAN , could be similarly applied to external nodes on the external Internet cloud/ external WAN/ external LAN .
  • the uncongested RTTest, ie a variable holding the latest smallest minimum time period observed so far for a corresponding returning ACK to be received
  • this uncongested RTTest serves as the most recent estimate of the uncongested RTT value between source & destination ( better still where the uncongested RTT between the source & external Internet node is actually known ) .
  • Monitor Software/ IP forwarding module/ Proxy TCP could be implemented via existing rates shaping/ rates throttle techniques, OR by implementing another Window size/ Congestion Window size mechanism for each TCP flow within the Monitor Software/ IP forwarding module/ Proxy TCP which simply mirrors the most recent Effective Window Size value for the particular TCP flow ( and/or suspends operations of this mechanism ), BUT stops mirroring the most recent Effective Window Size value ( ie starts operations of this mechanism ) when / as long as the particular flow's most recent received ACK's RTT * a continues to be > uncongested RTTest : INSTEAD, during this time, the Monitor Software's Window size/ Congestion Window size value for this particular flow would be decreased to m% eg 95% of the flow's most recent mirrored derived/ computed current Effective Window size, ie the lesser of the Window size/ Advertised Window size/ Congestion Window size values ( NOTE above
  • effective window size = min ( Window size, Congestion Window size, Receiver advertised Window size ) .
  • Another example could similarly derive the Sender TCP source's current effective Window size/ current congestion window size by monitoring the total bytes forwarded by the Monitor Software within an RTT interval.
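  • A minimal illustrative sketch (an assumption, not the patent's code; the default values of a and m are placeholders) of the Monitor Software window-mirroring logic just described: the per-flow window simply mirrors the derived effective Window size, but is clamped to m% (eg 95%) of it whenever the latest ACK's RTT * a exceeds the uncongested RTTest:

        class FlowWindowMonitor:
            def __init__(self, a=0.9, m=0.95):
                self.a = a              # tolerance factor applied to each ACK's RTT (assumed)
                self.m = m              # decrement fraction, eg 95%
                self.rtt_est = None     # uncongested RTTest: smallest RTT seen so far
                self.window = None      # Monitor Software's window for this flow

            def on_ack(self, rtt, window_size, cwnd, adv_window):
                if self.rtt_est is None or rtt < self.rtt_est:
                    self.rtt_est = rtt                          # update uncongested RTTest
                effective = min(window_size, cwnd, adv_window)  # current effective window
                if rtt * self.a > self.rtt_est:
                    # buffering/congestion indicated: stop mirroring and clamp to m%
                    # of the most recently derived effective window size
                    self.window = self.m * effective
                else:
                    # no congestion indication: simply mirror the effective window
                    self.window = effective
                return self.window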
  • Monitor Software may effect ' pause ' ( &/or allowing one or a number of packets to be forwarded during this pause interval ) instead :
  • This ' pause ' interval may not even need to be evenly spaced apart periodically, and/ or each ' pause ' intervals may not even need to be of same pause durations.
  • EXAMPLE : were there in total 5% less time to transmit due to ' pause/s ', the bandwidth delay product of the source - destination would now be reduced to 0.95 of its existing value. This is because there would now be 5% fewer non-overlapping RTT intervals within eg 1 sec in which to transmit up to a total effective Window size worth of data bytes per non-overlapping RTT interval.
  • the ' pause ' interval duration should preferably be set at least equivalent to a minimum of uncongested RTTest, but could be made smaller if required : example in VoIP transmissions sending one sampled packet every 20 ms ( assumed much smaller than uncongested RTTest ) we can make the single ' pause ' interval duration of 50ms within eg 1 sec ( ie effecting rates decrement equivalent to 5% effective Window size decrement ) into 5 evenly spaced periodic ' pauses ' within eg 1 sec, each of the ' pauses ' here to be of duration 10ms ( so as not to introduce lengthy delay in time critical VoIP packets forwarding ) , or 10 evenly spaced periodic ' pauses ' within eg 1 sec, each of the ' pauses ' here to be of duration 5ms....& so forth.
  • the Sender TCP source code may similarly implement the current effective Window size settings entirely utilising ' pause ' methods, totally replacing the need for Congestion Window size settings : in these modified TCPs the current effective Window size at any time would be [ min ( Window size, Receiver advertised Window size ) * ( 1 - ( p * I ) / 1 sec ) ]
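  • A short worked example (a sketch under the assumption that p denotes the number of evenly spaced ' pauses ' per second and I the duration of each ' pause ' in seconds) of the pause-based effective Window size expression above:

        def effective_window(window_size, adv_window, p, pause_interval):
            # fraction of each second left for transmission after p pauses of
            # pause_interval seconds each
            duty_cycle = 1.0 - p * pause_interval
            return min(window_size, adv_window) * duty_cycle

        # 5 pauses of 10 ms, or 10 pauses of 5 ms, within 1 sec both give the same
        # 5% rates decrement as a single 50 ms pause per second:
        assert abs(effective_window(100, 100, 5, 0.010) - 95.0) < 1e-9
        assert abs(effective_window(100, 100, 10, 0.005) - 95.0) < 1e-9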
  • Monitor Software's Window size/ Congestion Window size value may now be further optionally repeatedly decreased to eg 90/ 95% ( L% or m% ) of the present already decreased to L%/ m% Monitor Software's Window size/ Congestion Window size value, where b denotes a more severe level of congestion than a, or even packet drops ; either or both a & b could be such that they very likely signify packet drops events.
  • Monitor Software may optionally delay the above operations by t sec, eg 1 sec, so that all existing unmodified TCPs will synchronise in rates decrement ; AND/ OR not increment the Window size/ Congestion Window size for a certain period based on some devised algorithm when certain conditions hold, eg as long as the flow's most recent/ subsequent received ACK's RTT * a continues to be > uncongested RTTest.
  • When using Monitor Software, the TCP of course continues to do its own Slow Start/ Congestion Avoidance/ coupled RTO...etc. Monitor Software could predict/ detect a TCP RTO event, eg when a sent segment's ACK has yet to be received back after a very long period eg 1 sec...etc , or from a sudden halving of the flow's send rates...etc.
  • Monitor Software may further choose to decrement its mirrored Window size/ Congestion window size value to eg 90% ( n% ) of existing, AND/OR just not increment its own Effective Window size/ Congestion Window size for the particular flow for some period of time derived based on some devised algorithms eg as long as the most recent/ subsequent received ACK's RTT * a continue to be > uncongested RTTest .
  • Monitor Software could additionally implement its own packet retransmission timeout as well, this requires the Monitor Software to always retain a dynamic Window's worth of copies of sent packets & similar retransmission software module as in TCP , hence Monitor Software could perform above paragraph functions much quicker not needing to wait for TCP RTO indications.
  • Monitor Software may optionally delay above operations by t sec, eg 1 sec so that all existing unmodified TCPs will synchronise in various rates decrement.
  • the modified TCP ( or even modified RTP over UDP/ modified UDP ...etc ) flow here does not need to halve rates , since they do not have to increment rates when congested ( during buffering events) to cause packet drops, & the eg 10% / 5% decrement in transmit rates ensures new flows non-starvations ( any other existing unmodified TCP flows would ensure 50% decrement , but they always would strive to increment rates to again cause packet drops ). New flows would build up their fair share over time. This also nicely preserves low latencies...etc of existing established flows ( suitable for VoIP/ Multimedia ), & reflects existing traditional PSTN calls admissions schedules.
  • Modified TCPs/ modified RTP over UDP/ modified UDP here retains their established share , or most of their established share, of link's bandwidth, but do not cause further additional congestions/ packets drops.
  • Modified TCP / modified RTP over UDP/ modified UDP here may even employ quick sudden burst of sufficient extra traffics , eg when congestion level close to packets dropping, to ensure all or selective existing flows traversing the particular congested link/s gets packets drop notifications to reduce transmit rates : existing unmodified TCPs would halve their rates & takes a long time to build back up to previous congestion causing transmit rates, while modified TCPs would retain most of all their established share of bandwidths along the link/s .
  • Modified Sender TCP sources would achieve higher throughputs , retain their established share of bottleneck link's bandwidths upon bottleneck link's congestion causing drops ( or just physical transmission errors causing packet drops ) while preserving fairness among flows ( cf existing TCPs which lose half their established bandwidths on a single packet drops ), and on their own will not cause any packet drops.
  • This modified sender source TCP overcomes existing TCP rates recovery problems, caused by just a single packet drop, in high bandwidth long latencies networks.
  • Sender TCP Source's traffics originate from external Internet nodes/ WAN / LAN and assuming the external originating traffics are time stamped ( enabling Receiver TCP to derive the path transmissions time or one-way transmission delay from source to destination )
  • the above modified Sender Source TCP methods could be adapted to act as Receiver based methods :
  • the timestamps of the originating source need not be accurately synchronised to the receiver. Receiver could ignore the timestamp drifts of the source system clock here.
  • the OTTest ( most current updated estimate of the one way transmission latency of received packets from source to destination, being the lowest value derived so far, equivalent to current Receiver system time when packet received - received packet's Sender timestamp ) is derived at the receiver. Any increment in OTT observed in subsequent received packets will indicate incipient onset of congestions along the path ( ie at least one forwarding link along the path is now fully utilised 100% and packets start being buffered along the path ), and would now signify that the Sender TCP
  • Source should now trigger the modified rates decrement or ' pause ' mechanism.
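  • A minimal receiver-side sketch (an illustration, not the patent's code; the 0.05 s congestion margin is an assumed parameter) of the OTTest tracking just described. The sender's clock offset cancels out because only increases in OTT over the minimum observed so far matter:

        class OttMonitor:
            def __init__(self, congestion_margin=0.05):
                self.ott_min = None                 # OTTest: lowest OTT derived so far
                self.margin = congestion_margin     # assumed tolerance, in seconds

            def on_packet(self, receive_time, sender_timestamp):
                ott = receive_time - sender_timestamp   # includes unknown clock offset
                if self.ott_min is None or ott < self.ott_min:
                    self.ott_min = ott
                # any increase over the minimum indicates incipient buffering along
                # the path; True means: signal the sender to rates-decrement or 'pause'
                return (ott - self.ott_min) > self.margin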
  • Receiver could signal this to Sender TCP :
  • Effective Window size = min ( Window size, Congestion Window size, Receiver Window size ), eg to 95% of current derived/ estimated effective Window size of Sender TCP source.
  • the Sender TCP Source would not continuously increment the Effective Window size for ACKs received within each RTT, as long as modified Receiver TCP keeps ACKing with same advertised decremented current derived/estimated effective Window size.
  • the modified Receiver TCP would negotiate to have timestamp option with the Sender TCP Source.
  • This Receiver based modified TCP/ modified Monitor Software does not require Sender TCP to be modified.
  • both Sender and Receiver TCPs are modified, together with timestamp options ; this would enable better precise knowledge of OTTs / OTT variations in both directions ( both modified TCPs/ modified Monitor Software could pass the knowledge of OTTs in their direction to each other ), thus modified TCPs/ modified Monitor Software could now provide better control using OTTs instead of RTT , eg if the sent segment's OTT indicates no congestion but the returning ACK's OTT indicates congestion, there is no need to rates decrement/ ' pause ' even if their RTT as used in the earlier RTT based method would have timed out.
  • RTT based modified TCPs when implemented at Sender only , used together with timestamp option, would enable the Sender to similarly be in possession of the returning ACK's OTTest and/ or OTT variations to similarly provide better controls. It is noted that were the modified TCP techniques implemented at both ends of intercontinental submarine cables/ satellite links/ WAN links, this would increase bandwidth utilization and throughput of the transmission media for TCPs , in effect like a doubling of the physical link's physical bandwidths.
  • UDP traffics alone may exceed the link's physical bandwidth ; could have UDP sending sources reduce transmit rate ie resolution qualities &/or router/ switch nodes perform this resolution reduction process on all UDP flows ( eg sending only alternate packets of the flow & discarding the other alternate UDP packets, or combining two ( or several ) eg VoIP UDP packets' data into one packet of same size but of lower resolution quality )
  • nodes may ensure TCP non-complete starvation by guaranteeing minimum proportions of forwarding link's bandwidth for various UDP/ TCP...etc flows.
  • New flows UDPs ICMPs TCPs
  • new unmodified TCPs/ RTP over UDPs/ UDPs should now always have at least 5% non- starvation guaranteed bandwidth to grow at all time, as modified TCPs/ RTP over UDPs/ UDPs could eg all not increment transmit rate when link utilization exceeds eg 95%.
  • Pausing for interval x instead of Sliding Window/ Congestion Window Size decrement/ rates decrement , would give the fastest possible early clearing of congested buffers at the node , & helps keep buffer delays at the nodes along the path to the very minimum.
  • Buffer size requirements here are not a very relevant factor for consideration at all. Can conceivably keep all traffics to within/ not exceeding 100% of the available physical bandwidths at all times ( subject to very sudden burstiness which may need to be buffered ).
  • the source VoIPs/ Multimedia may now transmit at eg some percentage eg half the resolution quality & wait until the other traffics' growth now bringing link utilization back up to eg 95%/ 100%, to now sudden burst back to full resolution quality transmission &/or plus extra resolution eg 200% or more ( with extra redundant erasure codings...
  • VoIP / multimedia may even begin with higher resolutions transmission quality ( eg 200% of normal required resolutions, with redundant erasure codings...etc ).
  • Router Software may further be upgraded to permit authorised request to drop flow packets (eg 1 packet from each TCP flow to signify sender to rates decrement ), &/or to do this upon detection of eg 95%/ 100% link utilizations.
  • Above method may be used in conjunction with existing eg RIP/ BGP router table update packets...&/or similar techniques, to ensure minimum or no buffer delays at all nodes ; upgraded router software does the links preference routing table update to pre-empt eg exceeding 95%/ 100% of particular forwarding links... &/or propagates this throughout the network not just neighbouring routers ( but would need to be enhanced to allow more frequent real time speed updates )
  • Another next generation network design may be for router to signal neighbouring routers of particular forwarding link's eg 95%/ 100% utilization ( 100% utilization would indicate imminent onset of packets buffering ) and/or other configuration details such as links' raw bandwidths/ queueing policies/ buffer sizes...etc, for neighbouring router to not increase existing sending rates to this router/ or just this forwarding link, AND/OR per flow rates decrement/ rates shaping on the flows which traverses the notified router link by some percentages based on devised algorithms depending on updated informations or even some corresponding ' pause ' interval x before continue unrestricted sending rates for period y (limited in fact only by the link bandwidth between the routers ) .
  • the router may also modify setting the advertised Window size field in the ACKs returning to Sender TCP source to be zero for certain duration or certain duration periodically ( causing ' pause ' or periodic ' pause ' ), or even modify/ set the advertised Window field value to certain decremented percentage of derived/ estimated current effective Window size of Sender TCP source ( thus effecting rates limiting of source traffics ) .
  • the switch/ router on the Internet / Internet subset/ WAN/ LAN needs only maintain a table of all flows' source - destination addresses &/or ports together with their latest Seq Number &/or ACK Number fields ( &/or per flow forwarding rates along the link, current derived/ estimated per flow Effective Window sizes along the link...etc ) to enable the router to generate Advertised Window Size updates via ' pure ACKs ' &/or ' piggyback ACKs ' &/or ' replicated packets '...etc ( eg notifying source TCPs to ' pause ' via continuous advertised Receiver Window size of 0 for a certain period before reverting to the existing Receiver Window size value prior to the ' pause ' , or reduce rates via advertised Receiver Window size of decremented value based on derived/ estimated current source TCP Effective Window size ) .
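  • An illustrative sketch (field choices are assumptions and simplified; a real deployment would emit an actual TCP segment rather than a dictionary) of how such a per-flow table entry could be turned into a spoofed ' pure ACK ' window update towards the Sender TCP source, either advertising a window of 0 to ' pause ' it or a decremented fraction of its estimated current effective Window size:

        def make_window_update(flow_entry, pause=False, decrement=0.95):
            # flow_entry is assumed to hold the per-flow fields the text says the
            # router keeps: addresses & ports, the latest Seq/ACK numbers seen, and
            # an estimate of the source's current effective Window size.
            adv_window = 0 if pause else int(flow_entry["est_effective_window"] * decrement)
            return {
                "src_ip": flow_entry["dst_ip"],        # spoofed: appears to come from the receiver
                "dst_ip": flow_entry["src_ip"],
                "src_port": flow_entry["dst_port"],
                "dst_port": flow_entry["src_port"],
                "seq": flow_entry["last_receiver_seq"],    # copied from last genuine ACK seen
                "ack": flow_entry["last_receiver_ack"],    # next Seq expected from the sender
                "flags": "ACK",                            # pure ACK, no data payload
                "window": adv_window,
            }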
  • Neighbouring routers would reduce/ traffic shape packets destined to be forwarded along the notified next router's link, the neighbouring routers knowing certain packets' IP addresses are destined to be routed along the notified next router's link from Routing Table entries, RIP/ BGP updates, MIB exchanges...etc. For example, an already periodically paused flow at the neighbouring router preceding the notifying router ( rates controlled via periodic ' pauses ' ) would now further increase the affected flows' ' pause ' interval length &/or increase the number of ' pauses ' within the period.
  • the periodic pauses may cease or lessen in frequency/ individual pause interval , upon eg some defined period derived from devised algorithms eg when the notifying router now updates neighbouring routers indicating link utilizations which has fallen back down below certain percentage eg below 95%.
  • RED/ ECN mechanism could be modified to provide this functionality, ie instead of monitoring buffered packets & selectively dropping packets/ notifying senders, RED/ ECN may base policies on link utilizations eg when utilizations approach some percentage eg 95%... etc.
  • bottleneck link utilization estimation, available bottleneck bandwidth estimation, bottleneck throughput estimation, bottleneck link bandwidth capacity estimation techniques could be further incorporated into the earlier described rates decrement/ ' pause ' methods based on uncongested RTT/ RTTest/ RTTbase/ Receiver OTTest methods : here there would be plenty of time for these estimation techniques to be derived/ estimated with sufficiently good accuracy to further enhance the earlier described rates decrement/ ' pause ' methods based on uncongested RTT/ RTTest/ RTTbase/ Receiver OTTest methods.
  • Various further techniques to complement/ provide path's topology/ configurations may include SNMP/ RMON/ IPMON/ RIP/ BGP...etc.
  • periodic probes could be in the form of Window Update probes ( to query receiver Window Size, even though receiver has yet to advertise a 0 window size ) or similar probe packets ...or use actual data packets as periodic probes ( where available for transmission )...etc , or UDPs to destination with an unused port number ( to get a return msg destination port unreachable ), &/or plus timestamp options from all nodes. OR similarly TCP to destination with an unused port number ( the TCP packet may be a TCP SYNC to an unused port number )
  • bandwidth estimation techniques in conjunction : eg receiver processor delay, raw bandwidth, available bandwidth, buffer size, buffer congestion level, link utilisations
  • Receiver based OTTest need not deploy GPS synchronisation, just need uncongested OTTest or uncongested OTTbase or known uncongested OTT & OTT monitor variations ! ! !
  • Modified TCP/ modified Monitor Software when paused could optionally immediately generate and send ( despite ' pause ' ) a pure ACK carrying no data payload corresponding to every newly arrived data segments with ACK flag set ( ie piggyback ACK segments or pure ACKs, ignoring normal data segments which does not ACK anything ) from host source TCP which now needs to be buffered .
  • All generated pure ACK/s during this pause interval/ extended pause intervals, which is/are sent immediately, could have its/ their Seq Number field value set to be the very same Seq Number as that of the very 1st buffered data segment MINUS 1 ( which could be a normal data segment with or without ACK flag set, or a pure ACK segment ).
  • Modified TCPs/ modified Monitor Software may optionally enable segments with URGENT /PSH flags....etc to be immediately forwarded even during ' pause ' / extended ' pause '
  • Receiver based could distinguish between congestion loss & physical transmission error, & detect rates, OTT or OTTbase, onset of congestions separately in either direction much more accurately . Even better, sender receives ACK back with timestamp of when receiver first receives the packet, &/or when receiver last touched the packet ( &/or ACK ) sending back to sender ( eg IPMP ).
  • TCP Window size set such that eg 95% of available bandwidth or eg 95% of capacity is immediately utilised.
  • Note ACK Timeout ( &/ or actual packet retransmission Timeout value ) value may be dynamically derived, based on a devised algorithm for the purpose, from returning real time RTTs similar to the existing RTO estimation algorithm from historical RTTs
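  • A small sketch of deriving the ACK Timeout value dynamically from real-time RTT samples, using the standard smoothed-RTT / RTT-variance estimator that existing RTO derivation is based on (RFC 6298 style); the alpha/beta constants and the absence of a 1-second floor are assumptions consistent with the text:

        def make_ack_timeout_estimator(alpha=0.125, beta=0.25, floor=0.0):
            srtt, rttvar = None, None

            def update(rtt_sample):
                nonlocal srtt, rttvar
                if srtt is None:
                    srtt, rttvar = rtt_sample, rtt_sample / 2.0
                else:
                    rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt_sample)
                    srtt = (1 - alpha) * srtt + alpha * rtt_sample
                # note: unlike existing RFCs, no 1-second minimum floor is imposed here
                return max(floor, srtt + 4 * rttvar)

            return update

        ack_timeout = make_ack_timeout_estimator()
        for sample in (0.040, 0.042, 0.055, 0.050):     # example RTT samples in seconds
            timeout = ack_timeout(sample)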
  • DUP ACKs should not be delayed ; here we comply by already sending generated pure ACKs immediately for every buffered ACK packet or just their highest ACK No
  • Percentage rates decrement/ ' pause ' interval lengths may be adjusted depending on the size of the buffer delays experienced along the path eg OTT - OTTest ( or OTT - known uncongested OTT ) , or RTT - RTTest ( or RTT - known uncongested RTT )
  • sender TCP may infer if the 1 byte data generated pure ACK not
  • Receiver based Resends ACKs if ACKs not confirmed back received.
  • Can dynamically adjust Receiver Window size as % of estimated Sender's maximum actual transmitting Window size ( corresponding to the actual rate, could assume this actual transmitting Window size is equiv to total packets in flight ) during preceding RTT interval.
  • Future RFCs for TCP should have one extra ACKing ACK field ( ACKing the ACKs control feedback loop ) ; this completes the control loop ( ie existing TCPs are blind as to whether RTOs are due to data segment loss on the forwarding link or its corresponding ACK loss on the returning link ), improving both TCPs' knowledge of event states. Or Monitor Software may perform this ACKing the ACKs via ACK with Seq No ( replicated segments )...etc
  • sender & receiver could coordinate to pass one way transmission times, in both directions, to each other.
  • Receiver based Monitor Software could derive external Internet node's OWD ( One way delay ) from timestamp option requested at SYNC connection establishment.
  • Sender based Monitor Software could estimate OWD to remote receiver via IPMP, NTP...while receiver to Sender OWD via timestamp option.
  • OWD needs timestamps to derive, or IPMP / ICMP probes/ NTP ....etc.
  • Monitor Software at both ends : just timestamp the segment when received & when returning the ACK of the segment's Seq No ( these 2 timestamp values, coupled with the sending monitor's recording of the segment Seq No's SENT TIME kept in an event list, & the arrival time of the Seq No's ACK, provide all OWDs, end processing delays etc ).
  • ICMP is about the only packet with ready send, receive, return time stamps giving OWDs in both directions ; in WAN/ LAN/ small Internet subsets it traverses the same paths as TCP/ UDP in both directions. RFCs for TCP/ UDP should enable these timestamps.
  • Periodic ICMP probes could complement passive TCP RTT measurements .
  • IPMP provides similar timestamp capability & traverses the same paths as the sent TCP segments, & could be utilized as the probe packets sent with same IP addresses as the flow/s TCP IP addresses but with different port addresses.
  • the periodic probe packets may take the form of separate independent TCP or UDP or IPMP connection established between the two ends' modified TCP/ Monitor Software with same IP addresses as the flow/s TCP IP addresses but with different port addresses, and both ends' modified TCPs/ Monitor Software could now include timestamps of time when segment with the Seq Number first arrive and/or time when segment with the same Seq Number is ACKed & returned, enabling OWD measurements by both ends.
  • the data packets communications between the source sender and receiver could be subject to congestion packet drops beyond our control : eg http webpage download/ ftp from external Internet sites.
  • the Method/s here extend our modifications/ inventions to also be applicable where either one of the source sender or receiver ( or both ) resides at external Internet, BUT could also be applied where both resides within Internet subsets/ WAN/ LAN/ proprietary Internet as in various earlier described Methods in the description body.
  • Receiver TCP would still be in position to generate DUP ACKs to sender source TCP to trigger fast retransmit/ recovery which only halves the CWND instead, thus averting sender source TCP's RTO packet retransmissions timeout event which would cause sender source TCP re-entering ' slow start ' with CWND of 1 segment.
  • Scenario (A) above could be prevented by modifying sender source TCP so that eg IF the immediately next sent data packet's Acknowledgement is not received back after eg 300ms ( or user input value, or algorithmically derived value which may be based on RTTest(min) &/or OTTest(min) ...etc, 300ms was chosen as the example here as being larger than the Delayed Acknowledgement max period of 200ms ) of the immediately previous sent data packet's Acknowledgement which has been received back, or eg 300ms + latest RTTest elapsed since the immediately next sent data packet's Sent Time, whichever is the later ( ie we can now quite safely assume the immediately next sent packet was lost/ dropped or its Acknowledgement from the receiver back to sender source TCP was lost/ dropped ) , THEN [ hereinafter referred to as algorithm A ] ( Except where all sent data segments/ data packets have all already been returned Acknowledged back , ie latest sent ' largest '
  • the sender source TCP instead of entering into ' continuous pause ' upon initial elapsed 300ms, the sender source TCP only reduces its CWND to x % ( eg 95%, 90%, 50% ...which could be user input or based on some devised algorithms )
  • the sender source TCP instead of entering into ' continuous pause ' upon initial elapsed 300ms, the sender source TCP only ' pause ' for ' pause-interval ' which may be user input or derived from some devised algorithms ( eg pause-interval of 100ms would be equivalent to above Step 1 reducing CWND to 90% ) without changing the CWND size
  • Steps 1 & 2 : instead of entering into ' continuous pause ' upon initial 300ms elapsed, only immediately ' pause ' for an ' initial pause-interval ' only, which may be user input or derived from some algorithm , eg 500ms to ensure all the cumulative buffered packets delays built up along the router/ switches nodes traversed by packets from sender source TCP to receiver TCP would be cleared by this eg 500ms amount, reducing buffer latencies experienced by subsequently sent packets.
  • sender source TCP now instead transmits at rates permitted by the new CWND size during ' continuous pause ' or ' pause-interval ' or ' initial pause-interval ' , OR does not transmit any packet/s at all
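  • A minimal sketch (hypothetical function and parameter names; x, the 300 ms threshold and the 100 ms pause-interval are the example values from the text) of the ' algorithm A ' reaction above, returning an adjusted CWND and an optional pause-interval instead of re-entering slow start:

        def check_algorithm_a(now, prev_ack_time, next_sent_time, next_acked,
                              rtt_est, cwnd, x=0.90, threshold=0.300):
            if next_acked:
                return cwnd, 0.0                    # ACK arrived: nothing to do
            # 300 ms after the previous packet's ACK, or 300 ms + latest RTTest
            # after the next packet's send time, whichever is later
            deadline = max(prev_ack_time + threshold,
                           next_sent_time + threshold + rtt_est)
            if now >= deadline:
                # assume the packet (or its ACK) was lost: decrement CWND to x%
                # (Step 1) and/or 'pause' for a short pause-interval, eg 100 ms
                # (Step 2), rather than entering 'continuous pause' or slow start
                return cwnd * x, 0.100
            return cwnd, 0.0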
  • Timestamp options could enable OTTest information to be utilised in sender source TCP decisions, SACK option if used would reduce occurrences of DUP ACKs events.
  • Sender source TCP could be further modified as above to do away with requirement for re-entering ' slow start ' under any circumstances whether packet loss is due to congestion drops or physical transmission errors... etc, ie TCP could now be made to eg maintain transmit rate/ CWND to eg 90% of the transmit rate/ CWND ( or equivalent ' pause-interval ' of 100ms, without changing CWND ) previous to the RTO packet retransmissions timeout or DUP ACKs fast retransmit , instead of re-entering RTO ' slow start ', fast retransmit rates halving... etc.
  • the further modified TCP could react much quicker to congestion drops & react accordingly, eg including an ' initial pause-interval ' to clear cumulative buffered delays, cf existing RFCs' minimum RTO default lowest floor of 1 second.
  • modified Monitor Software/ modified proxy TCP/ modified IP Forwarder ...etc could keep copy of current window's worth of data segments/ data packets transmitted & perform the actual 3 DUP ACKs fast retransmit & RTO actual packet retransmit ( instead of TCP which now simply would not carry out any fast retransmit & RTO retransmit whatsoever at all ) eg when modified Monitor Software/ modified proxy TCP/ modified IP Forwarder ...etc realises particular data segment/ data packet sent has not been returned ACKed & TCP would soon perform RTO timeout, to then ' spoof ' the particular Acknowledgement for the particular ' soon late ' data segment/ data packet & perform the actual data segment/ data packet retransmissions
  • the modified TCP is installed at user local host PC only, and the remote sender source TCP such as http web servers/ ftp servers/ multimedia streaming servers have yet to implement the above modified TCP.
  • the modified local host PC's TCP would here need to act as Receiver based modified TCP, ie to influence the remote sender source TCP remotely.
  • Some of the ways local host TCP could influence the remote sender source TCP congestion controls/ avoidance are via sending receiver window size updates to remote sender source TCP, sending DUP ACKS to remote sender source TCP to fast retransmit/ recover averting RTO packet retransmissions timeout at the remote sender source TCP...etc
  • latest packet RECEIVED LOCAL SYSTEM TIME received from remote sender, pure ACK or regular data packet
  • latest receiver packet's advertised window size sent by local MSTCP to remote sender
  • latest receiver packet's ACK Number ie next expected Seq Number expected from remote sender sent by local MSTCP to remote sender ( requires per flow incoming & outgoing packets inspections , & we now should be able to immediately remove the per flow TCP table entry upon FIN/ FIN ACK, not just waiting for the usual 120 seconds inactivity )... etc
  • TCP uses a three-way handshaking procedure to set-up a connection.
  • the initiating side makes a note of Y and returns a segment with just the ACK flag set and an acknowledgement field of Y + 1.
  • SCENARIO B is taken care of by keeping sending the same 3 DUP ACKs every 100ms , UNTIL a next ACK or data packet is received from remote ( ie bottleneck now not dropping every remote sent packet ) : WHEREUPON we keep sending a single window size restoring packet every 100ms until any NEXT PACKET RECEIVED ( ie even if in the worst case all the window restore packets are dropped, 300ms later the process will repeat , again ensuring window ' pausing ' followed by window restore attempts )
  • Timestamp option for these flows during connection establishment ( can modify Sync packet ? or may need to set the PC registry so all flows in paragraphs 1, 2 above are also lumped with timestamp ? Windows Server 2003 only allows timestamp option if initiated by remote TCP !? )
  • OTTest stands for one way trip time estimate, ie the max & min OTT observed so far.
  • OTTest(max) & OTTest(min) is updated from every subsequent packets received.
  • customised TCP : If incoming packet's OTT - OTTest(min) > eg 100ms ( user input parameter ), THEN remote sender should ' pause ' ; customised TCP generates a 1 byte garbage (or no data) segment window size advertisement packet of eg 50 bytes ( not necessarily 0 , to allow remote sender TCP to reply/ pure ACK ), with Seq No set to receiver's last sent sequence no OR last received ACK No - 1 ( in case receiver does not send data segments to remote sender at all thus there is no receiver's last sent Seq No ).
  • Receiver continues sending the same generated window advertisement packet ( but the Seq No or last received ACK No - 1 may have changed ), UNTIL there is a reply confirmation received to one of these ' replicated packet window update ' packets thus signifying at least one of these window update packets has been received at the sender & its reply confirmation has now arrived ( either could be lost in either direction ), and whose OTT - OTTest(min) must be < eg 100ms ( we do not cease ' pause ' until no congestions ).
  • the ' pause ' may also be ceased upon any other packets eg regular data packets arriving within OTTest(min) + 100 ms. Whereupon receiver sends the same window update packet but with the window size field set to the value immediately prior to the ' pause ' ( this value is recorded prior to effecting the eg 50 bytes advertisement ).
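  • An illustrative receiver-side sketch (an assumption/simplification: it returns only the window size value to advertise, not the actual replicated window-update packets) of the ' pause ' logic in the last few points, advertising eg 50 bytes while OTT - OTTest(min) exceeds eg 100 ms and restoring the recorded pre-pause window afterwards:

        class ReceiverPauser:
            def __init__(self, threshold=0.100, pause_window=50):
                self.ott_min = None                 # OTTest(min): lowest OTT seen so far
                self.threshold = threshold          # eg 100 ms, user input parameter
                self.pause_window = pause_window    # eg 50 bytes advertised during 'pause'
                self.saved_window = None            # window size prior to the 'pause'

            def on_packet(self, ott, current_adv_window):
                if self.ott_min is None or ott < self.ott_min:
                    self.ott_min = ott
                if ott - self.ott_min > self.threshold:
                    if self.saved_window is None:
                        self.saved_window = current_adv_window  # record pre-pause window
                    return self.pause_window        # keep advertising eg 50 bytes
                if self.saved_window is not None:
                    restore, self.saved_window = self.saved_window, None
                    return restore                  # cease 'pause': restore the old window
                return current_adv_window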
  • Timestamp option is not necessary but useful to know the one way delay back, to better determine the cause of RTT timeout ( could be caused by reverse path congestion )
  • MSTCP : upon MSTCP originating packet/s with Seq No < last Seq No sent ( packet drops retransmission ) , MSTCP would enter slow start again : customised TCP would now spoof ACKs back to MSTCP for every packet originated by MSTCP for a period of eg 100ms. This would bring the congestion window back up to eg TCP window size. Any subsequent forwarded buffered packets drops could be fast retransmitted via receiver's 3 DUP ACKs received ( whereupon customised TCP may again spoof ACKs back )
  • latest packet RECEIVED LOCAL SYSTEM TIME pure ACK or regular data packet
  • latest receiver packet's advertised window size ; latest receiver packet's ACK Number ie next expected Seq Number ( requires per flow incoming & outgoing packets inspections , & we now should be able to immediately remove the per flow TCP table entry upon FIN/ FIN ACK, not just waiting for 120 seconds )
  • SCENARIO B is taken care of by keeping sending the same 3 DUP ACKs every 100ms , UNTIL ' ACKing the ACK ' is received, or a next regular data packet is received ( ie bottleneck now not dropping every remote sent packet ) : WHEREUPON we keep sending 3 DUP ACKs restoring advertised window size every 100ms until ' ACKing the ACK ' received
  • TCP will in this case retransmit ' beginning from the lowest unacked packets or the first unsent packet in current congestion window '.
  • window updates can simply repeat every 100ms ( instead of 3 * OTTest(min) in paragraph 4 ) UNTIL receiving any pure ACK or regular data packet ( receive time does not matter ).
  • congestion or packet drops indications could now instead be detected/ inferred by modified TCP/ modified Monitor Software/ modified proxy/ modified Port forwarder...etc by observing the delay between inter-packet-arrivals, eg in particular when the ' elapsed-time-interval ' between immediately successive packets exceeds a certain user input interval ( or one derived from some algorithm which may be based on RTTest, OTTest, RTTest(min), OTTest(min) ...etc ) since the last packet received from the remote sending source TCP or the remote receiver TCP ( whether pure ACK or regular data packet...etc ) ; a watchdog sketch follows after the next few points.
  • TCP connection between the two ends is symmetrical with each end capable of sending and receiving at the same time, and one end's sent data segments/ data packets & their corresponding return response ACKs from the other end [ hereinafter referred to as sub-flow A ] may be co-mingled with the other end's independently sent data segments/ data packets & their independent corresponding return response ACKs from the other end [ hereinafter referred to as sub-flow B ] : thus modified TCP/ modified Monitor Software/ modified proxy/ modified Port forwarder...etc when observing the delay between inter-packet-arrivals above should ' discern ' & separately observe the inter-packet-arrivals of sub-flow A &/or sub-flow B completely independently, so that when one end's ie sub-flow A's sent data segments/ data packets were dropped along the onwards path to the other end and thereby their corresponding return response ACKs will not be returned from the other end along the return path, independently the other end's ie sub-flow B's sent
  • Modified TCP/ modified Monitor Software/ modified proxy/ modified Port forwarder...etc on one end when acting as receiver would only observe the other end's own sub-flow B's incoming segments/ packets for inter-packet-arrivals delays for ' elapsed-time-interval ' expiration ignoring this end's own independent sub-flow A's ( if any ) corresponding arriving returned response ACKs stream.
  • the task should be simple enough : one end when acting as sender based would only need to monitor its own sent packets' corresponding incoming return response ACKs for ' inter-packets-interval ' delays for ' elapsed time interval ' expiration , whereas when acting as receiver based it would only need to monitor the other end's sent data segments/ data packets : further, were the other end's independent sub-flow's sent packets to continue to arrive before ' elapsed time interval ' expiration, while this end's independent sub-flow's sent packets' corresponding return response ACKs from the other end have now ' elapsed time interval ' expired in their ' inter-packets-interval ' delays , this would provide additional definite indication/ definite inference that the one way path from the other end to this end is ' UP ' & that the one way path from this end to the other end is ' DOWN ' , to react accordingly.
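  • A minimal sketch (class and method names are assumptions) of the per-direction ' inter-packet-arrivals ' watchdog described in the last few points, keeping sub-flow A and sub-flow B timers separate so that a one-way path failure can be inferred:

        class InterArrivalWatch:
            def __init__(self, elapsed_time_interval=0.080):
                self.limit = elapsed_time_interval          # 'elapsed time interval'
                self.last_seen = {"A": None, "B": None}     # sub-flow A / sub-flow B

            def on_packet(self, subflow, now):
                # record arrival of a packet belonging to the given sub-flow:
                # "A" = ACKs returning for this end's sent data, "B" = other end's data
                self.last_seen[subflow] = now

            def expired(self, subflow, now):
                last = self.last_seen[subflow]
                return last is not None and (now - last) > self.limit

            def infer_one_way_down(self, now):
                # other end's data (sub-flow B) still arriving while our ACK stream
                # (sub-flow A) has expired: the path from this end to the other end
                # is likely 'DOWN' while the return path is 'UP'
                return self.expired("A", now) and not self.expired("B", now)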
  • the regular data packets are transmitted continuously when not interrupted by RTO packet retransmission timeout re-entering slow start with CWND reset to 1 segment size.
  • the lowest possible bandwidth link along the path traversed by a packet would be 56Kbps in the worst case scenario .
  • the default packet size is usually about 500 bytes , as is usually negotiated by TCP during connection establishment .
  • the ' inter-packets-arrivals ' method may begin with ' elapsed time interval ' value settings & ' synchronisation ' interval value settings based on assumptions of 56Kbps lowest bandwidth link along the path & negotiated largest packet size, then continuously monitor the actual observed latest minimum value of received inter-packet-arrivals interval between regular data packets ( or between ACKs for actual data packets sent ) to dynamically adjust the ' elapsed time interval ' value setting & ' synchronisation ' interval value settings, eg if the latest minimum ' inter-packets-arrivals ' interval is now only 20ms then the ' elapsed time interval ' value could now be set to eg 80ms & the ' synchronisation ' interval value could now be set to eg 40ms ...etc or derived based on devised algorithms.
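  • A small sketch of dynamically tuning these intervals from the smallest observed inter-packet-arrival gap; the 4x / 2x multipliers are assumptions chosen to mirror the 20 ms -> 80 ms / 40 ms example above, and the initial gap assumes a 1500-byte packet on a 56Kbps link:

        class IntervalTuner:
            def __init__(self, initial_gap=0.214):      # ~1500 bytes * 8 / 56 Kbps
                self.min_gap = initial_gap              # smallest inter-arrival gap so far
                self.last_arrival = None

            def on_arrival(self, now):
                if self.last_arrival is not None:
                    gap = now - self.last_arrival
                    if gap < self.min_gap:
                        self.min_gap = gap
                self.last_arrival = now

            @property
            def elapsed_time_interval(self):
                return 4 * self.min_gap                 # eg 20 ms min gap -> 80 ms

            @property
            def synchronisation_interval(self):
                return 2 * self.min_gap                 # eg 20 ms min gap -> 40 ms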
  • the total amount of intervals due to the single packet transmit time delay encountered at each node along the path traversed where the node/s use store & forward switching could vary from a few milliseconds, if the nodes along the path traversed are of high bandwidth capacity links ( even if store & forward switching is implemented instead of cut through switching ), to tens or even a few hundred milliseconds if the links traversed are of low bandwidth capacities.
  • the total transmit completion time delays encountered by a single 1500 bytes size packet at each successive stage of the forwarding links with the nodes all implementing store & forward switching cf cut through switching, here assuming no congestion buffer delays whatsoever at each of the nodes traversed, would be around 24ms + 1.2ms + 0.12ms + 1.2ms + 24ms = 50.52ms , ie when finally received at destination the inter-packet-arrivals interval would centre around 50.52ms between immediately successive packets.
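  • The per-hop arithmetic above, reproduced as a short sketch; the individual link rates (roughly 500Kbps, 10Mbps, 100Mbps, 10Mbps, 500Kbps) are an assumption chosen to yield the quoted 24 ms / 1.2 ms / 0.12 ms store & forward transmit times for a 1500-byte packet:

        PACKET_BITS = 1500 * 8
        link_rates_bps = [500_000, 10_000_000, 100_000_000, 10_000_000, 500_000]

        # store & forward: the whole packet must be received before being forwarded,
        # so each hop adds one full transmit time of the packet on its outgoing link
        per_hop_ms = [1000.0 * PACKET_BITS / rate for rate in link_rates_bps]
        total_ms = sum(per_hop_ms)      # 24 + 1.2 + 0.12 + 1.2 + 24 = 50.52 ms
        assert abs(total_ms - 50.52) < 1e-6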
  • Any congestion buffer delays which increases the time it actually takes for a packet to finally arrive from source to destinations and may cause a much later sent packet ( ie not immediately successive next packet to the referenced earlier sent packet eg spanning several seconds or tens of seconds ) to take eg 300ms longer than the much earlier referenced sent packet to actually arrive at destination receiver caused by the cumulative congestion buffer delays encountered at the nodes traversed , BUT since between any two immediately successive next sent packet & the immediately previous sent packet the ' extra ' increased cumulative congestion buffer delays encountered by the immediately successive next packet compared to its immediately previous sent packet's could be only eg 3ms ie several magnitude order very much less than above eg 300ms as between two distant sent packets spanning several seconds apart ( assuming the congestion level is increasing here, the same reasonings similarly applies where the congestion level is decreasing ).
  • This ' extra ' additional congestion buffer delay would be small as between the immediately successive next packet & its immediately previous sent packet, and would only increase gradually between any subsequent pairs of immediately successive next packet & its immediately previous counterpart.
  • the congestion level could ( not impossibly ) suddenly build up eg 200ms of buffer delays within a short period eg 100ms, such as eg when the incoming link is 100Mbs & the outgoing link is only 10Mbs...etc , in which case we may here conveniently include the scenario to cater for the elapsed time interval to detect/ infer this very rare very sudden congestion buffer delay event , in addition to the congestion &/or packet drops &/or physical transmission error events.
  • TCP connection is full duplex ie each of the both ends of the connection could be sending & receiving acting as sender source TCP & receiver TCP at the same time. Even if only one end of the connection is doing almost all or all of the sending of regular data packets eg ftp file downloads/ http webpage download...etc the receiving end TCP would always be sending back Acknowledgements in response to regular data packets received back towards the end TCP doing almost all or all of the regular data packets sending.
  • the end TCP doing almost all or all of the regular data packets sending, upon ' elapsed time interval ' expiring without receiving pure ACK packets &/or piggyback ACK packets from the other end TCP receiving the downloads , could now infer detection of the congestion &/or packet drops &/or physical transmission error &/or ' very rare ' very sudden congestion level built-up events, & react accordingly.
  • the modified TCP/ modified Software Monitor/ modified proxy/ modified IP Forwarder/ modified firewall...etc may then proceed with existing coupled actual packet retransmissions simultaneous with CWND decrease/ rates decrease, &/or modified decoupled CWND decrease/ rates decrease only without accompanied by actual packet retransmissions , &/or various modified ' pause ' methods with or without accompanying CWND decrease / rates decrease...etc as described in earlier methods / sub-component methods
  • modified TCP/ modified Software Monitor/ modified proxy/ modified IP Forwarder/ modified firewall...etc may OPTIONALLY &/OR FURTHER also then proceed with causing the other end TCP to do existing coupled actual packet retransmissions simultaneous with CWND decrease/ rates decrease, &/or modified decoupled CWND decrease/ rates decrease only without being accompanied by actual packet retransmissions , &/or various modified ' pause ' methods with or without accompanying CWND decrease / rates decrease...etc as described in earlier methods / sub-component methods in the body descriptions.
  • the modified TCP/ modified Software Monitor/ modified proxy/ modified IP Forwarder/ modified firewall...etc may OPTIONALLY &/OR FURTHER also then ONLY proceed with causing the other end TCP ( without causing local TCP to do so at all ! such feature would be useful eg when the other end TCP doing almost all or all of the regular data packets sending is an existing unmodified standard TCP ) to do existing coupled actual packet retransmissions simultaneous with CWND decrease/ rates decrease, &/or modified decoupled CWND decrease/ rates decrease only without being accompanied by actual packet retransmissions , &/or various modified ' pause ' methods with or without accompanying CWND decrease / rates decrease...etc as described in earlier methods / sub-component methods in the body descriptions.
  • each end of both modified ends' TCPs would immediately know/ infer/ detect the one-way path from the other end to local end TCP is encountering congestions &/or packet drops &/or physical transmission error &/or very rare very sudden congestion level build-up event ( BUT not including rare 200ms Delayed ACK event here :
  • the local modified end's TCP would only be able to immediately know/ infer/ detect that either of, but not knowing which one definitely, the forwarding or returning paths between local modified end TCP and the other unmodified end TCP is encountering congestions &/or packet drops &/or physical transmission error &/
  • local end modified TCP may either immediately trigger & cause local end's modified TCP ( &/or optionally also ' remotely ' cause the other end's TCP ) doing existing coupled actual packet retransmissions simultaneous with CWND decrease/ rates decrease, &/or modified decoupled CWND decrease/ rates decrease only without accompanied by actual packet retransmissions , &/or various modified ' pause ' methods with or without accompanying CWND decrease / rates decrease...etc as described in earlier methods / sub-component methods in the body descriptions, OR to do so only after a further certain period eg 250ms ( user input value or some derived value based on algorithm including factors such as R
  • the ' synchronisation ' packets sent to the other modified end's TCP could simply be in the form of a generated packet with same source IP address Port number & same destination IP address & Port number as the particular per flow TCP connection, together with suitable Identifications uniquely identifying such packets as ' synchronisation ' packets : such as eg special fixed length unique identification in the data field portion or ' padding ' field portion inserted eg containing source IP address Port Number &/or destination IP address Port number , without requiring to elicit the other receiving modified end's TCP to generate returning response ACKs...etc.
  • the ' synchronisation ' packet when sent by the modified end towards the other unmodified end would need to be in the form of a packet which elicits return response ACKs from the receiving unmodified end, such as eg a generated packet with same source IP address Port number & same destination IP address & Port number as the particular per flow TCP connection together with a Duplicated Sequence Number field value not within Window which elicits a return response ACK from the receiving unmodified end ( such as sending eg an out of order Seq No packet not within window, for which receiving TCP always generates a ' do nothing ' return ACK, see Internet newsgroup topic ' Acking out of Order packet ' http://groups-beta.google.com/group/comp.protocols.tcp-ip Phil Karn Mar 2
  • the elicited returned response ACK from the other unmodified end would simply has its ACK field value set to be the Next Expected Seq Number to be received by the other unmodified end from the modified end, upon receiving this return response ACK the modified end would just discard & ignore this returned response ACK since the Next Expected Sequence Number data segment has yet to be sent .
  • both ends' TCPs implement sending of ' synchronizing ' packets to the other end's TCP .
  • This enables each end's TCP to be able to definitely ascertain/ definitely infer the one-way path from the other end's TCP to local end's TCP is congested &/or packet drops &/or physical transmission errors &/or very rare very sudden congestion level build-up ( but 200ms Delayed ACK mechanism will not be the cause now, since ' synchronising' packets mechanism is implemented here ) whenever ' elapsed time interval ' expires without receiving any packet of the same sub-flow ( including generated ' synchronisation ' packets for the same sub-flow ) from the other end's TCP.
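As an illustration only of the ' elapsed time interval '/ ' synchronisation ' packets mechanism described in the preceding bullets, a minimal Python sketch follows; the 300 ms interval is the example value from the text, and the send_sync_packet/ on_trigger callables are purely hypothetical hooks ( not part of any existing TCP stack API ) standing in for the packet generation and the CWND decrease/ ' pause '/ retransmission reactions described above.

```python
import time

ELAPSED_INTERVAL = 0.300  # the eg 300 ms 'elapsed time interval' from the text (user-configurable)

class SyncWatchdog:
    """Per-flow watchdog: if no packet (regular data, ACK or 'synchronisation'
    packet) arrives from the other end within ELAPSED_INTERVAL, treat it as a
    trigger event, since a modified other end would itself have generated a
    'synchronisation' packet to keep the interval alive."""

    def __init__(self, send_sync_packet, on_trigger):
        self.send_sync_packet = send_sync_packet   # hypothetical hook: emit a sync packet for this flow
        self.on_trigger = on_trigger               # hypothetical hook: CWND decrease / 'pause' / retransmit
        now = time.monotonic()
        self.last_rx = now
        self.last_tx = now

    def packet_received(self):
        self.last_rx = time.monotonic()

    def packet_sent(self):
        self.last_tx = time.monotonic()

    def tick(self):
        now = time.monotonic()
        # keep our own outgoing interval alive so a modified other end does not false-trigger
        if now - self.last_tx >= ELAPSED_INTERVAL:
            self.send_sync_packet()
            self.last_tx = now
        # no packet from the other end within the interval: infer that the
        # other-end -> local-end one-way path is congested / dropping / down
        if now - self.last_rx >= ELAPSED_INTERVAL:
            self.on_trigger()
            self.last_rx = now  # avoid re-triggering on every subsequent tick
```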
• More complete combination scenarios include the following ( assume both ends' modified TCPs further include the ' synchronizing ' packets method ) :
• the local end's modified TCP could only definitely infer that one of the two one-way paths is ' DOWN ' ( but not definitely whether it is the path from the local end's modified TCP to the other end's unmodified TCP, or the path from the other end's unmodified TCP to the local end's modified TCP ) ( cf when both ends are modified & implement the ' synchronisation ' packet techniques ).
• with the timestamp option selected, both one-way path latencies ( ie OTTest & OTTest(min)...etc ) could be derived instead of just RTTest & RTTest(min)...etc, to react better accordingly.
• the SACK option would enable fewer unnecessary retransmissions of packets which had already been received out-of-order.
  • the ' synchronization ' packets &/or earlier periodic probe packets method could if required be sent independently in form of new TCP connection established between the per TCP flow/s with destination IP address & Port, source IP address unchanged but source Port now assigned a different unused Port number.
• the ' inter-packets-arrivals ' ( &/or optionally ) ' synchronization ' packets method within each per flow TCP can be made operational, ie settled into the per flow TCP, only upon certain criteria/ events being fulfilled, such as eg only after the initial Sync/ Sync ACKs &/or only after a small number n of successive packets being received from the other end's TCP ( modified or unmodified ) &/or only after a small number m of successive packets being received from the other end's TCP which all arrive within the ' elapsed time interval ' of each one's immediately preceding previous packet.
• the local end's modified TCP could instead re-send/ re-transmit yet unacknowledged previously sent regular data packet/s to the other end's TCP ( which would also elicit an Acknowledgement response back from the other end's TCP ) in place of a pure ' synchronization ' packet.
  • User interface may be provided in the various earlier described modified TCPs/ modified Monitor Software/ modified TCP forwarder/ modified IP forwarder/ modified firewall in the description body, to allow user inputs of various TCP tuning/ registry parameters (eg initial ssthresh, initial RTT, MTU, MSS, Delay ACK option, SACK option , Timestamp option...
  • TCP tuning/ registry parameters eg initial ssthresh, initial RTT, MTU, MSS, Delay ACK option, SACK option , Timestamp option...
  • the ' trigger ' event such as eg 300ms ' elapsed time interval ' , 3DUP ACKs, RTO actual packets retransmission timeout ...etc
• this would only require the TCP itself to ' pause ' ( or not even pause at all ) for a defined pause-interval &/or allow a small number of packets transmission during the pause to act as probes, then either resume ( or continue without the pause ) without altering CWND/ rates limit or reduce CWND/ rates limit by x% eg 5%, 10%, 50%...etc.
• Sender based modifications have the advantage here of knowing whether the eg 300ms ' inter-packet-arrivals ' expiration was solely due to the fact that the local end Sender has no data packets to transmit to the other end, thus would not need to unnecessarily ' pause ' &/or react accordingly unnecessarily ( cf where the local end acts as receiver it would have no way of knowing whether the eg 300ms ' inter-packet-arrivals ' expiration was due to ' trigger ' events or simply because the other end's Sender has no further data packets to transmit temporarily )
• Inter-packets-arrival methods could be used in place of ' uncongested RTT * multiplicant ' methods as trigger events to react accordingly; further, if the ' synchronisation ' packets method ( here only generated from the local end modified sending source TCP but eliciting responses such as eg returning ACKs from the other end's unmodified TCP ) &/or timestamp options were incorporated, this would enable definite detection/ definite inference of which direction's link is definitely ' DOWN ' or definitely ' UP '.
  • Modified Software Monitor/ modified TCP proxy/ modified Firewall...etc here would need to perform the tasks instead of TCP stack itself .
  • the ' trigger ' event such as eg 300ms ' elapsed time interval ' , 3DUP ACKs, RTO actual packets retransmission timeout ...etc
• this would only require the modified Software Monitor/ modified TCP proxy/ modified Firewall...etc here to only ' pause ' intercepted TCP packets forwarding for a defined pause-interval &/or allow a small number of packets transmission during the pause to act as probes, then when resuming eg ' spoof ' a fixed number of ACKs to all arriving intercepted outgoing TCP packets ( to quickly restore TCP's CWND/ rates limit which might eg have been reset to 1 segment size on re-entering ' slow start ' ), &/or even eg handle all fast retransmit 3 DUP ACKS/ R
• Inter-packets-arrival methods could be used in place of ' uncongested RTT * multiplicant ' methods as trigger events to react accordingly; further, if the ' synchronisation ' packets method ( here only generated from the local end modified receiver TCP but eliciting responses such as eg returning ACKs from the other end's unmodified TCP ) &/or timestamp options were incorporated, this would enable definite detection/ definite inference of which direction's link is definitely ' DOWN ' or definitely ' UP '.
• a TCP connection is symmetrical, ie a local end may be both sending & receiving data at the same time ( even if it is not sending real data at all there are always returning ACKs generated towards the other end )
• the local end's modified TCP/ modified Monitor Software/ modified TCP proxy/ modified Firewall...etc could of course act as both sender based & receiver based at the same time.
• each end may again act as both sender based & receiver based at the same time, working together : but preferably &/or alternatively once both ends detect each other's modification presence, they could agree to each act only as sender based, or each only as receiver based, or only one end will act as both receiver based & sender based with the other end's modified operations disabled.
  • An example of the many possible ways to detect each other's modified presence is eg to send a packet to the other end with special unique fixed length Identification pattern within the ' padding field ' or fixed length data portion.
• Time period B corresponds to the total cumulative packet buffers delay introduced & experienced by the packet while being buffered at the various nodes along the path traversed : setting this value to a small period of eg 20ms here would ensure other real time critical VoIP/ VideoConference UDP packets enjoy a very good guaranteed service level, since UDP packets here would not likely encounter very much more than 20ms cumulative total buffers delay along the various nodes traversed.
• Setting B = 0 here would ensure that TCP flows would always attempt to immediately avoid any onset of packets buffering delay, keeping the network free of buffer-delays or with only very insignificant buffer-delays during the occasional intervals when they do occasionally occur.
• the time period T ms was earlier added/ could also be added here so that with the larger rates decrement percentage the flows traversing the bottleneck link ( incrementing their transmit rates as is usual with TCPs ) would now take a longer time to again reach 100% link throughput levels or more and to then require buffering, which would then impact slightly on other realtime critical guaranteed service UDP packets.
• the modified TCP upgrades or Monitor Software...etc may whenever required effect the per TCP flow/s rates throttle via CWND percentage decrement &/or via ' pauses ' in such manner...etc so as to achieve the required desired bottleneck link's throughputs ( eg to subsequently cause 100%, 99%, 95%, 85%...etc bottleneck links bandwidths utilizations, instead of the present over 100% utilization level with accompanying packets buffering delay ) subsequent to various specified ' trigger event/s ' ( eg cumulative total buffered delay of B ms encountered...etc ).
  • Various algorithms & policies & procedures may further be devised to handle all kinds of ' trigger events ' in various different manners.
• the modified TCP upgrades or Monitor Software...etc do not necessarily require prior knowledge of the inter-subnets' uncongested RTTs nor the inter-subnets' uncongested OTTs between various subnets within the proprietary network. Instead here the modified TCP upgrades or Monitor Software...etc could keep track of the current latest observed smallest RTT value or current latest observed smallest OTT value of the individual per TCP flows, and treat this as dynamically equivalent to the uncongested RTT or uncongested OTT of the individual per TCP flows. Common sense lower & upper limits could be placed on these RTTest(min) or OTTest(min) values : eg their max upper ceiling limits could be set to the known most distant location pairs' RTTmax value within the proprietary network....etc.
• the external Internet is subject to other existing unmodified TCP flows which are not within control, as they are in the proprietary network case.
• the example/s in (A) above would need to be further modified to take this into consideration.
• the ' trigger events ' to cause rates throttle decrements via CWND percentage decrements &/or ' pause/s '...etc here need to be further modified, eg not incrementing for specified or dynamically algorithmically derived s seconds after fallback to eg 100%/ 99%/ 95%/ 85%...etc ; IF again the bottleneck link's throughput utilization subsequently reaches back to 100% or more causing onset of packets buffering delay within the above s seconds, then allow transmit rates to begin increments/ growths again UNTIL ' trigger event/s ' ( which could be packet drops/ buffering delays threshold exceeded...etc ), ELSE start allowing transmit rates increments/ growths after s seconds have elapsed.
• Very fast reaction time ( instead of existing RFCs' default minimum lower ceiling value of 1 second for the dynamically derived RTO value ) of the modified TCPs here to ' pause ' &/or reduce CWND upon various ' trigger events ' would minimize the packet drops percentage; the earlier described ' continuous pause ' would further very flexibly reduce transmit rates decrement sizes ( ie from eg 64Kbytes per RTT to just 40bytes per eg 300ms ).
  • Modified TCPs here could be made more aggressive in CWND increment sizes ( &/or equivalent 'pause ' interval , ' continuous pause ' interval settings eg to be of smaller values ) in many various different ways .
• CWND could be incremented eg by a specified integer multiple or dynamically derived integer multiple of MSS per ACK received &/or per RTT instead of the existing RFCs' 1 MSS per ACK received &/or per RTT; the Ssthresh value could be initialized to a specified value &/or permanently fixed to a very large value such as to be the same as the Maximum Window Size negotiated during the TCP connection phase...etc.
  • modified TCPs could strive to decrement rates in such a way that ensuing bottleneck link/s utilization would be maintained at high throughputs eg 100%/ 99%/ 95%/ 85%...or even at various above 100% congestive buffering delay levels etc ( assuming all TCPs traversing the path were all modified TCPs ) .
• modified TCPs ( at either sender or receiver or both ) here would be in possession of prior knowledge of the uncongested source-receiver-source RTT or uncongested source-receiver OTT value, or the dynamic best estimation RTTest(min)/ OTTest(min) equivalent of the above : when all the links traversed each do not exceed their respective 100% available bandwidths ( ie no packet buffering occurs at any of the nodes traversed ), the RTT or OTT or RTTest(min) or OTTest(min) values derived from eg the returning ACKs will now be the same as the real actual uncongested RTT or uncongested OTT value ( with very small random variances introduced by nodes' processing delays/ source or receiver hosts' processing delays...etc, hereinafter referred to as V ms : this V ms variance would usually be a magnitude order smaller than other earlier described system parameters such as the specified or dynamically derived B ms...etc ).
  • modified TCPs could now eg reduce transmit rates so that the bottleneck/s' link utilization thereafter would be maintained at eg 100%/ 99%/ 95%/ 85%...etc assuming all TCPs traversing the bottleneck link/s are all modified TCPs ( now knowing the latest estimation equivalent value of the actual uncongested RTT or uncongested OTT of the per TCP flows, and value of C , the required CWND decrement percentage &/or ' pauses ' intervals or sequences of appropriate required ' pauses ' could now be ascertained to achieve the required desired end results ) .
  • Modified TCP now could eg stop any further rates increments/ growth of the TCP flows for a period s seconds ( specified or dynamically algorithm derived ) as eg described earlier to then respond accordingly as eg described earlier or in various different manners further devised.
• This particular example has the effect of achieving high utilization throughputs in addition to existing RFCs friendly fair-sharing, and also helps keep cumulative buffering delays of the traversed path/s maintained at a low level correlated to the C value : in the absence of other strong dominant unmodified TCP flows, modified TCP flows here would/ may start allowing rates increments/ growth within s seconds, to then together with all other unmodified TCP flows eventually cause a packet drops event : whereupon unmodified TCP flows would re-enter ' Slow Start ' taking a very long time to re-attain previously achieved transmit rates whereas modified TCP flows could retain an arbitrary high proportion of previously achieved transmit rates/ throughputs ( solving the existing responsiveness problems associated especially with long RTT long distance fat pipes ).
• new TCP flow/s ( &/or other new UDP flow/s...etc ) would always be able to immediately utilize up to 5% of the available bottleneck link/s bandwidths to begin flow rates increments/ growth without introducing packets buffering delay/s along the route; further, the bottleneck link/s would be able to immediately accommodate a new additional sudden instantaneous traffic surge of X milliseconds equivalent of available bandwidths without dropping packets ( most Internet nodes commonly have between 300ms - 500ms equivalent buffer sizes ) : this is consistent with the common wisdom of preserving existing flows' established throughputs while allowing gradual controlled new additional flows' growths.
  • modified TCP could always allow rates increments/ growth conservatively as in existing RFCs linear growth or more aggressively ( instead of throttling back upon
  • Website servers/ servers farm could advantageously implement above described modified TCP implementations.
  • Typical websites are often optimized to be of around 30Kbytes - 60 Kbytes for speedy downloads (for an analog 56K modem downloading at around 5 Kbytes/sec continuously uninterrupted by packet/s drops...etc this will still take around 6 seconds - 12 seconds ).
  • sending source server's modified TCP would have an initial very first estimation of the uncongested RTT or uncongested OTT of the per TCP flow/s in form of current latest observed minimum source-receiver-source RTTest(min) or source-receiver OTTest(min) value ( whether it is representative of the actual uncongested RTT or uncongested OTT value, or not ) .
• modified TCP here could very quickly react accordingly ( much much faster than the existing RFCs' minimum lowest floor default reaction time of 1 second minimum ) in manners as described/ briefly illustrated in the preceding above, eg rates decrement to ensure certain levels of subsequent bottleneck link/s utilization/ throughput ( instead of existing RFCs' rates halving & ensuing prolonged periods of under-utilization of bandwidths ), &/or more controlled aggressive subsequent rates increments/ growths, &/or more controlled buffer delay levels congestion avoidance ( eg ' wait s seconds before allowing rates increments/ growths '...etc, instead of the present existing RFCs' only scheme of ' wait for packet/s drops ' )...etc.
• Monitor Software/ TCP Proxy...etc could even keep the resident host's effective transmit window &/or CWND permanently fixed at a certain required size or even at the maximum negotiated Window Size at all times with the above mentioned combinations of techniques, methods & sub-component methods, leaving the transmission rates to be controlled via only ' pause '/ ' continuous pause ' &/or allowing 1 single or a small fixed number of packets to be forwarded during each pause interval to act as ' probes '.
  • sending source server's modified TCP may instead now immediately begin sending the very 1 st data segments/ packets starting immediately with existing RFCs Slow Start's CWND window of 1 MSS segment size, but this may take many RTTs now to complete the contents transfer around tens of seconds to minutes as is in end users' typical common daily experience.
• receiver modified TCP or Monitor Software could now derive the source-receiver path's estimation equivalent of the actual uncongested one-way-trip-time of arriving packets, ie the current latest observed OTTest(min).
  • the cumulative total buffering delays if any, encountered by any arriving packet could be derived by subtracting arriving packet's OTT by OTTest(min) ( ignoring any usually very small random variances introduced by nodes' packets processing/ forwarding time fluctuations ) .
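A minimal sketch of the derivation just described, assuming the timestamp option supplies each arriving packet's OTT in milliseconds; OTTest(min) is simply the smallest OTT observed so far, and the function name is illustrative.

```python
ott_est_min = float("inf")   # current latest observed OTTest(min)

def buffering_delay_ms(arriving_ott_ms):
    """Cumulative buffering delay experienced by this packet, per the bullet
    above: arriving OTT minus the latest observed minimum OTT (small node
    processing variances ignored)."""
    global ott_est_min
    ott_est_min = min(ott_est_min, arriving_ott_ms)
    return arriving_ott_ms - ott_est_min

buffering_delay_ms(200)            # first packet establishes OTTest(min) = 200 ms
print(buffering_delay_ms(230))     # -> 30 ms of cumulative buffering delay
```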
• Modified TCP or Monitor Software would now be in a position, now armed with the estimation equivalent of the source-receiver path's actual uncongested OTT & buffering delay levels, to react accordingly ( remotely cause the sending source TCP to ' pause ' &/or ' continuous pause ' with 1 single packet forwarding allowed per pause interval, &/or ' unpause ', &/or increment CWND sizes via Divisional ACKs/ multiple DUP ACKs/ Optimistic ACKs, &/or pre-empt RTO timeout via early 3 DUP ACKs fast retransmit, &/or etc ) as desired to achieve the maximum bandwidth utilization/ throughput criteria specified while preserving friendly fair-sharing.
• receiver modified TCP or Monitor Software may instead very simply wait a specified W milliseconds ( eg 250ms ) interval for the next packet to arrive since the arrival time of the latest last received immediately previous packet & if this does not arrive within W milliseconds to then treat this as a ' trigger event ' ( most likely the following packet was buffer-overflow congestion dropped ) to then immediately react accordingly ( remotely cause the sending source TCP to ' pause ' &/or ' continuous pause ' with 1 single packet forwarding allowed per pause interval, &/or ' unpause ', &/or increment CWND sizes via Divisional ACKs/ multiple DUP ACKs/ Optimistic ACKs, &/or pre-empt RTO timeout via early 3 DUP ACKs fast
  • latest packet RECEIVED LOCAL SYSTEM TIME received from remote sender, pure ACK or regular data packet
  • latest receiver packet's advertised window size sent by local MSTCP to remote sender
• latest receiver packet's ACK Number ie next expected Seq Number expected from remote sender, sent by local MSTCP to remote sender ( this requires per flow incoming & outgoing packets inspections, & we now should be able to immediately remove the per flow TCP table entry upon FIN/ FIN ACK, not just waiting for the usual 120 seconds inactivity ).
  • remote sender's CWND eg 64Kbytes user specified or dynamically algorithm derived, eg could also set to smaller or larger scaled sizes dependent on end user last mile link's bandwidth capacity.
  • eg 64K which is the usual default maximum window size negotiated unless window scaling option selected, this could enable remote external Internet website's contents to be downloaded within just a single RTT compared to usual tens of seconds experienced ).
  • TCP uses a three-way handshaking procedure to set-up a connection.
• the initiating side makes a note of Y and returns a segment with just the ACK flag set and an acknowledgement field of Y + 1. 2. If eg 300ms ( user specified or dynamically algorithm derived ) expires without receiving the next packet then :
• transmit rates decrement via CWND size percentage reduction eg [ ( present observed RTT - current latest recorded RTTest(min), or present observed OTT - current latest recorded OTTest(min) ) + T ms ] / present observed RTT or OTT ; note here that T = 0 ms implies causing the subsequent bottleneck link's throughput to be 100% of available bandwidth , &/or pause interval set to [ ( present observed RTT - current latest recorded RTTest(min), or present observed OTT - current latest recorded OTTest(min) ) + T ms ]
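A small sketch of the decrement formula in the bullet above, under the stated reading that T ms is the extra tolerated buffering delay ( T = 0 aiming for 100% bottleneck utilization ); the same expression applies with OTT/ OTTest(min) when one-way timings are available.

```python
def cwnd_reduction_fraction(observed_rtt_ms, rtt_est_min_ms, t_ms=0.0):
    """Fraction by which to cut CWND (equivalently, the relative 'pause'):
    ( observed RTT - RTTest(min) + T ) / observed RTT, as written above.
    The same formula applies with OTT / OTTest(min) when timestamps are used."""
    return max(0.0, (observed_rtt_ms - rtt_est_min_ms + t_ms) / observed_rtt_ms)

# example: RTTest(min) = 200 ms, present observed RTT = 250 ms, T = 0 ms
frac = cwnd_reduction_fraction(250, 200)      # 0.2 -> decrement CWND by 20 %
pause_ms = (250 - 200) + 0                    # or equivalently 'pause' for 50 ms
print(frac, pause_ms)
```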
  • Inter-packets-arrivals techniques could be adapted for use, likewise ' Synchronising Packets ' technique
• bandwidths/ links probing techniques eg pathchar/ pipechar/ pathchirp...etc could be deployed in conjunction to derive finer levels of knowledge of the path/ nodes/ links traversed, to react accordingly better.
• modified TCPs enable approximately double the good throughputs/ bottleneck bandwidths utilization compared to existing RFCs TCPs which very much under-utilise the link/s' bandwidth capacity ( as is very apparent from the AIMD additive-increase-multiplicative-decrease ' saw-tooth ' utilizations/ throughputs graphs of existing RFCs TCPs )
• Sender TCP may or may not want to utilise the algorithm during the initial 64Kbytes of data packets transfer if eg ( the returning ACK RTT for the 1st regular data packet sent ) - ( the returning ACK RTT for the SYNC ACK sent ) > C ms eg 100ms ( due to a very sudden increase in congestion level of the path traversed )
• Broadband networks have very very low loss rates, very very low congestions.
• Http ( port 80 signature ) flows should be allowed to send the eg 64Kbytes whole content in eg 1 RTT. Even if the SYNC/SYNC ACK/ACK phase encounters retransmission ( RFC default 1 sec. ) this would only encourage use of the initial 64Kbytes CWND since flows along the bottleneck link have now likely halved rates...
• Receiver TCP ( or Receiver Monitor Software...etc ) upon SYNC/ SYNC ACK then ACK with a window size of eg 4Kbytes/ 16Kbytes/ 64Kbytes/ or W1 Kbytes...etc, upon receiving 4Kbytes/ 16Kbytes/ 64Kbytes/ or any specified number of W1 or fraction of W1 Kbytes to then increase the advertised Receiver Window Size to W2 Kbytes eg N2 * ( 4Kbytes/ 16Kbytes/ 64Kbytes or W1 Kbytes etc ) where N2 is a fraction eg 1.5/ 2.0/ 3.5/ 5.0 etc or algorithmically derived part of....
• Sender TCP ( or Sender Monitor Software...etc ) upon SYNC then SYNC ACK with a window size of eg 4Kbytes/ 16Kbytes/ 64Kbytes/ or W1 Kbytes...etc, upon receiving returning ACKs acking 4Kbytes/ 16Kbytes/ 64Kbytes/ or any specified number of W1 or fraction of W1 Kbytes to then increase the Sender Window Size to W2 Kbytes eg N2 * ( 4Kbytes/ 16Kbytes/ 64Kbytes or W1 Kbytes etc ) where N2 is a fraction eg 1.5/ 2.0/ 3.5/ 5.0 etc or algorithmically derived part of.... & so forth for
  • Note Sender based Monitor Software ...etc may modify intercepted incoming packets from remote receiver modifying the Advertised Receiver Window sizes ( before forwarding the modified packet to Sender TCP )...thus achieving the new TCP congestion control method based solely on the continuously incremented Advertised Receiver Window Size
• TCP could be symmetric, one end could be both Sender & Receiver, ie the above Method then needs to be implemented bi-directionally.
  • the method would enable arbitrary finer more flexible more variety of control/ pacing of packets transmissions, while ( if required ) preserving ( or offered similar corresponding mechanisms ) all other existing TCP error control/ congestion control mechanisms like slow start/ congestion control linear increase/ 3 DUP ACKs fast retransmit/ timeouts... etc
• Sender's CWND should be initialised to the desired initial value 4Kbytes/ 16Kbytes/ 64Kbytes/ or W Kbytes...etc, or Receiver may eg send 3 + DupNum DUP ACKs or a series of such DUP ACKs at various times or Optimistic ACK...etc to ramp up CWND initially ( existing RFC 2414/ 3390 already allow a 4 Kbytes initial CWND value, in which case there is no need to ramp up CWND ).
  • receiver may ' rates limit ' sender's rate of packets injections without needing sender to send out packets evenly spaced/ evenly delayed inter-packets.
• sender's max transmit rate is dependent on min( swnd, cwnd, rwnd ) - unacked sent segments ( ie unacked sent segments decrease the usable swnd & acked segments increment the usable swnd, if swnd here is fixed at the same initially negotiated window size throughout ), & the continuous increment/ decrement/ adjust RWND Method will take this into account in the rwnd updates.
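A one-function sketch of the sending allowance just described; the window names follow the bullet ( swnd, cwnd, rwnd ) and the count of unacked sent bytes is assumed to be tracked elsewhere.

```python
def sendable_bytes(swnd, cwnd, rwnd, unacked_bytes):
    """Maximum further bytes the sender may inject right now:
    min(swnd, cwnd, rwnd) minus bytes already sent but not yet acked.
    The continuous rwnd increment/decrement method therefore paces the
    sender simply by choosing the rwnd value it advertises."""
    return max(0, min(swnd, cwnd, rwnd) - unacked_bytes)

print(sendable_bytes(swnd=64 * 1024, cwnd=48 * 1024, rwnd=16 * 1024, unacked_bytes=12 * 1024))  # -> 4096
```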
  • remote server TCP transmit rates could now be paced by adjusting only the rwnd ( remote server's cwnd, ssthresh , swnd now always could be maintained at arbitrary large or very large values )
• receiver based software could dynamically pace the remote sender's transmit rates via dynamic selection of the rwnd window update values, thus could modify all rwnd field values in all intercepted receiver MSTCP generated packets destined for remote server TCP to the required rwnd values to pace the sender's transmit rates ( this would require packet checksum recomputation/ modification )
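Where intercept software rewrites the advertised window ( rwnd ) field of packets in flight, the TCP checksum must indeed be recomputed; the sketch below patches it incrementally in the style of RFC 1624 on a raw TCP header ( field offsets per the standard TCP header layout; function names are illustrative, and the example checksum value is arbitrary ).

```python
def ones_complement_add(a, b):
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

def update_checksum_for_window(old_checksum, old_window, new_window):
    """Incrementally adjust the TCP checksum after rewriting the 16-bit
    window field, RFC 1624 style: HC' = ~( ~HC + ~m + m' )."""
    c = (~old_checksum) & 0xFFFF
    c = ones_complement_add(c, (~old_window) & 0xFFFF)
    c = ones_complement_add(c, new_window & 0xFFFF)
    return (~c) & 0xFFFF

def rewrite_rwnd(tcp_header: bytearray, new_window: int) -> None:
    """Rewrite the window field (bytes 14-15 of the TCP header) in place and
    patch the checksum field (bytes 16-17) accordingly."""
    old_window = int.from_bytes(tcp_header[14:16], "big")
    old_checksum = int.from_bytes(tcp_header[16:18], "big")
    tcp_header[14:16] = new_window.to_bytes(2, "big")
    new_checksum = update_checksum_for_window(old_checksum, old_window, new_window)
    tcp_header[16:18] = new_checksum.to_bytes(2, "big")

# example: clamp an intercepted packet's advertised window down to 16 Kbytes
hdr = bytearray(20)
hdr[14:16] = (65535).to_bytes(2, "big")    # original advertised window
hdr[16:18] = (0x1C46).to_bytes(2, "big")   # pretend original checksum value
rewrite_rwnd(hdr, 16 * 1024)
```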
• receiver based software/ TCP could advantageously monitor arriving OTT values from the timestamp fields, while the OTT values remain the same as the latest OTTest(min) ( or the same as a prior known actual uncongested OTT ) within small allowed variances ( eg due to small variances in the sender's OS/ stack CPU processing time )
• receiver software/ TCP may increment rwnd ( whether emulating slow start exponential rwnd growth &/or congestion avoidance linear growth ) continuously so long as the arriving OTT value does not exceed the latest OTTest(min) ( or the actual uncongested OTT ) ie no buffer delays along the path ( &/or optionally decrement downwards if the arriving OTT exceeded OTTest(min) ) ; further, when the arriving OTT value then exceeds the latest OTTest(min) ( or the known actual uncongested OTT ) by eg a specified 10ms/ 50ms/ 100ms...etc ( eg due to other non-modified existing TCP flows incrementing their rates even when packets start to be buffered, or UDP traffics ) receiver based software/ TCP may now choose to allow
• newly established TCPs may be allowed to grow their transmit rates or rwnd or cwnd until not more than eg 100ms extra delay over OTTest(min) or RTTest(min) or their known actual values, & all modified TCPs upon experiencing eg > 100ms extra delay would all reduce their transmit rates or rwnd or cwnd...etc by a certain percentage eg 10%/ 15%/ 25%...etc ( this favours pre-existing established flows but also allows newly established TCPs to begin attaining their transmit rates growth ).
  • Another scheme will be to allow continuous transmit rates or rwnd or cwnd...etc growth until onset of packets starts being buffered ( indicated by extra delays in OTTest(min) or RTTest(min) of latest OTT or RTT ) whereupon their transmit rates or rwnd or cwnd will be decremented backwards one step ( thus oscillating incrementing forward & decrementing backwards around the 100% utilisations level ).
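A sketch of the receiver-side rwnd pacing loop outlined in the last few bullets, under stated assumptions: rwnd grows while the arriving OTT stays at OTTest(min), and backs off one step once the extra delay exceeds a threshold ( the eg 100 ms figure from the text ); the MSS-sized growth/ backoff steps are illustrative choices, not prescribed by the text.

```python
MSS = 1460
EXTRA_DELAY_LIMIT_MS = 100.0   # assumed threshold from the text (eg 100 ms)

class RwndPacer:
    def __init__(self, initial_rwnd=4 * MSS, max_rwnd=256 * 1024):
        self.rwnd = initial_rwnd
        self.max_rwnd = max_rwnd
        self.ott_est_min = float("inf")

    def on_packet(self, arriving_ott_ms):
        """Return the receive window to advertise next, per the scheme above."""
        self.ott_est_min = min(self.ott_est_min, arriving_ott_ms)
        extra_delay = arriving_ott_ms - self.ott_est_min
        if extra_delay <= 0.0:
            # no buffering along the path: keep growing (linear growth shown here)
            self.rwnd = min(self.rwnd + MSS, self.max_rwnd)
        elif extra_delay > EXTRA_DELAY_LIMIT_MS:
            # packets are being buffered beyond the allowed extra delay: back off one step
            self.rwnd = max(MSS, self.rwnd - MSS)
        return self.rwnd
```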
  • the ' pause ' interval may also be derived from the latest OTT or RTT value just before congestion drops detected & the OTTest(min) or RTTest(min) or known uncongested actual OTT or RTT value : eg if latest OTT just before congestion drops event is 700ms & OTTest(min) is 200ms then could now set the 'required ' pause interval to eg 500ms ( 700ms - 200ms ) to just totally clear all the nodes' buffered packets or even more eg 600ms or less eg 400ms as required.
  • remote server may correspondingly choose a scaled sender window size, however it may also simply allow receiver to scale but to choose not to scale its own sender's window size : this doesn't matter much ( even if such negotiated window size/s are far too big for the last mile &/or first mile physical bandwidths eg 56K/ 500Kbs...etc ) .
• sender does a similar window scaling factor as receiver; this could enable very simple ready usage of this method, without any new software or modified TCP required, by eg simply setting the receiver PC's TCPWindowSize registry value to eg 1 & a scale factor of eg 12 ( ie 2^12, minimum window size resolution now being approx 4Kbytes ) thus the sender's effective transmit window will at all times be limited to approx 4 Kbytes since the receiver would now only ever set its rwnd to at most 4Kbytes at all times ( whereas with a receiver PC's registry setting or application socket buffer's setting of TCPWindowSize registry value of 2 & scale factor of 14 this gives a resolution of approx 16Kbytes * 2 ie 32Kbytes )
  • receiver then where required modifies all intercepted outgoing packets ensuring each of their receiver window size field at all time does not exceed a suitable upper ceiling value eg 16Kbytes for 56K receiver last mile's dial-up or eg 96Kbytes for 500kbs receiver's last mile DSL...etc
  • the receiver may pace the sender's injection rates of packets into the network by slowly increasing the receiver window size field of outgoing packets eg immediately after TCP establishment receiver may send an evenly spaced & timed series of eg 16 pure window update packets every eg 62.5 ms for eg 1 second starting with 4 Kbytes then 8Kbytes then 12Kbytes....then 64Kbytes ( instead of advertising 64Kbytes upper ceiling window size immediately which would cause packets burst ) thus ensuring no sudden large packets burst from sender ( note returning ACKs if any during this series of window size updates would increase the packets injection rates possible , receiver however may optionally reduce the window update size values taking this into considerations ).
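The evenly spaced window-update series in the bullet above can be generated as below; the 16-step/ 62.5 ms/ 4 KB-to-64 KB figures are the example values from the text.

```python
def window_update_schedule(steps=16, interval_s=0.0625, step_bytes=4096):
    """Yield (delay_seconds_from_connection_setup, advertised_window_bytes)
    pairs: 4 KB, 8 KB, ... 64 KB, one every 62.5 ms over roughly one second,
    instead of advertising the 64 KB ceiling at once and provoking a burst."""
    return [(i * interval_s, (i + 1) * step_bytes) for i in range(steps)]

for delay, win in window_update_schedule():
    print(f"t={delay:6.4f}s  advertise rwnd={win} bytes")
```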
  • Receiver may optionally modify outgoing packets' receiver window size field values at any time where appropriate.
  • window size update/ modifications could be carried in any desired manners of increments/ decrements/ adjustments at all times, possibly taking into consideration the latest outgoing returning ACKs' values sent...etc. This could be useful to fetch http website contents in fastest optimal manner immediately after TCP connection establishment ( ie then pacing sender to send at eg receiver's last mile physical maximum line rates possible : note causing sender to immediately burst all eg 64Kbytes contents in one RTT may be counter-productive).
• the ' pause ' method may here specify a Timeout period which is the uncongested RTT/ OTT ( or latest estimated uncongested RTT/ OTT ) value between the two ends plus eg 200ms of buffer-delays, & a ' pause-interval ' upon Timeout of eg 150ms -> the bottleneck link's bandwidth here could be constantly 100% utilized at all times, since the ' pause ' method here strives to keep the cumulative traversed path's buffers' occupancy within a small range at all times ie the bottleneck link could always be 100% utilized.
• sender's CWND mechanism here would at some stage become redundant for achieving congestion control purposes ( except where other component methods, such as the Inter-Packet-Arrivals method plus 3 + DupNum DUP ACKs to rapidly increment CWND size upon congestion trigger events averting RTO timeout events...etc, are not incorporated ), in which case CWND would continue to only play the part of network available bandwidth probing during the very initial stage exponential &/or linear growth to attain very large values ( even though the connection's maximum transmit rate is at all times limited to the eg comparatively very small rwnd value which the receiver advertises in scaled shifted format eg instead of advertising a rwnd value of 64K the receiver TCP now advertises only 4 if the maximum scale factor 14 is utilised, signifying a rwnd value of 4 left shifted 14 places ie same as 64K : NOTE even though both ends now permit/ negotiated very large maximum scaled window sizes, receiver TCP would only ever be able to advertise its usual physical current latest
• This method at its simplest requires only users to set their local PCs' TCP registry parameters to utilize a large window scale factor such as a scale factor of eg 12, whereas the 16 bit usual TCPWindowSize value can be set as small or as large as is required eg 1 byte to 64Kbytes : with user PC scale factor of 12 ie maximum possible scaled window size value of 256Mbytes & user PC TCPWindowSize value of just 1, and remote server negotiated scale factor of eg 12 & remote server TCPWindowSize of eg 64Kbytes, the remote server maximum transmit rate at any time will not exceed the user PC scaled window size of 4Kbytes ( 1 * 2^12 ) per RTT ( assuming intermediate softwares, if any, do not intercept & modify rwnd field values of outgoing packets from user PCs to be larger than 4Kbytes ).
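The effective window implied by a TCPWindowSize value and a negotiated window scale factor is simply the 16-bit value left-shifted by the scale factor; the numbers below reproduce the example in the bullet above.

```python
def scaled_window(tcp_window_size, scale_factor):
    """Effective advertised window = 16-bit window field value << scale factor."""
    return tcp_window_size << scale_factor

print(scaled_window(1, 12))            # 4096 bytes: user PC advertising '1' with scale factor 12
print(scaled_window(64 * 1024, 12))    # 268435456 bytes (256 MB): the maximum possible scaled window
```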
  • remote server's Ssthresh value is usually initialized to be same as the rwnd value negotiated during TCP connection establishment.
• To implement this method at the sender remote server requires only the remote server's TCP stack to fix its Ssthresh values to be arbitrarily very large eg to ' infinity ' & to utilize the window scale option for TCP connection negotiations ( &/or fix its CWND value to its largest attained growth throughout, ie CWND could continuously increment eg from the initial RFC value of 1 SMSS but never be decremented ).
  • the flow's transmit rates or throughput or CWND graph here would show the well known ' saw tooths ' pattern slow linear climbing to maximum then sudden drop back to near ' 0 ' repeatedly ie it's immediately apparent that up to half the link's physical available bandwidths are being wasted not utilized, whereas modified TCP flow would exhibit transmit rate or throughput or CWND graph of near constant 100% link's physical available bandwidth utilization ie possibly up to double the throughputs / halved the transfer completion time of unmodified TCP flows .
• the TCP flow's graph would show a mixture of sudden dropping to half the previous transmit rates level & to near ' 0 ', thus modified TCP flows would show somewhere between 33% - 100% more throughputs compared to unmodified TCP flows -> enabling possibly up to instant doubling of the link's ' apparent ' physical bandwidths, where the link may be leased lines/ intercontinental submarine optical cables/ satellites/ wireless...etc.
• Receiver TCPs should allow sender TCP to negotiate the window scale option, but receiver TCP's own receive maximum window size should preferably be kept relatively small so as to just be able to fully utilise the ' bottleneck link's bandwidth capacity ' of the path traversed by IP packets ( the bottleneck link here is usually either the sender's first mile media eg DSL or the receiver's first mile eg leased line ) : eg assuming the uncongested RTT between the two ends is eg 100ms & stays constant at this eg 100ms value throughout, and the bottleneck link's bandwidth capacity is 2 mbs, the receiver maximum window size here should be kept/ set relatively small to just eg 25.6 Kbytes ( This ensures sender TCP's ' effective window size ' at any time does not exceed 25.6 Kbytes thus it would not transmit at rates higher than 2 mbs at any time, even though sender TCP's CWND could grow to quickly attain/ far exceed receiver'
  • sender TCP's CWND could very quickly re-attain & exceed receiver's maximum window size of eg 25.6 Kbytes in just 5 * eg 100 ms RTT ie in just 500ms ).
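The receiver maximum window size in the preceding example is essentially the bandwidth-delay product of the bottleneck; a minimal sketch ( 2 mbs and 100 ms are the figures from the bullet, giving roughly the quoted 25.6 Kbytes ).

```python
def receiver_max_window_bytes(bottleneck_bps, uncongested_rtt_s):
    """Bandwidth-delay product: the smallest receiver maximum window that still
    lets the sender keep the bottleneck link fully occupied over one RTT."""
    return bottleneck_bps * uncongested_rtt_s / 8

print(receiver_max_window_bytes(2_000_000, 0.100))   # ~25 KB, as in the 2 mbs / 100 ms example
```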
  • the transmit rates graph/ instantaneous throughput rates graph ( as could be seen using Ethereal's IO- Graphs traffics display analysis facility http://ethereal.com ) here would exhibit almost constant closer to 100% link bandwidths utilization ie the graph here would resemble ' square wave signal form ' with top flat plateaus closer to 100% link utilization level , compared to existing standard TCPs which almost invariably exhibits ' saw-tooths ' forms with plateaus at the valleys of the saw-tooths much farther away from 100% link utilization level .
• the RTTs between two ends could vary by a magnitude order over time ( eg from 10s of milliseconds to 200 ms ) unless the end to end connection's RTT is guaranteed by the carrier's IP transit Service Level Agreement guaranteed RTT/ bandwidth, thus ' throttling ' the sender's transmit rates to the bottleneck link's bandwidth capacity via eg receiver maximum window size...etc would suffer magnitude order throughputs &/or ' goodputs ' degradation during such times when such RTTs over the public Internet lengthen : it is much better to set the receiver's maximum window size here to much larger values to be able to accommodate such lengthening public Internet RTT scenarios eg were the receiver's maximum window size now set to eg 8 * the earlier eg 25.6 Kbytes then the end-to-end throughputs &/or ' goodputs ' could be maintained close to 100% of the bottleneck link's bandwidth capacity at any time assuming the RTTs does not length
• when sender TCP's CWND is stabilized & non-increasing ( eg when CWND has reached the maximum sender window size value ) it is the ACKs self-clocking feature that regulates how much sender TCP could transmit ( the TCP Sliding Window ), ie according to the rate of arriving returning ACKs, and the maximum rate of these returning ACKs is in turn limited to the bottleneck link's bandwidth capacity of the traversed path ie how fast data from the sender could be forwarded along the bottleneck link, & this is approximately equal to the bottleneck's bandwidth in bytes per second ( if ignoring the eg 40 bytes overhead required for the non-data IP packet header ).
• sender's CWND will be successively incremented by an amount equal to the bottleneck link's bandwidth capacity in each following successive RTT, each successive RTT slightly longer than the immediately previous RTT due to the successive eg 100 ms equivalent amount of extra buffered packet traffics introduced by the incremented CWND ( or incremented effective window ) until eg the 4th successive RTT where the bottleneck node now runs out of buffers thus causing packets to be dropped.
• Sender would then likely fast retransmit the dropped packets upon receiving 3 DUP ACKs from receiver TCP, in which case even the now halved CWND & Ssthresh values would still almost invariably remain much larger than the relatively small receiver maximum window size value -> thus sender TCP would thereafter continue to transmit at the same previous rates undiminished by these packet drops events, and with ACKs returning at the rate equal to the bottleneck link's bandwidth capacity the sender's transmit rate now would continue to be at the exact maximum rate equal to the bottleneck link's bandwidth capacity ( assuming this is equal to or smaller than the receiver's maximum window size ).
• sender may also RTO Timeout retransmit the dropped packets only after the minimum 1 second existing RFC default minimum time period, if not already taken care of by the receiver's 3 DUP ACKs fast retransmit request, but these will be very much rarer : in which case sender's CWND would still very quickly exponentially increase in just a few RTTs to re-attain/ exceed the relatively small receiver's maximum window size value ( helped by the ' arbitrary ' large Ssthresh value ).
• Sender's CWND here would ' exponentially ' grow to very large values ( tending towards the ' maintained ' arbitrary large Ssthresh value ) despite periodic fast retransmit halving of CWND & Ssthresh values.
• once sender TCP's CWND has attained/ exceeded the receiver's maximum window size, it will thereafter predominantly be its received share of the returning ACKs' self-clocking rate, the total rate of which is at most equal to the bottleneck link's bandwidth capacity at any time, that will henceforth dictate sender TCP's transmit rate.
• the other end's TCP response variances in generating reply ACKs may reduce the returning ACKs' rates to below that of the bottleneck link's bandwidth capacity; buffer delays at intervening nodes along the path traversed ( lengthening RTTs )...etc may reduce the total returning ACKs' rates to all TCP flows traversing the bottleneck link to below/ less than 100% of the bottleneck link's bandwidth capacity ( hence setting the receiver's maximum window size to be larger than the very minimum size required to fully utilise 100% of the bottleneck link's bandwidth capacity assuming the same uncongested RTTs throughout the TCP session, sufficient to compensate for such variances, would enable 100% bottleneck link's bandwidth utilization at all times despite such variances )
  • sender's maximum Window Size & CWND values can be arbitrary large at any time ( helped maintained so by ' arbitrary ' large Ssthresh value )
• with the relatively small receiver maximum window size value, the end-to-end TCP connection utilizing the above ' unrequired ' but intentional ' large scaled sender window size & relatively small receiver maximum window method ' here would tend towards a stabilized transmit rate equal to the bottleneck link's bandwidth capacity ie the transmit rates or throughput graph here would exhibit a near 100% link utilization level ' square wave form '.
• edit TCP registry ( &/or optionally per individual application's own socket buffer size ) to ensure all new TCPs request a large Window Scale factor of 14 and TCPWindowSize of 64K ( ie max 1 Gigabyte ), preferably SACK enabled, preferably no Delay-ACK.
  • CWND & Sender window size could be arbitrary large , & does not play any further part in congestion controls ( once CWND attained size much greater than receiver's maximum window size ! ! !
• thereafter it is the receiver's ACKs self-clocking feature that adjusts the maximum possible sending rates to the available bottleneck link's bandwidth, but of course, the receiver can continue to dynamically adjust the advertised receiver window size to further exert control on the sender's transmit rates, or the intercept software residing at the sender end may optionally dynamically modify incoming packets' receiver window size to exert similar control on the sending MSTCP's transmit rates/ ' effective window ' ), OR
• intercept software could always modify the receiver window size field values in incoming packets from the remote receiver to be of any required smaller maximum values ( whether dynamically derived eg from the latest recorded minimum inter-returningACKs-interval & uncongested RTT/ OTT values or estimates...etc, or the user may specify specific values from prior knowledge of the traversed bottleneck link's bandwidth capacity ), thus ensuring sender TCP's effective window size never exceeds the size level needed to match the traversed bottleneck link's bandwidth capacity -> now there is no need to recourse to the receiver's system resource constraints to limit the dynamic receiver's advertised window size field value, and both the sender's & receiver's maximum window size values can together be both negotiated to the same arbitrary very very large scaled window size values.
• sender's CWND definitely gets built up to a sufficiently large or very large value ab initio upon ftp's TCP data transfer channel establishment, else an immediate packet drop at this very initial stage may cause the sender's Ssthresh to be set to half of the present initial very small CWND value : this could be achieved eg by the intercept software storing a number eg 10 of the very 1st initially sent data packets & performing actual retransmissions to the remote receiver of any of the eg 10 packets which were not received ( ie checking incoming returning ACKNo during this time to detect missing packets not received at the remote receiver TCP, & discarding/ modifying/ or not forwarding such arriving packets back to local MSTCP to prevent local MSTCP from resetting Ssthresh value to half the present initial very small CWND value at this time )
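A sketch of the intercept-software behaviour described in the bullet above, assuming it sits between local MSTCP and the network: it caches copies of the first N data packets, retransmits any that returning ACK numbers show were lost, and withholds those early loss indications from local MSTCP so that Ssthresh is not collapsed; N = 10 follows the example, and the forwarding callables are hypothetical.

```python
FIRST_N = 10  # number of very first data packets to cache, per the example

class EarlySegmentGuard:
    def __init__(self, forward_to_network, forward_to_mstcp):
        self.forward_to_network = forward_to_network   # hypothetical raw-send hook
        self.forward_to_mstcp = forward_to_mstcp       # hypothetical local-delivery hook
        self.cache = {}                                # seq_no -> raw packet bytes
        self.cached = 0

    def on_outgoing_data(self, seq_no, raw_packet):
        if self.cached < FIRST_N:
            self.cache[seq_no] = raw_packet
            self.cached += 1
        self.forward_to_network(raw_packet)

    def on_incoming_ack(self, ack_no, raw_packet, is_dup):
        # drop cached segments the receiver has now acknowledged
        for seq in [s for s in self.cache if s < ack_no]:
            del self.cache[seq]
        if is_dup and ack_no in self.cache:
            # receiver is missing one of the very first segments: retransmit it
            # ourselves and do NOT pass the DUP ACK on to MSTCP, so its Ssthresh
            # is not reset to half of the still-tiny initial CWND
            self.forward_to_network(self.cache[ack_no])
            return
        self.forward_to_mstcp(raw_packet)
```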
• the above schema could ensure two or a small number of packets are available for forwarding onwards to the remote receiver one immediately after another in the very quick successions allowable by the immediate 1st mile link's bandwidth, to ensure the traversed path's latest best estimate of the bottleneck link's bandwidth capacity is continuously updated from the subsequent arriving latest recorded minimum inter-returningACK-interval value ( eg waiting till two or a small number of packets are available before forwarding them onwards together...etc,
  • the actual bottleneck link's bandwidth capacity could further be derived on the finer level of bytes per second instead of packets of certain size per second , and the transmit rate pace &/or transmit rate pause techniques could be adapted to utilise this derived common finer granularity of bytes per second knowing the actual size of the pending packet size to be transmitted onwards ).
• the schema here could utilise its own devised algorithm for incrementing/ decrementing the paced transmit rate, different from the existing RFCs' Sliding Window congestion avoidance mechanism.
• the transmit rates here should exhibit the same constant near 100% bottleneck link's utilisation level ' square wave form ' & at all times the transmit rates will oscillate within a very small band around the near 100% bottleneck link's utilisation level.
• local intercept software here could generate window size update packets, or modify the receiver window size field values in incoming packets from remote receiver TCP to eg ' 0 ' or very small values as required, towards local MSTCP to temporarily ' stop ' local MSTCP from generating/ sending out new packets ( or to reduce the packets sending rates of local MSTCP ), such as when the number of packets in the intercept software's forwarding buffer packets queue exceeds a certain number or total size. This prevents an excessively very large packets queue from building up which may cause eventual RTO Timeouts in local MSTCP.
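A sketch of the local throttling idea in the preceding bullet: when the intercept software's forwarding queue grows past a limit it advertises a zero ( or very small ) window towards local MSTCP, and restores a normal window once the queue drains; the queue thresholds, the 64 Kbytes resume window and the callable name are illustrative assumptions.

```python
MAX_QUEUE_PACKETS = 64     # assumed threshold before throttling local MSTCP
RESUME_QUEUE_PACKETS = 16  # assumed level at which normal window advertising resumes

class LocalSenderThrottle:
    def __init__(self, send_window_update_to_mstcp):
        self.send_window_update = send_window_update_to_mstcp  # hypothetical hook
        self.queue = []
        self.throttled = False

    def enqueue(self, packet):
        self.queue.append(packet)
        if not self.throttled and len(self.queue) > MAX_QUEUE_PACKETS:
            self.send_window_update(0)          # advertise zero window: MSTCP stops generating packets
            self.throttled = True

    def dequeue(self):
        packet = self.queue.pop(0) if self.queue else None
        if self.throttled and len(self.queue) <= RESUME_QUEUE_PACKETS:
            self.send_window_update(64 * 1024)  # restore a normal window so MSTCP resumes
            self.throttled = False
        return packet
```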
  • LARGE FTP TRANSFER IMPROVEMENTS QUANTIFICATIONS SIMPLIFIED :
  • minimum link's bandwidth needs be 600 kb/s to transmit 1,000 packets in 20 seconds ( 1,000 * 1,500 * 8 / 20 )
  • minimum link's bandwidth needs be 600 kb/s to transmit 100 packets in 2 seconds ( 100 * 1,500 * 8 / 2 )
• Such ' Square Wave form ' TCPs would be TCP friendly : were the TCP flows traversing the bottleneck link to consist of all such ' Square Wave form ' flows or a mixture of such ' Square Wave form ' flows & existing standard RFC TCP flows, the total rates/ total number of returning ACKs to all such flows/ all such mixture of flows would still be limited to not more than corresponding to the bottleneck link's bandwidth capacity of the path traversed -> such ' Square Wave form ' TCP flows could be incrementally deployed over the external Internet, maintain/ retain their attained transmit rate despite packet drops caused by other existing standard RFCs TCP flows &/or the ' saw-tooth ' effect of the mixture of flows &/or public Internet congestion packet drops &/or BER packet corruptions ( bit error rates ) while remaining TCP friendly to all such ' Square Wave form ' TCP flows &/or other existing standard RFCs TCP flows ( Note : new TCP flows could in any event almost always begin their transmit
• An alternative method, without utilizing modified TCP, to pre-empt the ' saw-tooths ' phenomena above is to set the sender TCP's maximum send window size ie the TCPWindowSize system parameter value ( &/or various other related parameter values ) so that sender TCP's maximum possible Bandwidth Delay Product ( max window size / RTT ) value would never exceed the link's physical bandwidths, thus there could not be congestion packet drops, assuming this TCP flow is the only flow utilizing the link at the time.
• Standard RFC TCPs' data transfer throughput performs badly over paths/ networks with high congestion drop rates &/or high BER rates ( physical transmission bit error rates ), especially in long distance fat pipe networks ( LFN ) with high RTT values & very large bandwidth paths.
• Standard RFC TCPs' inherent AIMD ( additive increase multiplicative decrease ) sawtooth transmission waveform, constantly fluctuating/ surging between 0% - much over 100% of the physical link's/ bottleneck link's bandwidth capacity, could also itself contribute to packet drops.
• TCP halves its Congestion Window CWND size, thus halving its transmission rate, upon packet loss events as notified via 3 DUP ACKs Fast Retransmission requests or RTO Retransmission Timeout.
• TCP also couldn't discern non-congestion-related causes of packet drop events such as BER effects, & treats all packet loss events as being caused by congestions of the path/ network.
• MinRTT is the latest estimate of the actual totally uncongested RTT between the TCP flow's end points, thus if all flows traversing the congestion-drops node are all such modified TCP flows acting in unison, this particular node here should subsequently be uncongested or near uncongested : minRTT here is simply the value of the smallest RTT observed so far of the modified TCP flow, which would serve as the latest best estimate of the actual physical uncongested RTT of the flow ( obviously if the actual physical uncongested RTT of the flow is known, or provided beforehand, then it should or could be used instead ).
• the total number of transmitted in-flights-bytes transmitted into the network during the RTT of this particular 3rd DUP ACK triggering Fast Retransmission, ie the total number of transmitted in-flights-bytes transmitted between the time of transmission of the packet with the same SeqNo as the 3rd returning DUP ACK triggering Fast Retransmission and the time of receipt of this particular 3rd DUP ACK, could be derived by maintaining a time-ordered event entries list ( ie purely based on the order of their transmittal into the network ) consisting of triplet fields : SeqNo of the packet sent, TimeSent, and total_number_of_bytes of this packet including encapsulation/ header.
• the RTT value of the 3rd DUP ACK packet with a particular Acknowledgement Number could be derived as the present arrival time of this present 3rd DUP ACK - TimeSent of the data carrying packet with the same SeqNo as the present 3rd returning DUP ACK.
  • the total transmitted in-flights- bytes could be derived as the sum of all the total_number_of_bytes fields of all entries between the event list's entry with same SeqNo as the returning 3 rd DUP ACK , and the event list's very last entry.
• This event list size could be kept small by removing all entries with SeqNo < the 3rd DUP ACK's ACKNo.
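A sketch of the time-ordered event list and the two derivations described in the preceding bullets ( the RTT of the 3rd DUP ACK, and the total in-flight bytes at that moment ); the field names follow the text, and real sequence-number wrap-around handling is omitted.

```python
import time
from collections import OrderedDict

class SentEventList:
    """Time-ordered list of SeqNo -> (TimeSent, total_number_of_bytes) for
    every packet transmitted, kept in transmission order."""

    def __init__(self):
        self.events = OrderedDict()

    def record_sent(self, seq_no, total_bytes):
        self.events[seq_no] = (time.monotonic(), total_bytes)

    def rtt_of_dup_ack(self, ack_no):
        """RTT = arrival time of the 3rd DUP ACK - TimeSent of the packet with
        the same SeqNo as that ACK's ACKNo."""
        time_sent, _ = self.events[ack_no]
        return time.monotonic() - time_sent

    def in_flight_bytes(self, ack_no):
        """Sum of total_number_of_bytes from the entry whose SeqNo equals the
        DUP ACK's ACKNo through the very last entry (ie everything sent in that RTT)."""
        counting = False
        total = 0
        for seq, (_, nbytes) in self.events.items():
            if seq == ack_no:
                counting = True
            if counting:
                total += nbytes
        return total

    def trim(self, ack_no):
        """Keep the list small: drop every entry with SeqNo < the ACKNo."""
        for seq in [s for s in self.events if s < ack_no]:
            del self.events[seq]
```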
• CWND here would only be incremented in the ratio of the arriving 3rd DUP ACK's RTT/minRTT * the number of sent segment bytes acked by this arriving 3rd DUP ACK, rounded to the nearest byte or with fractions carried forward ( instead of the usual standard RFCs TCP increment by the number of sent segment bytes acked by arriving new ACKs ) : this is continued for all subsequent multiple same or incremented ACKNo DUP ACKs or new ACKs, until the reduction is achieved whereupon this reduction process ceases.
  • This has the effect of smoothing the in-flights-bytes reduction process , so there is still an appropriately reduced continuous transmissions & reception of new packets throughout the in- flights-bytes reduction process.
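A sketch of the smoothed reduction just described: during the reduction phase each arriving ACK only opens the window by a fraction of the bytes it acknowledges, so the outstanding in-flight bytes drain gradually instead of transmission stalling; the fraction used here is minRTT divided by the RTT of the triggering DUP ACK, on the assumption that a factor below 1 is what the ratio in the earlier bullet intends.

```python
def reduction_window_credit(bytes_acked, min_rtt_ms, dup_ack_rtt_ms):
    """Bytes of new transmission allowed for this ACK while the in-flight-bytes
    reduction is in progress. Assumption: the intended factor is
    minRTT / RTT (< 1), so each ACK releases less than it acknowledged and the
    outstanding in-flight bytes shrink smoothly toward in_flight * minRTT/RTT."""
    return int(bytes_acked * (min_rtt_ms / dup_ack_rtt_ms))

# example: 1460 bytes acked, minRTT 200 ms, RTT at the 3rd DUP ACK 400 ms
print(reduction_window_credit(1460, 200, 400))   # -> 730 bytes may be newly transmitted
```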
  • the congestion drop/s notification event caused by RTO Timeout Retransmissions could be :
• a subsequent congestion drop notification event eg subsequent multiple DUP ACKs with the unchanged same ACKNo, a third DUP ACK with a new incremented ACKNo, ( or even RTO Timeout Retransmission eg detected by TCP retransmitting without a 3rd DUP ACK triggering Fast Retransmission ) must allow the existing ' in-flight-bytes reduction ' process/ procedure to be completed if the new computation does not require bigger reductions ( ie does not result in smaller total in-flights-bytes ), otherwise this new process/ procedure may optionally take over. ( could also alternatively allow such process/ procedure to commence only once per RTT, based on a particular ' marked ' SeqNo returning then checking if there had been any congestion drop notification event/s during this RTT ).
  • modified TCP here could derive the RTT of the particular return ACK ( or return ACK immediately prior to the RTO Timeout Retransmission ) causing congestion drop/s event notification
  • modified software could further discern if the same event above was actually a ' false ' congestion drop/s notification & react differently if so : ie if the RTT associated with the particular congestion drop/s event notification is the same as the latest estimated uncongested RTT of the end points ( or if known/ provided before hand ), or even not differ by certain specified variance amount within bounds of a single node's smallest buffer capacity equivalent in milliseconds , then this particular congestion drop/s notification could rightly be treated as arising from physical transmission errors/ corruption/ BER ( bit error rates ) instead, & modified software could simply retransmit the notified dropped segment/ packet without needing to cause/ enter into any in-flights-bytes reductions process whatsoever.
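A sketch of the BER-versus-congestion discrimination just described; the 50 ms variance bound standing in for ' a single node's smallest buffer capacity equivalent in milliseconds ' is an illustrative assumption.

```python
def is_false_congestion_drop(event_rtt_ms, uncongested_rtt_ms, variance_bound_ms=50.0):
    """True if the RTT seen at the congestion-drop notification is (within the
    allowed variance) no larger than the uncongested RTT, ie the path is not in
    fact buffering, so the loss is better treated as physical transmission
    error / BER: retransmit the segment but skip any in-flight-bytes reduction."""
    return event_rtt_ms <= uncongested_rtt_ms + variance_bound_ms

if is_false_congestion_drop(event_rtt_ms=210, uncongested_rtt_ms=200):
    pass  # retransmit only; do not reduce CWND / in-flight bytes
```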
  • modified TCP here would not necessarily automatically need to reduce/ halve/ resets CWND size upon congestion drop/s notification event caused by new 3 rd DUP ACK/ subsequent same ACKNo multiple DUP ACKs following the new 3 rd DUP ACK and/or RTO Timeout Retransmissions : modified TCP here needs only ever necessarily reduce CWND size appropriately upon congestion drop/s notification event/s to reduce the number of outstanding in-flights-bytes to appropriately derived values.
• any bottleneck link would continuously forward sent packets towards receiver TCPs at the bottleneck's physical line rates, regardless of the buffer residency occupation levels at the bottleneck node &/or congestion drop/s occurrences, at any time -> thus the sum of all the bytes acknowledged during the RTT period/s associated with the returning ACKs received at all the sender TCPs would be almost invariably equal to the bottleneck link's physical bandwidth at any time if the bottleneck bandwidth is fully utilised.
  • TCP's congestion avoidance algorithm should strive to keep the bandwidth utilisation levels at close to 100% of the bottleneck/s' link bandwidth as far as possible, instead of existing standard RFC TCP's gross under-utilisation caused by CWND size halving upon congestion drop/s notification event/s .
• the physical bottleneck link of a TCP connection over the Internet is usually either the receiver TCP's last mile transmission media or the sender TCP's first mile transmission media : these are usually 56Kbs/ 128Kbs PSTN dial-up or typical 256Kbs/ 512Kbs/ 1Mbs/ 2Mbs ADSL links.
• the bottleneck link could only forward all the flows' traffics at maximum line rates limited by its bandwidth -> increasing the sending rates beyond that of the current bottleneck link's line rates ( the current bottleneck link may change from time to time depending on the network's traffics ) will not result in any higher throughputs of the TCP flow/s beyond the bottleneck link's physical line rates.
  • TCPs here could advantageously be modified to not send at a rate greater than the bottleneck link's maximum possible physical line rates. To do so would only cause the ' extra ' beyond bottleneck's physical line rate's amount of packets/ bytes sent during each RTT to be inevitably buffered or dropped somewhere along the two end points of the TCP flow.
  • the successive RTT values could be readily derived, since existing standard RFC TCPs already performs calculations/ derivations of successive RTT values based the a ' marked ' TCP packet with particular SeqNo for each successive RTT periods.
  • the throughput rate for each successive RTT could be derived by first recording or deriving the total number of in-flight bytes transmitted into the network during the RTT of the particular 'marked' SeqNo packet, i.e. the total number of bytes transmitted between the time of transmission of the packet with the particular 'marked' SeqNo and the time of its returning ACK (or SACK); this could be derived by maintaining a time-ordered event entries list (i.e. ordered purely by the order of transmittal into the network) consisting of the triplet fields: SeqNo of the packet sent, TimeSent, and total_number_of_bytes of this packet including encapsulation/ header.
  • the RTT value of the particular 'marked' packet with a particular SeqNo could be derived as the arrival time of the present returning ACK (or SACK) minus the TimeSent of the data-carrying packet with the particular 'marked' SeqNo.
  • the total transmitted in-flight bytes could be derived as the sum of all the total_number_of_bytes fields of all entries between the event list's entry with the same SeqNo as the returning 3rd DUP ACK and the event list's very last entry.
  • this event list's size could be kept small by removing all entries with SeqNo < the 3rd DUP ACK's ACKNo.
  • a simplified alternative, in place of calculating the total number of transmitted in-flight bytes, would be to approximate it as (largest SeqNo transmitted + number of data bytes of this largest-SeqNo packet - largest ACKNo received) at the time of arrival of the 3rd DUP ACK: this gives the total number of in-flight data-segment bytes, i.e. pure data segments in flight, not including encapsulations/ headers/ non-data-carrying control packets.
  • the throughput rate for the RTT here could hence be computed as the above-derived total number of in-flight bytes transmitted into the network during the RTT period, divided by this RTT value (in seconds); a derivation sketch follows below.
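As a minimal illustrative sketch of the event-list derivation in the preceding bullets, assuming a simple in-memory list; the names SentEvent, throughput_for_marked_seqno and in_flight_approximation are hypothetical and introduced here only for illustration:

```python
from collections import namedtuple

# Triplet recorded for every data packet forwarded, in order of transmittal.
SentEvent = namedtuple("SentEvent", "seq_no time_sent total_bytes")


def throughput_for_marked_seqno(events, marked_seq_no, ack_arrival_time):
    """Throughput (bytes/second) over the RTT of the 'marked' SeqNo packet."""
    # RTT = arrival time of the returning ACK/SACK - TimeSent of the marked packet.
    marked_index = next(i for i, e in enumerate(events) if e.seq_no == marked_seq_no)
    rtt = ack_arrival_time - events[marked_index].time_sent
    # In-flight bytes = sum of total_number_of_bytes of every entry from the
    # marked entry up to the event list's very last entry.
    in_flight_bytes = sum(e.total_bytes for e in events[marked_index:])
    return in_flight_bytes / rtt, rtt, in_flight_bytes


def in_flight_approximation(largest_seq_sent, largest_seq_data_len, largest_ack_rcvd):
    """Simplified alternative: pure data-segment bytes in flight, ignoring headers."""
    return largest_seq_sent + largest_seq_data_len - largest_ack_rcvd


# Example: three 1500-byte packets in flight, marked packet's ACK returns after 200 ms.
events = [SentEvent(1000, 0.000, 1500), SentEvent(2500, 0.010, 1500), SentEvent(4000, 0.020, 1500)]
rate, rtt, in_flight = throughput_for_marked_seqno(events, 1000, ack_arrival_time=0.200)
# rate == 4500 bytes / 0.2 s == 22500 bytes/second for this RTT period
```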
  • the RTT value associated with the period when the largest throughput rate maxT was attained is hereinafter known as RTT_maxT, and the total number of transmitted in-flight bytes associated with that same period is hereinafter known as In_Flights_Bytes_maxT.
  • the test formula may further include a mathematical variance tolerance value, e.g. "IF [total number of in-flight bytes during this RTT period / In_Flights_Bytes_maxT] > [RTT value in milliseconds during this period / RTT_maxT in milliseconds] * variance tolerance (e.g. 1.05/ 1.10 ...etc)"; a sketch of this test follows below.
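A one-line sketch of this test, assuming the quantities are tracked as described in the preceding bullets (the function and parameter names are hypothetical):

```python
def exceeds_bottleneck_estimate(in_flight_bytes, in_flight_bytes_max_t,
                                rtt_ms, rtt_max_t_ms, tolerance=1.05):
    """True when in-flight growth outpaces RTT growth by more than the variance
    tolerance, relative to the period when the largest throughput maxT was attained."""
    return (in_flight_bytes / in_flight_bytes_max_t) > (rtt_ms / rtt_max_t_ms) * tolerance
```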
  • modified TCP could then no longer need to continuously probe for the path's bandwidth as aggressively as existing RFC standard TCPs' slow-start exponential CWND increment/ congestion-avoidance linear CWND increment per RTT, which invariably strives to cause unnecessary congestion packet drops and/or burst packet drops.
  • modified TCP may thereafter limit any subsequent increment in CWND size (optionally and/or effective window size) in any subsequent RTT period to not more than e.g. 5% of the [CWND size (optionally and/or effective window size) associated with maxT at the time maxT (which now equals the bottleneck line rate) was attained] * (the latest previous RTT value in milliseconds / RTT_maxT in milliseconds). If, however unlikely, the throughput rate in any subsequent RTT becomes greater than maxT, THEN maxT would be updated and the bottleneck line rate determination process repeats again (see the sketch below).
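A minimal sketch of the capped CWND growth and maxT refresh just described, with hypothetical names and the 5% cap taken from the example above:

```python
def capped_cwnd_increment(cwnd_at_max_t, latest_rtt_ms, rtt_max_t_ms, cap_fraction=0.05):
    """Largest CWND (or effective window) increment permitted in the next RTT period."""
    return cap_fraction * cwnd_at_max_t * (latest_rtt_ms / rtt_max_t_ms)


def refreshed_max_t(current_max_t, latest_throughput):
    """maxT is updated whenever a later RTT attains a larger throughput, after which
    the bottleneck line rate determination process repeats."""
    return max(current_max_t, latest_throughput)
```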
  • modified TCP will not unnecessarily and aggressively increment CWND size and/or effective window size so as to cause congestion drops and/or burst packet drops, beyond what is necessarily required to keep the bottleneck link busy at its line rate.
  • modified TCP may ensure the packet generation/ packet sending rate will be at the corresponding maxT rate (whether maxT has already attained a rate equal to the bottleneck's true line rate, or is just the latest largest maxT) at all times, instead of a packet generation/ sending rate allowed/ 'clocked' out by the returning ACK (or SACK) rate, subject to clearing of 'extra' in-flight bytes and/or appropriate rate reductions for dropped packets as described upon congestion drop/s notification event/s: i.e. modified TCPs optionally will be made to generate/ transmit packets at the latest maxT rate, not limited by the latest returning ACK (or SACK) rate, unless required to effect appropriate rate reductions to clear/ reduce in-flight bytes and/or reduce the rate corresponding to the number of dropped packets (e.g. reduce the packet generation/ transmitting rate, in equivalent bits per second, to e.g. maxT * minRTT/ this period's RTT value, or ...); a pacing sketch follows below.
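As an illustrative pacing sketch under the assumption that rates are expressed in bytes per second; the class and method names are hypothetical and this is not presented as the specification's own implementation:

```python
import time


class MaxTPacer:
    """Paces packet sending at a target rate (e.g. the latest maxT), independently of
    the returning ACK/SACK clocking rate; the rate can be reduced upon a congestion
    drop notification, e.g. to maxT * min_rtt / latest_rtt."""

    def __init__(self, rate_bytes_per_s):
        self.rate = rate_bytes_per_s
        self.next_send = time.monotonic()

    def reduce_for_congestion(self, max_t, min_rtt_ms, latest_rtt_ms):
        self.rate = max_t * (min_rtt_ms / latest_rtt_ms)

    def send_paced(self, send_fn, packet):
        # Space transmissions so that len(packet) / rate seconds elapse between sends.
        now = time.monotonic()
        if now < self.next_send:
            time.sleep(self.next_send - now)
        send_fn(packet)
        self.next_send = max(now, self.next_send) + len(packet) / self.rate
```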
  • the invention as described in the immediately preceding paragraphs could be implemented as an independent TCP packet intercept software/ agent, wherein the software keeps a copy of a sliding window's worth of all sent data segments forwarded, performs all Fast Retransmit and/or RTO Timeout retransmissions, and/or rate-paces the onward forwarding of packets intercepted from/ towards the local TCP (according to the maxT value), together with the forwarding rate adjustment processes upon congestion drop notification events.
  • the intercept software intercepts each and every packet coming from TCP/ destined to MSTCP.
  • the software may even perform RTO Timeout Retransmission completely, instead of MSTCP (by incorporating RTO calculations from historical returning ACKs' RTT values): the software could thus 'spoof ACK' every single packet immediately upon receiving the packet/s from TCP for forwarding -> TCP now does not even do RTO Timeout Retransmissions. The software may further 'delay' spoofing ACKs when receiving packet/s from TCP, as a technique to control TCP packet generation/ TCP packet sending rates.
  • the software may instead either simulate a 'mirror CWND mechanism/ mirror effective window mechanism' within the software itself, OR instead give equivalent effects in other equivalent ways, such as reduction of in-flight bytes via e.g. rate pacing, to control/ adjust other parameter values like largestRcvACKNo and largestSentSeqNo, ensuring their subtraction difference is of the required size, ....etc.
  • the software may also implement various standard TCP techniques, such as Checksum verification on each and every intercepted packet, SeqNo Wrap-Around detection and comparison, and TimeStamp Wrap-Around detection and comparison, as defined in existing standard RFCs ...etc.
  • next, 'mark' the SeqNo of the very latest forwarded packet (if there are data packets, not pure ACKs, forwarded prior to the previous 'marked' SeqNo returning; otherwise wait for the next data packet to be forwarded), etc. and so forth.
  • DupNumData is updated in a similar manner to DupNum, and DupNum processing now needs to distinguish between pure DUP ACK packets and packets with data payload.
  • min(RTT) = minimum of [min(RTT), last measured round-trip time RTT].
  • for receiver-based TCP modifications/ receiver-based TCP rate controls, OTTs and min(OTT) could be utilised in place of the sender-based RTTs and min(RTT), which could benefit from the sender's Timestamp option; OR receiver-based TCP may utilise an inter-packet-arrivals technique instead of depending on the need to ascertain OTTs and min(OTT).
  • with the Intercept Software module now taking over all of the existing MSTCP's DUP ACK Fast Retransmit and RTO Timeout retransmission functions, the Intercept Software could now have complete total control over MSTCP's new packet generation/ transmit rates via immediate spoofing/ temporary halting of SPOOF ACKs back to MSTCP for packets intercepted, and/or setting the receiver window size field within the SPOOF ACKs to '0' to halt MSTCP packet generation.
  • NextGenFTP really should 'pause' for an appropriate interval upon packet drop events such as 3 DUP ACKs, to clear all of its own 'extra' sent in-flight packets that are being buffered (whereas all existing regular TCPs/ FTPs drastically halve their CWND, causing the severe, unnecessary and well documented throughput problems).
  • reducing CWND size by a factor of {latest RTT value (or OTT where appropriate) - recorded min(RTT) value (or min(OTT) where appropriate)} / min(RTT), OR reducing CWND size by a factor of [{latest RTT value (or OTT where appropriate) - recorded min(RTT) value (or min(OTT) where appropriate)} / latest RTT value], i.e. CWND is now set to CWND * [1 - [{latest RTT value (or OTT where appropriate) - recorded min(RTT) value (or min(OTT) where appropriate)} / latest RTT value]], OR setting CWND size to CWND * min(RTT) (or min(OTT) where appropriate) / latest RTT value (or OTT where appropriate), ....etc depending on the desired algorithm devised]; the alternatives are sketched below.
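The three reduction alternatives above can be sketched as follows (RTT may be read as OTT where appropriate; the first variant's reading, reducing CWND *by* the stated fraction, is an interpretation, and note the second and third variants give the same value algebraically):

```python
def cwnd_reduced_by_queue_over_min(cwnd, latest_rtt, min_rtt):
    """Variant 1 (interpretation assumed): reduce CWND by the fraction
    (latest_rtt - min_rtt) / min_rtt."""
    return cwnd * (1.0 - (latest_rtt - min_rtt) / min_rtt)


def cwnd_reduced_by_queue_over_latest(cwnd, latest_rtt, min_rtt):
    """Variant 2: CWND * [1 - (latest_rtt - min_rtt) / latest_rtt]."""
    return cwnd * (1.0 - (latest_rtt - min_rtt) / latest_rtt)


def cwnd_scaled_by_min_over_latest(cwnd, latest_rtt, min_rtt):
    """Variant 3: CWND * min_rtt / latest_rtt (same result as variant 2)."""
    return cwnd * min_rtt / latest_rtt
```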
  • note that min(RTT) is the most current recorded estimate of the uncongested RTT of the path.
  • sent SeqNos are to be injected at the rate corresponding to the returning ACKs-clocking rate, and must not cause 'accelerative' CWND increments/ extra accelerative exponential or linear new packet/s injection beyond the rate of the returning ACKs-clocking rate.
  • Linux TCP should not increment CWND whatsoever, even if the incoming ACK now advances the Sliding Window's left edge... i.e. Linux TCP could inject new packets into the network at the same rate as the returning ACKs-clocking rate, BUT not 'exponentially double' or 'linearly increase' beyond the returning ACKs-clocking rate (easily implemented by modifying all CWND increment code lines to first check whether the countdown 'pause' > 0, and if so bypass the increment; see the sketch below).
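The guard described above would in practice be applied around the CWND increment sites in the Linux TCP source; the following stand-alone stand-in only illustrates the check itself, with hypothetical names:

```python
pause_countdown = 0  # > 0 while a drop-triggered 'pause' is still counting down


def maybe_increment_cwnd(cwnd, increment):
    """Apply a CWND increment only when no 'pause' countdown is in progress."""
    if pause_countdown > 0:
        return cwnd  # bypass the increment for the duration of the pause
    return cwnd + increment
```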
  • the test bed should be (compared against an unmodified Linux TCP server):
  • modified Linux TCP server [+ e.g. 2/ 5/ 20% simulated packet drops + e.g. 100/ 250/ 500 ms RTT latency] -> router -> existing Linux TCP client
  • the link between the router and the client could be 500kbps; the router could have a 10 or 25 packet buffer.
  • sender and receiver window sizes e.g. 32/ 64/ 256 Kbytes.
  • STEP 2 here could be optional but is preferred; it could be added after tests with only STEP 1].
  • this Module (taking over all fast retransmit functions from MSTCP, and modifying the incoming ACKNos of incoming DUP ACKs so MSTCP never gets to know of any DUP ACK events whatsoever) should retransmit all 'missing gap packets' indicated by the SACK fields of incoming same-SeqNo DUP ACKs, keep a list of all SeqNos retransmitted during this same-SeqNo multiple DUP ACKs series, and not needlessly retransmit what has already been retransmitted during the subsequent same series of same-SeqNo DUP ACKs, EXCEPT where a subsequent same-SeqNo DUP ACK now indicates receipt of retransmitted SeqNo packet/s on this 'Retransmitted List': in which case the Module should only again retransmit the 'earlier retransmitted missing gap packets' (i.e. those already on the Retransmitted List) with SeqNo < the largest retransmitted SeqNo indicated as received by the newly arriving same-SeqNo DUP ACKs (see the sketch below).
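A minimal sketch of the Retransmitted List bookkeeping described above; SeqNos are treated as plain integers, wrap-around is ignored, and all names are hypothetical:

```python
def gap_packets_to_retransmit(sack_missing_gaps, sack_received, retransmitted):
    """Select which 'missing gap' SeqNos to (re)send for an incoming same-SeqNo DUP ACK.

    sack_missing_gaps  SeqNos the SACK fields show as still missing at the receiver
    sack_received      SeqNos the SACK fields show as received
    retransmitted      set of SeqNos already retransmitted during this DUP ACK series
                       (updated in place)
    """
    # Never needlessly resend what is already on the Retransmitted List ...
    to_send = [s for s in sack_missing_gaps if s not in retransmitted]

    # ... EXCEPT when the DUP ACK shows receipt of SeqNos we had already retransmitted:
    # then resend the earlier-retransmitted gap packets below the largest such SeqNo.
    acked_retransmissions = retransmitted & set(sack_received)
    if acked_retransmissions:
        threshold = max(acked_retransmissions)
        to_send += [s for s in sack_missing_gaps if s in retransmitted and s < threshold]

    retransmitted.update(to_send)
    return sorted(set(to_send))
```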
  • this Module could instead again retransmit afresh all 'missing gap packets' indicated by the SACK fields of incoming same-SeqNo DUP ACKs.
  • modified Linux TCP will not cause sudden 'burst' transmissions utilising the returning ACKs-clocking credits accumulated during the 'triggered pause' interval, which would again immediately congest-drop the link: rather, after the 'pause' has counted down it is only to transmit at the subsequent returning ACKs-clocking rate (i.e. not including any of the returning ACKs-clocking tokens accumulated during the 'pause' interval).
  • the test bed should be (compared against e.g. an unmodified Linux/ FreeBSD/ Windows TCP server):
  • modified Linux TCP server -> (could be implemented using IPCHAIN) simulated 1-in-10 packet drops, 200 ms RTT latency (larger preferred) -> router -> existing Linux TCP client
  • sender and receiver window sizes 64 Kbytes (larger preferred).
  • the receiver TCP source code could be modified directly (or similarly the Intercept Monitor could be adapted to perform/ work around to achieve the same), and this will even work with all existing RFC TCPs:
  • the sender's CWND size could be controlled e.g. so as not to be halved upon fast retransmit 3 DUP ACKs... or at dictated timed CWND size increments according to the receiver's detection of path congestion levels (uncongested/ onset of buffer delay at or above certain values/ congestion packet drops ...etc).
  • the receiver may also utilise a sender CWND size tracking method to help determine the multiple DUP ACKs generation rate, and may also include 1 byte of data in certain generated ACKs so that the sender will notify the receiver of precisely which of the DUP ACKs were received at the sender TCP.
  • the 1 same-SeqNo multiple DUP ACKs could cause a Gigabyte to be transferred to completion staying with the 1 same-SeqNo series of DUP ACKs, or the SeqNo may be incremented to a larger (or the largest) SeqNo successfully received, at any time before effective window size exhaustion, to 'shift' the sender's window edges (this may be combined with technique/s to keep the sender's CWND size sufficiently large at all times).
  • the receiver TCP never generates 3 DUP ACKs, just letting the sender RTO Timeout to retransmit (preferably with sufficiently large window-scaled sizes negotiated, to ensure the sender's continuous transmissions without being halted by unacked retransmissions held up before the longer RTO Timeout period is triggered); BUT the sender's CWND resets to '0' or '1' upon RTO Timeout, so the receiver needs to ensure rapid exponential restoration of the sender's CWND via a number of follow-on same DUP ACKs after detecting RTO Timeout retransmissions.
  • NOTES:
  • routers may conveniently set their buffers an order of magnitude smaller... like 50 ms (see published research reports, findable via a Google search, on the improved efficacy of such small buffer settings).
  • TCPs could simply rate-throttle/ 'pause' to immediately clear the onset of any buffering, and/or reduce CWND size appropriately to enable clearing of the onset of any buffering.
  • the receiver TCPs above may preferably utilise SACK fields to convey blocks of received SeqNos beyond the 'clamped' same SeqNo of the series of multiple DUP ACKs; the SACK fields may further be utilised to convey occasional subsequent missing 'gap' packets (the RFCs permit 3 blocks to be SACKed, and SACKed SeqNos will not be unnecessarily retransmitted by existing RFC TCPs).
  • receiver TCPs here could utilise 'SACK field blocks', generating 'timed' 'clamped' SeqNos for the series of same-SeqNo DUP ACKs (thus controlling the sender's Sliding Window Snd.UNA value to control effective window sizes, and the number of generated same-SeqNo multiple DUP ACKs to control the sender's CWND size), setting receiver window sizes, sender CWND size tracking techniques...
  • the modified TCPs may each instead reduce their CWND size to e.g. CWND * (latest RTT - min(RTT)) / latest RTT, OR to e.g. CWND * (latest RTT - min(RTT)) / min(RTT) ...etc, depending on the desired algorithms devised....
  • receiver TCPs could have complete control of the sender TCPs' transmission rates via their total complete control of the same-SeqNo series of multiple DUP ACKs generation rates/ spacings/ temporary halts ...etc according to the desired algorithms devised, e.g. multiplicative increase and/or linear increase of the multiple DUP ACK rate every RTT (or OTT) so long as the RTT (or OTT) remains less than the currently latest recorded min(RTT) (or currently latest recorded min(OTT)) ...etc.
  • receiver-based modified TCP (or Intercept Software/ Forwarding Proxy ...etc) may 'pause' for an algorithmically devised period, and during this period the receiver-based modified TCP may 'freeze' generation of additional extra DUP ACKs except to match the incoming new SeqNo packet/s (i.e. generating 1 DUP ACK for each 1 of the incoming new SeqNo packets); this would allow reduction/ clearing/ prevention of the sender's extra total in-flight packets from being buffered along the path.
  • receiver-based TCP could include e.g. 1 byte of garbage data in 'selected marked' DUP ACK/s, to help the receiver detect/ compute the RTT/ OTT/ total in-flight packets ...etc using the sender's ACKNo and SeqNo ...etc subsequently received.
  • the countdown global variable = minimum of (latest RTT of the packet triggering the 3rd DUP ACK fast retransmit or triggering the RTO Timeout - min(RTT), 300ms).
  • CWND could, initially upon the 3rd DUP ACK fast retransmit request triggering the 'pause' countdown, be set either to the unchanged CWND (instead of to '1 * MSS') or to a value equal to the total outstanding in-flight packets at this very instant in time, and further be restored to a value equal to this instantaneous total of outstanding in-flight packets when the 'pause' has counted down [optionally MINUS the total number of additional same-SeqNo multiple DUP ACKs (beyond the initial 3 DUP ACKs triggering fast retransmit) received before the 'pause' counted down, at this instantaneous 'pause' counted-down time (i.e. equal to the latest largest forwarded SeqNo - the latest largest returning ACKNo at this very instant in time)] -> modified TCP could now stroke out a new packet into the network corresponding to each additional multiple same-SeqNo DUP ACK.
  • alternatively, CWND could initially upon the 3rd DUP ACK fast retransmit request triggering the 'pause' countdown be set to '1 * MSS', and then be restored to a value equal to this instantaneous total of outstanding in-flight packets MINUS the total number of additional same-SeqNo multiple DUP ACKs when the 'pause' has counted down -> this way, when the 'pause' has counted down, modified TCP will not 'burst' out new packets but will only start stroking out new packets into the network corresponding to the subsequent new returning ACK rates.
  • this bounding of the 'pause' by the RTT-derived term is to ensure that, even in the very rare, unlikely circumstance where the nodes' buffer capacities are extremely small (e.g. in a LAN or even a WAN), the 'pause' period will not be unnecessarily set too large, such as e.g. the specified 300 ms value. Also, instead of the above example 300 ms, the value may instead be algorithmically derived dynamically for each different path; a sketch of the pause computation follows below.
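A sketch of the 'pause' interval computation under the interpretation above (milliseconds assumed; the 300 ms cap is the example value from the text and could instead be derived dynamically per path):

```python
def pause_interval_ms(latest_rtt_ms, min_rtt_ms, cap_ms=300.0):
    """'Pause' countdown triggered by a 3rd DUP ACK fast retransmit or an RTO timeout:
    the measured queueing delay (latest RTT - min RTT), never more than cap_ms, so a
    tiny-buffer path (e.g. a LAN) pauses only briefly rather than for the full cap."""
    return min(max(latest_rtt_ms - min_rtt_ms, 0.0), cap_ms)
```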
  • a simple method to enable easy widespread implementation of a ready guaranteed-service-capable network would be for all (or almost all) routers and switches at a node in the network to be modified/ software upgraded to immediately generate a total of 3 DUP ACKs to the traversing TCP flows' sources, to indicate to the sources that they should reduce their transmit rates when the node starts to buffer the traversing TCP flows' packets (i.e. the forwarding link is now 100% utilised and the aggregate of the traversing TCP flows' sources' packets starts to be buffered).
  • the 3 DUP ACKs generation may alternatively be triggered e.g. when the forwarding link reaches a specified utilisation level, e.g. 95%/ 98% ...etc, or some other specified trigger condition. It does not matter even if the packet corresponding to the 3 pseudo DUP ACKs is actually received correctly at the destination, as subsequent ACKs from destination to source will remedy this.
  • the generated 3 DUP ACK packets' fields contain the minimum required source and destination addresses and SeqNo (which could be readily obtained by the node from the traversing flow's packets).
  • the pseudo 3 DUP ACKs' ACKNo field could be obtained/ derived e.g. from the switches'/ routers' maintained table of the latest largest ACKNo generated by the destination TCP for the particular uni-directional source/ destination TCP flow/s; alternatively, the switches/ routers may first wait for a destination-to-source packet to arrive at the node and then obtain/ derive the 3 pseudo DUP ACKs' ACKNo field by inspecting the returning packet's ACK field (a construction sketch follows below).
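A minimal sketch of how a router/switch might assemble the 3 pseudo DUP ACKs; this builds plain descriptors rather than wire-format packets, and the field and variable names are assumptions for illustration only:

```python
def make_pseudo_dup_acks(flow, ack_no, count=3):
    """Build `count` identical pseudo DUP ACK descriptors for one uni-directional
    TCP flow; `ack_no` is the latest largest ACKNo seen from the destination (or
    read from a returning destination-to-source packet's ACK field)."""
    dup_ack = {
        "src": flow["dst_addr"],   # the pseudo ACK appears to come from the destination
        "dst": flow["src_addr"],   # ... and is sent towards the flow's source
        "sport": flow["dst_port"],
        "dport": flow["src_port"],
        "flags": "ACK",
        "ack_no": ack_no,          # unchanged ACKNo, so it counts as a duplicate ACK
        "payload_len": 0,          # pure ACK, no data
    }
    return [dict(dup_ack) for _ in range(count)]


# Triggered e.g. when the node's forwarding link begins buffering this flow's packets.
flow_state = {"src_addr": "10.0.0.1", "dst_addr": "10.0.0.2",
              "src_port": 40000, "dst_port": 80, "latest_dst_ack_no": 123456}
three_dup_acks = make_pseudo_dup_acks(flow_state, flow_state["latest_dst_ack_no"])
```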
  • existing RED and ECN ...etc could similarly have their algorithms modified as outlined above, enabling real-time guaranteed-service-capable networks (or networks with no congestion drops and/or much, much less buffer delay).
  • the Module builds a list of SeqNo/ packet copy/ systime for all packets forwarded (well-ordered in SeqNo), and performs fast retransmit/ RTO retransmit from this list. All items on the list with SeqNo < the current largest received ACK will be removed, as will all SACKed SeqNos.
  • the Software could emulate MSTCP's own window increment/ congestion control/ AIMD mechanisms by allowing at any time a maximum number of packets in flight equal to the emulated/ tracked MSTCP CWND size: as an overview outline example (among many possible), this could be achieved e.g. by assuming that, driven by the returning ACKs, the emulated/ tracked pseudo-mirror CWND size doubles in each RTT when there has not been any 3 DUP ACK fast retransmit, but once this has occurred the emulated/ tracked pseudo-mirror CWND size would only be incremented by 1 * MSS per RTT.
  • this Window software could then keep track of or estimate the MSTCP CWND size at all times, by tracking the latest largest forwarded-onwards MSTCP packet SeqNo and the latest largest network incoming packet ACKNo (their difference gives the total outstanding in-flight packets, which corresponds to MSTCP's CWND value quite well); a tracking sketch follows below.
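The pseudo-mirror CWND emulation and the in-flight tracking of the last two bullets could look roughly as follows; the class and attribute names are hypothetical and the doubling/linear growth schedule is the example given above:

```python
class MirrorCwndTracker:
    """Illustrative pseudo-mirror of MSTCP's CWND, maintained by the intercept
    module from the packets it forwards and the ACKs it sees returning."""

    def __init__(self, mss=1460):
        self.mss = mss
        self.largest_sent_seq = 0   # latest largest forwarded SeqNo + its payload length
        self.largest_rcvd_ack = 0   # latest largest returning ACKNo
        self.mirror_cwnd = mss
        self.seen_fast_retransmit = False

    def on_packet_forwarded(self, seq_no, payload_len):
        self.largest_sent_seq = max(self.largest_sent_seq, seq_no + payload_len)

    def on_ack_received(self, ack_no):
        self.largest_rcvd_ack = max(self.largest_rcvd_ack, ack_no)

    def on_rtt_elapsed(self):
        # Doubling per RTT until a 3 DUP ACK fast retransmit has occurred,
        # thereafter linear growth of 1 * MSS per RTT.
        if self.seen_fast_retransmit:
            self.mirror_cwnd += self.mss
        else:
            self.mirror_cwnd *= 2

    def in_flight_estimate(self):
        # Largest forwarded SeqNo minus largest returning ACKNo tracks MSTCP's
        # outstanding in-flight bytes, and hence its CWND, quite well.
        return self.largest_sent_seq - self.largest_rcvd_ack
```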

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Various techniques of simple modifications to the TCP/IP protocol (or to other suitable protocols), together with associated network switch/router configurations, are immediately implementable on the external Internet to give a network capable of guaranteeing virtually congestion-free services, without requiring the use of existing QoS/MPLS techniques, without requiring any switch/router software within the network to be modified or to contribute towards achieving the end-to-end performance results, and without requiring the provision of unlimited bandwidth at each inter-node link within the network.
EP05806538A 2004-11-29 2005-11-29 Realisation immediate d'un reseau apte a garantir des services virtuellement depourvus d'encombrement: san tcp convivial internet nextgentcp externe (forme d'onde carree) Withdrawn EP1829321A2 (fr)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
GB0426176A GB0426176D0 (en) 2004-11-29 2004-11-29 Immediate ready implementation of virtually congestion free guaranteed service capable network
GB0501954A GB0501954D0 (en) 2005-01-31 2005-01-31 Immediate ready implementation of virtually congestion free guaranteed service capable network: inter-packets-intervals
GB0504782A GB0504782D0 (en) 2005-03-08 2005-03-08 Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet NextGenTCP
GB0509444A GB0509444D0 (en) 2005-03-08 2005-05-09 Immediate ready implementation of virtually congestion free guaranteed service capable network:external internet nextgentcp (square wave form)
GB0512221A GB0512221D0 (en) 2005-03-08 2005-06-15 Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet nextgen TCP (square wave form) TCP friendly
GB0520706A GB0520706D0 (en) 2005-03-08 2005-10-12 Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet nextgenTCP (square wave form) TCP friendly
PCT/IB2005/003580 WO2006056880A2 (fr) 2004-11-29 2005-11-29 Realisation immediate d'un reseau apte a garantir des services virtuellement depourvus d'encombrement: san tcp convivial internet nextgentcp externe (forme d'onde carree)

Publications (1)

Publication Number Publication Date
EP1829321A2 true EP1829321A2 (fr) 2007-09-05

Family

ID=36263750

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05806538A Withdrawn EP1829321A2 (fr) 2004-11-29 2005-11-29 Realisation immediate d'un reseau apte a garantir des services virtuellement depourvus d'encombrement: san tcp convivial internet nextgentcp externe (forme d'onde carree)

Country Status (6)

Country Link
EP (1) EP1829321A2 (fr)
KR (1) KR20070093077A (fr)
AP (1) AP2007004044A0 (fr)
AU (1) AU2005308530A1 (fr)
CA (1) CA2589161A1 (fr)
WO (1) WO2006056880A2 (fr)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8116225B2 (en) 2008-10-31 2012-02-14 Venturi Wireless Method and apparatus for estimating channel bandwidth
AU2009337511A1 (en) * 2009-01-16 2011-09-08 Mainline Net Holdings Limited Maximizing bandwidth utilization in networks with high latencies and packet drops using transmission control protocol
JP6409558B2 (ja) 2014-12-19 2018-10-24 富士通株式会社 通信装置、中継装置、および、通信制御方法
EP3417585B1 (fr) * 2016-05-10 2021-06-30 Samsung Electronics Co., Ltd. Terminal, et procédé de communication associé
CN110178342B (zh) 2017-01-14 2022-07-12 瑞典爱立信有限公司 Sdn网络的可扩缩应用级别监视
US10362166B2 (en) 2017-03-01 2019-07-23 At&T Intellectual Property I, L.P. Facilitating software downloads to internet of things devices via a constrained network
WO2019003235A1 (fr) 2017-06-27 2019-01-03 Telefonaktiebolaget Lm Ericsson (Publ) Production de demande de surveillance à états en ligne pour sdn
WO2019012546A1 (fr) * 2017-07-11 2019-01-17 Telefonaktiebolaget Lm Ericsson [Publ] Mécanisme d'équilibrage de charge efficace pour commutateurs dans un réseau défini par logiciel
CN110213167A (zh) * 2018-02-28 2019-09-06 吴瑞 一种传输控制协议在网络拥塞时的处理方法和装置
CN110661723B (zh) 2018-06-29 2023-08-22 华为技术有限公司 一种数据传输方法、计算设备、网络设备及数据传输系统
US11212227B2 (en) * 2019-05-17 2021-12-28 Pensando Systems, Inc. Rate-optimized congestion management
US11140086B2 (en) 2019-08-15 2021-10-05 At&T Intellectual Property I, L.P. Management of background data traffic for 5G or other next generations wireless network

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7474616B2 (en) * 2002-02-19 2009-01-06 Intel Corporation Congestion indication for flow control
US7190669B2 (en) * 2002-07-09 2007-03-13 Hewlett-Packard Development Company, L.P. System, method and computer readable medium for flow control of data traffic
JP3970138B2 (ja) * 2002-09-09 2007-09-05 富士通株式会社 イーサネットスイッチにおける輻輳制御装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2006056880A2 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10917352B1 (en) 2019-09-04 2021-02-09 Cisco Technology, Inc. Selective tracking of acknowledgments to improve network device buffer utilization and traffic shaping
WO2021045924A1 (fr) * 2019-09-04 2021-03-11 Cisco Technology, Inc. Suivi sélectif d'accusés de réception permettant d'améliorer l'utilisation de tampons de dispositifs de réseau et la mise en forme de trafic
US11546262B2 (en) 2019-09-04 2023-01-03 Cisco Technology, Inc. Selective tracking of acknowledgments to improve network device buffer utilization and traffic shaping

Also Published As

Publication number Publication date
KR20070093077A (ko) 2007-09-17
WO2006056880A2 (fr) 2006-06-01
WO2006056880A8 (fr) 2007-11-01
WO2006056880A3 (fr) 2006-07-20
WO2006056880B1 (fr) 2006-09-28
AP2007004044A0 (en) 2007-06-30
AU2005308530A1 (en) 2006-06-01
CA2589161A1 (fr) 2006-06-01

Similar Documents

Publication Publication Date Title
US20080037420A1 (en) Immediate ready implementation of virtually congestion free guaranteed service capable network: external internet nextgentcp (square waveform) TCP friendly san
EP1829321A2 (fr) Realisation immediate d'un reseau apte a garantir des services virtuellement depourvus d'encombrement: san tcp convivial internet nextgentcp externe (forme d'onde carree)
US20100020689A1 (en) Immediate ready implementation of virtually congestion free guaranteed service capable network : nextgentcp/ftp/udp intermediate buffer cyclical sack re-use
EP1955460B1 (fr) Contrôle de congestion du protocole de contrôle de transmission (tcp) utilisant des éléments de temps de transmission
US8462624B2 (en) Congestion management over lossy network connections
EP2148479A1 (fr) Transfert de données en vrac
WO2002033896A2 (fr) Procede et appareil de caracterisation de la qualite d'un chemin de reseau
US20090316579A1 (en) Immediate Ready Implementation of Virtually Congestion Free Guaranteed Service Capable Network: External Internet Nextgentcp Nextgenftp Nextgenudps
CN101112063A (zh) 能够支持保证实际无拥塞服务的网络的即刻可用实施方案:外部因特网NextGenTCP(方波形式)TCP友好SAN
Cardwell et al. Modeling the performance of short TCP connections
Natarajan et al. Non-renegable selective acknowledgments (NR-SACKs) for SCTP
Wang et al. Use of TCP decoupling in improving TCP performance over wireless networks
Gupta et al. WebTP: A receiver-driven web transport protocol
Mishra et al. Comparative Analysis of Transport Layer Congestion Control Algorithms
Zhang et al. Optimizing TCP start-up performance
Gupta et al. A receiver-driven transport protocol for the web
JP2008536339A (ja) 事実上輻輳のないギャランティードサービス対応ネットワーク:外部インターネットNextGenTCP(方形波形)TCPフレンドリSANの即座の準備のできた実施
KR101231793B1 (ko) Tcp 세션 최적화 방법 및 네트워크 노드
Dunigan et al. A TCP-over-UDP test harness
Raisinghani et al. Mild Aggression: A new approach for improving TCP performance in asymmetric networks
Venkataraman et al. A priority-layered approach to transport for high bandwidth-delay product networks
Noureddine Improving the performance of tcp applications using network-assisted mechanisms
French SRP—A Multimedia Network Protocol
Premalatha et al. Mitigating congestion in wireless networks by using TCP variants
Dorel et al. Performance analysis of tcp-reno and tcp-sack: The single source case

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070625

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK YU

R17D Deferred search report published (corrected)

Effective date: 20071101

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20110601