WO2019209181A1 - System and method for accelerating data delivery - Google Patents

System and method for accelerating data delivery

Info

Publication number
WO2019209181A1
WO2019209181A1 (PCT/SG2019/050229)
Authority
WO
WIPO (PCT)
Prior art keywords
transport layer
layer packet
connection
proxy
identifiers
Prior art date
Application number
PCT/SG2019/050229
Other languages
English (en)
Inventor
Kyung Wan Kim
Original Assignee
Skylab Networks Pte. Ltd.
Priority date
Filing date
Publication date
Application filed by Skylab Networks Pte. Ltd. filed Critical Skylab Networks Pte. Ltd.
Priority to SG11202010500WA priority Critical patent/SG11202010500WA/en
Priority to AU2019261208A priority patent/AU2019261208B2/en
Publication of WO2019209181A1 publication Critical patent/WO2019209181A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 80/00 Wireless network protocols or protocol adaptations to wireless operation
    • H04W 80/06 Transport layer protocols, e.g. TCP [Transport Control Protocol] over wireless
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/2866 Architectures; Arrangements
    • H04L 67/2876 Pairs of inter-processing entities at each side of the network, e.g. split proxies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]

Definitions

  • the present invention relates to the field of communication systems and networks, and in particular, but not exclusively, to a system and method for accelerating data delivery for providing connectivity between communication devices wirelessly, e.g. providing wireless communications over geographically wide areas.
  • Mobile devices typically rely on wireless networks such as Wi-Fi and mobile networks (e.g. LTE and microwave).
  • wireless backhaul via satellite provides a very challenging service environment for Internet Service Providers due to the remoteness of the locations at which the end users/devices are situated.
  • real-time and latency sensitive service requests often endure large round-trip delay, network congestion and service quality degradation.
  • Most conventional approaches are directed to accelerating content delivery by content caching, compression of protocol header, payload and contents.
  • these attempts no longer work when transferring contents via encrypted protocols like HTTPS, TLS, SSL or the like, because the encrypted content cannot be cached.
  • multimedia contents are highly compressed with high-performance compression algorithms, and show a higher compression ratio than real-time compression algorithms that can be applied at the packet level in real time. Further compression of the already highly-compressed data using real-time algorithms will only increase the size of the data and introduce additional latency in data transmission, and is not suitable for accelerating content delivery.
  • wireless networks have a tendency towards randomness in terms of availability, latency and capacity of the connectivity. Therefore, accelerating modern web applications, real-time applications and content streams will require a change in the transport layer protocol for better signalling: an efficient connection handshake and reliable data transmission that reduce packet loss, improve response time and maximize available bandwidth, coupled with the necessity to be non-intrusive with regard to compatibility with existing network equipment.
  • the present invention seeks to provide a system and a method to overcome at least in part some of the aforementioned disadvantages.
  • Protocol support is one of the most essential factors in determining the overall network service performance for wireless communications. In some instances, the lack of protocol support optimized for wireless transmission and routing decisions leads to snowballing effects of intermittent failures, impairing the overall service performance.
  • the invention provides a solution that provides an efficient connection handshake and reliable transmission to reduce the number of packets, thereby improving response time. The invention effectively deals with constantly changing situations, addressing problems with latency by analysing traffic and routing conditions in real time to find the fastest route between the data source and the destination.
  • a reliable transport protocol built atop UDP for delivering accelerated data transparently without modifying the user applications to provide higher throughput and lower latency.
  • the transport protocol enables authentication by way of a data and encapsulated packet transportation tunnel responsible for sending and receiving user traffic, including encrypting and decrypting packets and assembling multiple frames, including control and data frames, into a packet. This is advantageous for accelerating data delivery in a wireless environment for a plurality of connections on an alternative and secured medium.
  • transport connections in the network traffic are selectively identified to be directed for acceleration based on a predetermined acceleration rule which can be configured at the accelerator proxy.
  • connection requests are selectively accelerated from source to destination, which improves transport layer performance.
  • optimization of the transport layer performance is provided to address problems associated with TCP’s three-way handshake by selectively directing network traffic over the accelerated transport protocol by means of a process mapping of a TCP/UDP connection to a stream. This advantageously improves the RTT handshake process for the transmission and obviates the need to perform TCP long-handshake process via the wireless network.
  • an adaptive congestion control that dynamically limits the bandwidth used by a user associated with an IP address. This advantageously controls the amount of data being sent and ensures smooth transfer of data.
  • a method for accelerating data delivery comprises:
  • the predetermined rule is configured to select a transport layer packet based on the one or more identifiers.
  • the one or more identifiers comprise Source IP address, Source IP port number, Destination IP address, Destination port number, and stream ID.
  • connection identifier for authenticating the transport layer packet transported over the connection.
  • a transparent proxy module, when directing the transport layer packet through the at least one acceleration stream, performs one or more operations comprising encrypting the transport layer packet, and multiplexing one or more data communication sessions onto the at least one acceleration stream.
  • a remote proxy module, when accelerating the network traffic to the destination, performs one or more operations comprising decrypting the encrypted transport layer packet, and demultiplexing the one or more data communications.
  • the receiving of the transport layer packet utilizes a first transmission protocol
  • the non-acceleration mode comprises the first transmission protocol
  • the acceleration mode comprises a second transmission protocol different from the first transmission protocol
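  • As an illustrative sketch of the selection and mapping steps above (the names AccelerationRule and StreamMap and the rule fields are assumptions for illustration, not terms from the disclosure), a predetermined rule can match the identifiers of a transport layer packet and map matching sessions to an acceleration stream:

```python
# Hypothetical sketch: rule-based selection of packets for acceleration and mapping
# of the packet identifiers to a stream ID. Names and fields are illustrative only.
from dataclasses import dataclass
from typing import Dict, Tuple

Identifiers = Tuple[str, int, str, int]  # (src_ip, src_port, dst_ip, dst_port)

@dataclass
class AccelerationRule:
    src_prefix: str     # e.g. "10.0.0." to match a source IP segment
    dst_ports: set      # destination ports indicating the service (443, 22, ...)

    def selects(self, ids: Identifiers) -> bool:
        src_ip, _, _, dst_port = ids
        return src_ip.startswith(self.src_prefix) and dst_port in self.dst_ports

class StreamMap:
    """Maps the packet identifiers to a stream ID on the accelerated connection."""
    def __init__(self) -> None:
        self._next_stream_id = 1
        self._map: Dict[Identifiers, int] = {}

    def stream_for(self, ids: Identifiers) -> int:
        if ids not in self._map:                 # first packet of a new session
            self._map[ids] = self._next_stream_id
            self._next_stream_id += 1
        return self._map[ids]

rule = AccelerationRule(src_prefix="10.0.0.", dst_ports={443, 22})
streams = StreamMap()
ids = ("10.0.0.5", 50123, "203.0.113.7", 443)
if rule.selects(ids):                            # acceleration mode
    stream_id = streams.stream_for(ids)
else:                                            # non-acceleration mode: pass through
    stream_id = None
```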
  • an apparatus for accelerating data delivery comprises: a transparent proxy module configured to establish a connection to a remote proxy, wherein the connection is for transporting one or more transport layer packets; and configure a predetermined rule to select a transport layer packet to undergo an acceleration mode, wherein unselected transport layer packets undergo a non acceleration mode; the transparent proxy module configured to receive the transport layer packet selected based on the predetermined rule and retrieve one or more identifiers associated with the transport layer packet; create a map among the one or more identifiers; and determine whether there is mapping between the transport layer packet and the one or more identifiers; and direct the transport layer packet through at least one acceleration stream to accelerate the transport layer packet to a destination provided by the one or more identifiers if there is mapping between the transport layer packet and the one or more identifiers.
  • a system for accelerating data delivery comprising a user device, a proxy device and a service device.
  • the system comprises a user device, a proxy device and a service device.
  • the user device connects to the proxy device via a communication means.
  • the proxy device connects to the service device via a communication means. All three devices are operable to perform the method as detailed in accordance with the first aspect of the present invention.
  • the proxy device directs any network traffic having the connection identifier to the connection to be processed by the nodes.
  • a computer program product comprising a plurality of data processor executable instructions that when executed by a data processor in a system causes the system to perform the method as detailed in accordance with the first aspect of the present invention.
  • a transport protocol built atop UDP for accelerating data delivery is provided.
  • Figure 1A illustrates a wireless communication system for accelerating data delivery in accordance with an embodiment of the present invention.
  • Figure 1B is a block diagram of the software architecture of the system of Figure 1A.
  • Figure 1C is a block diagram of the transport protocol of the system of Figure 1B.
  • Figure 1D is a block diagram of mesh model deployment having a plurality of the system of Figure 1B.
  • Figure 2 is a block diagram illustrating transparent intercepting of selected packets by the first intermediary device in accordance with an embodiment of the present invention.
  • Figure 3A is a block diagram illustrating multiplexing of TCP/UDP connections in accordance with an embodiment of the present invention.
  • Figure 3B is a flow chart illustrating an operation of mapping IP address to a predetermined data tunnel in accordance with an embodiment of the present invention.
  • Figure 3C is a flow chart illustrating the demultiplexing and replication of connections performed by the peer proxy of Figure 3A.
  • Figure 4 is a block diagram illustrating an operation of circular queue management for handling application data in accordance with an embodiment of the present invention.
  • Figure 5 is a block diagram illustrating establishment of connection for communication between proxy devices in accordance with an embodiment of the present invention.
  • Figure 6 is a block diagram illustrating an operation of stream multiplexing in accordance with an embodiment of the present invention.
  • Figure 8A is a flow chart illustrating congestion management in accordance with an embodiment of the present invention.
  • Figure 8B is a flow chart illustrating an operation for information bundling in ACK frame in accordance with an embodiment of the present invention.
  • Figure 9 is a block diagram illustrating an operation for handling multiple protocols of the system of Figure 1B in accordance with an embodiment of the present invention.
  • Figure 10 is a block diagram illustrating the migration process of the connection ID in accordance with an embodiment of the present invention.
  • STAP connection refers to a mechanism that enables communication between the first proxy device and the second proxy device. Communication is established between the nodes of the proxy devices.
  • connection identifier refers to an identifier that is used to identify a connection between the nodes of two proxy devices.
  • the use of the term "wireless" includes 3G, 4G, 5G, Wi-Fi, and any other kinds of wireless connection.
  • the use of the term "acceleration" may include application acceleration, flow acceleration and other acceleration techniques.
  • the term "STAP" refers to the accelerator transport protocol.
  • the system comprises a user device 102, a proxy device 104 and a service device 106.
  • the user device 102 connects to the proxy device 104 via a communication means.
  • the proxy device 104 connects to the service device 106 via a communication means.
  • the three devices are connected as described above for accelerating data from one device to another, by providing a cryptographic handshake for connection establishment to minimise packet round-trip time.
  • the system and method selectively direct network traffic to be transmitted over the accelerated transport protocol based on an associated connection identifier.
  • any network traffic identified with the associated connection identifier is directed to the connection, which obviates the need to perform the TCP long-handshake process via the wireless network, thereby improving the RTT handshake process for IP transmission.
  • Figure 1A shows a schematic diagram of a system in accordance with an embodiment of the present invention.
  • the system comprises a user device 102, a first proxy 104-A, a second proxy 104-B and a service device 106.
  • the user device 102 connects to the first proxy 104-A via a communication means.
  • the first proxy 104-A connects to the second proxy 104-B via the communication means.
  • the second proxy 104-B connects to the service device 106 via the communication means.
  • the devices and proxies are connected as described above for the communication of data from one point to another point.
  • the proxy device 104 may be deployed between the user device 102 and the service device 106 connected to a private network or the Internet.
  • the user device 102 may form part of a network that includes one or more clients, routers and proxy devices.
  • the service device 106 may be placed geographically apart from the user device 102.
  • the service device 106 can be connected to a wireless network.
  • the proxy device 104 may comprise a first proxy 104- A and a second proxy 104-B.
  • the first proxy 104-A and the second proxy 104-B may be in the form of hardware or software.
  • the second proxy 104-B may also be in the form of a virtual network function (VNF) for increasing network scalability and operational efficiency.
  • a first proxy 104-A may be placed between the user network and the satellite terminal, and a second proxy 104-B may be placed between the user device 102 and the satellite network.
  • the first proxy 104-A may be placed between the user network and a 3G/4G router, and a second proxy 104-B may be placed between the user device 102 and the 3G/4G network.
  • FIG. 1B illustrates the software architecture of the proxy device 104 as a transparent proxy for accelerating data delivery according to embodiments of the present invention.
  • the transparent proxy includes an accelerator proxy 104-A and a peer proxy 104-B.
  • the accelerator proxy 104-A comprises a connection manager 124-A to establish a connection 126 to the peer proxy 104-B via the connection manager 124-B of the peer proxy 104-B.
  • the establishing of the connection 126 enables creating a plurality of interleaving streams to multiplex and demultiplex user payload data onto each stream.
  • At least some communication between the user device 102 and the service device 106 may be selected to pass through the proxy device 104.
  • the accelerator proxy 104-A may establish a connection 126 with the peer proxy 104-B.
  • the connection 126 established between the nodes of the proxies allows communication to be established.
  • the proxy device 104 uses the connection 126 to accelerate data delivery of the at least some communications between the user device 102 and the service device 106.
  • Configuration Web UI and management console: responsible for setting the traffic proxying rules and system configurations via the Web UI or console, collecting and displaying traffic statistics, and operating system commands.
  • Service Acceleration Rule: selects only packets from the specified source IP address or IP segment which head for the specified destination IP port, which indicates the type of service, e.g. HTTPS, SSH, XMPP and any other application-level protocols used by various commercial and non-commercial applications.
  • Connection Manager 124-A: establishes a connection 126 between two communication devices (nodes) comprising an interleaving multi-stream, which is advantageous for congestion control in the network, efficient buffer management, and preventing the head-of-line blocking situation that could arise from the reliable data delivery feature.
  • the connection manager 124-A contains one or more accelerator processes that are responsible for establishing a connection 126 between two communication devices (nodes), mapping user traffic from each source IP into the connection 126, handling incoming user traffic and multiplexing payload data into each connection 126, and sending outgoing user traffic from the connection 126.
  • Non-blocking circular ring queues: responsible for storing user incoming and outgoing traffic, used for congestion control when translating between TCP and the accelerator transport protocol, implementation of packet scheduling for QoS and CoS, network switching, multi-path delivery / communication channel aggregation and its traffic distribution algorithms, and UDP stitching.
  • Transport protocol: the accelerator transport protocol over UDP acts as a data and encapsulated packet transportation tunnel responsible for sending and receiving user traffic between proxies; it contains techniques for secure connection including encrypting and decrypting packets, assembling multiple frames (control and data frames) into a packet, packet loss detection, reliable delivery, network-situation-aware flow and congestion control, multi-path delivery, network change among different networks and multi-channel UDP delivery.
  • the accelerator proxy 104-A sends destination information to the peer proxy 104-B.
  • the destination information may include the destination IP address and port number.
  • the peer proxy 104-B receives the destination information, which may be mapped with the TCP or UDP socket number to establish TCP or UDP connection to the destination.
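  • A minimal sketch of how the destination information could be carried and used on the peer side; the simple fixed-size (IP, port) encoding is an assumption made only for illustration, as no wire format is specified here:

```python
# Hypothetical sketch: the accelerator proxy encodes the original destination,
# and the peer proxy decodes it and opens the TCP connection to the destination.
import socket
import struct

def encode_destination(dst_ip: str, dst_port: int) -> bytes:
    ip_bytes = socket.inet_aton(dst_ip)          # 4-byte IPv4 address
    return struct.pack("!4sH", ip_bytes, dst_port)

def peer_connect(dest_info: bytes) -> socket.socket:
    ip_bytes, port = struct.unpack("!4sH", dest_info)
    dst_ip = socket.inet_ntoa(ip_bytes)
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # TCP session to the destination
    s.connect((dst_ip, port))
    return s    # the resulting socket number is then mapped to the stream/connection
```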
  • Configuration Web UI and management console: responsible for setting system configurations via the Web UI or console, collecting and displaying traffic statistics, and operating system commands.
  • Connection Manager 124-B: responsible for demultiplexing user payload data from the connection 126, regenerating the TCP and/or UDP session to the destination and delivering the data thereto.
  • Non-blocking circular ring queues: responsible for storing user incoming and outgoing traffic, used for congestion control when translating between TCP and the accelerator transport protocol, implementation of packet scheduling for QoS and CoS, network switching, multi-path delivery / communication channel aggregation and its traffic distribution algorithms, and UDP stitching.
  • Transport protocol: the accelerator transport protocol over UDP acts as a data and encapsulated packet transportation tunnel responsible for sending and receiving user traffic between proxies; it contains techniques for secure connection including encrypting and decrypting packets, assembling multiple frames (control and data frames) into a packet, packet loss detection, reliable delivery, network-situation-aware flow and congestion control, multi-path delivery, network change among different networks and multi-channel UDP delivery.
  • FIG. 1C illustrates the protocol layer of the accelerator transport protocol.
  • the accelerator transport protocol is located atop the User Datagram Protocol (UDP), below the application layer and the socket layer.
  • the Connection Manager 124-A handles the proxied inputs of TCP or UDP sockets types from the application layer and sends to the receiver over the accelerator transport protocol.
  • the Connection Manager 124-A translates this to the destination network.
  • the proxy 104-A offers application transparency and does not require modification of the user application source to adopt the accelerator transport protocol. From the perspective of a client and server application, the proxy 104-A is considered transparent, wherein a user request is redirected without modification to the same. Other non-proxied user traffic bypasses the proxy and follows the normal protocol layers.
  • a method for accelerating data delivery comprises the following steps:
  • Step 1: Selection of packets to be intercepted
  • Step 2: Identifying and mapping each user IP address
  • Step 3: Establishing TCP connection with overall handshake improvement
  • Step 4: Securing and optimizing data by flow and congestion control
  • Figure 2 is a block diagram illustrating the selection of interested packets from a network traffic to be redirected to the proxy 104 and converted into an accelerator transport protocol.
  • the accelerator proxy 104-A may be deployed on the user side in the existing network to select interested packets from the traffic from user devices and applications. This packet selection may be based on the Service Acceleration Rules, which can be configured to select according to the source information (e.g. source IP address, port number) or destination information (e.g. destination IP address, port number).
  • the server receives credentials from the user. These credentials may include IP addresses, protocol state information, and port numbers. The credentials are captured and stored by the server. In response to the request, the server determines whether the credentials are defined in the Service Acceleration Rule in order to redirect the traffic.
  • the packets may be redirected to the accelerator proxy 104-A by utilizing Destination Network Address Translation techniques, by rewriting the destination IP address and port number to those of the accelerator proxy 104-A.
  • the accelerator proxy 104-A may capture the original destination IP address and port number of the packet and transfer this information to the peer proxy 104-B, via a connection 126 established between the nodes of the accelerator proxy 104-A and the peer proxy 104-B.
  • the peer proxy 104-B then creates a transport layer connection (TCP or UDP) to the destination and directs the packets to the destination based on the TCP/UDP socket and destination information.
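  • For illustration only, on a Linux transparent proxy whose traffic has been redirected with a NAT rule, the original destination can be recovered with the standard SO_ORIGINAL_DST socket option; this is an assumed implementation detail, not a mechanism recited in the disclosure:

```python
# Sketch assuming Linux + iptables NAT redirection: SO_ORIGINAL_DST (value 80 at
# level SOL_IP) returns the pre-NAT destination as a sockaddr_in structure, which
# can then be forwarded to the peer proxy over the established connection.
import socket
import struct

SO_ORIGINAL_DST = 80  # from <linux/netfilter_ipv4.h>

def original_destination(client_sock: socket.socket) -> tuple:
    raw = client_sock.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)  # 16-byte sockaddr_in
    port, ip_packed = struct.unpack("!2xH4s8x", raw)                  # skip family, take port + IPv4
    return socket.inet_ntoa(ip_packed), port
```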
  • the accelerator proxy 104-A is configured to select packets to be directed towards an acceleration mode by transmitting over the accelerator transport protocol between two nodes of the proxies. Packets that are not selected will not undergo the acceleration mode and will flow as normal.
  • A new source IP from a new source may be associated with an existing connection or may trigger the creation of a new connection between nodes of the proxies.
  • Each connection may be identified by a connection identifier (or connection ID). Packets having that connection ID may be routed back to the node and identified by the node upon receipt.
  • New source IPs from a new device 102 which is not registered with or not identified by the accelerator proxy 104-A may be mapped with an existing connection 126.
  • the existing connection 126 having a connection ID may be associated with the source IP by creating a map of the source IP with the connection ID.
  • the accelerator proxy 104-A accepts the new TCP or UDP session.
  • When the accelerator proxy 104-A receives a new TCP session or UDP session from a new user device 102 having new source IPs, it creates a new connection 126 to the peer proxy 104-B.
  • a new connection 126 may be established.
  • the new connection 126 may have a connection ID retrievable from an existing pool of established connections 126.
  • Connection Identifier
  • the connection ID may be provided in the form of a 64-bit number which is globally unique across all networks. This identifier may be included in the accelerator protocol header section and exchanged between the accelerator proxy 104-A and the peer proxy 104-B to identify the respective connection 126. The identity may be associated with the authentication information for encryption and decryption of packets during the lifetime of the connection 126.
  • a reserved pool of connections 126 may comprise a pool of available connections 126 established between the nodes of the accelerator proxy 104-A and the peer proxy 104-B. These connections 126 may be established during the initiation of the accelerator proxy 104-A and may be maintained to ensure fast delivery of streaming data and real-time application data, including VOIP and IoT machine data, in high-latency networks or over long distances without wasting time establishing the connection 126.
  • An established connection 126 in the connection pool remains available until a configured time-out (e.g. idleness due to no transaction), after which it may be terminated. This time-out value may be configured and enabled regardless of the availability of the physical network.
  • the timed-out connections may be closed and removed from the pool, and new connections 126 may be inserted into the pool to maintain the activity of the connection pool.
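  • A minimal sketch of the connection pool behaviour described above, assuming IDs are drawn from a cryptographically strong random source and that the idle time-out and pool size shown are arbitrary example values:

```python
# Hypothetical sketch: 64-bit connection IDs, pre-established connections kept in a
# pool, timed-out entries evicted and replaced to keep the pool active.
import secrets
import time

IDLE_TIMEOUT_S = 300          # assumed configured time-out
POOL_TARGET_SIZE = 4          # assumed number of pre-established connections

class PooledConnection:
    def __init__(self) -> None:
        self.connection_id = secrets.randbits(64)   # 64-bit identifier
        self.last_activity = time.monotonic()

    def idle(self) -> bool:
        return time.monotonic() - self.last_activity > IDLE_TIMEOUT_S

def refresh_pool(pool: list) -> list:
    """Close timed-out connections and top the pool back up."""
    alive = [c for c in pool if not c.idle()]
    while len(alive) < POOL_TARGET_SIZE:
        alive.append(PooledConnection())   # ready for new sessions without handshake delay
    return alive
```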
  • the encrypted handshake packet may be sent to the peer proxy and decrypted using a security certificate, such as an X.509 certificate. While the X.509 certificate has been described in this case, it will be appreciated that the security certificate may vary with the security model.
  • the handshake packet contains the configuration from the accelerator proxy 104-A at the user end.
  • the packet may be validated and synchronized in both ends in the handshake process. If the handshake process is successful, the connection 126 will be accepted by the accelerator process in the peer proxy 104-B at the server end (see Figure 3C and Figure 6).
  • this handshake process is done in a single round (1-RTT) of packet exchange.
  • the Connection Manager 124-B is capable of demultiplexing the interleaved stream data and extracting the destination information (e.g. destination IP address and port number) to perform the final data delivery by re-constructing the TCP or UDP session from the user side accelerator proxy 104-A and transferring data by the original protocol, which may be TCP or UDP.
  • the connection 126 established between nodes of the proxies comprises a plurality of interleaving streams 128.
  • the streams are identified within the connection 126 by a stream identifier.
  • the stream identifier may be provided as a stream ID.
  • the stream IDs are unique to a stream 128 and can be used to send stream data by a node of the proxy.
  • the streams 128 may be created at either node, are capable of sending data interleaved with other streams 128, and can be terminated.
  • a new stream 128 for the connection 126 may be created when a new TCP or UDP session is accepted or terminated for the interleaving of data channels for TCP and UDP in the connection 126.
  • a stream 128 may be created by sending data and each stream 128 is identified by a stream ID. Every newly created stream 128 comprises TCP socket number from the OS, source IP address and port number, which may be used for multiplexing and demultiplexing of the connected sessions (see Figure 3A and Figure 3B).
  • the accelerator proxy 104-A establishes the connection 126 to the peer proxy 104-B, when the accelerator proxy 104-A receives any new TCP session or UDP session including the first session.
  • a stream 128 having a stream ID may be created within the established connection 126, which will be the data channel for the respective TCP or UDP session.
  • a map may be created between the Source IP address, Source IP port, Destination IP address, Destination port number, and newly created stream ID.
  • the accelerator proxy 104-A maps the newly created stream ID with the connection identifier and the accepted TCP or UDP session socket number (file-descriptor number) may be mapped with connection identifier, destination IP address and destination port number.
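  • The two mappings described above can be illustrated with the following sketch; the key layouts and names are assumptions made for clarity, not definitions from the disclosure:

```python
# Hypothetical sketch: session identifiers -> (connection ID, stream ID) on the
# accelerator side, and (connection ID, stream ID) -> socket file descriptor for
# demultiplexing on the peer side.
from typing import Dict, Tuple

SessionKey = Tuple[str, int, str, int]   # src_ip, src_port, dst_ip, dst_port
StreamKey = Tuple[int, int]              # (connection_id, stream_id)

session_to_stream: Dict[SessionKey, StreamKey] = {}
stream_to_socket_fd: Dict[StreamKey, int] = {}

def register_session(key: SessionKey, connection_id: int, stream_id: int, sock_fd: int) -> None:
    session_to_stream[key] = (connection_id, stream_id)          # used when multiplexing outbound payload
    stream_to_socket_fd[(connection_id, stream_id)] = sock_fd    # used when demultiplexing inbound frames

def socket_for_frame(connection_id: int, stream_id: int) -> int:
    return stream_to_socket_fd[(connection_id, stream_id)]
```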
  • the accelerator transport protocol comprising one connection and an interleaving multi-stream is advantageous for data transmission, including better flow and congestion control, preventing head-of-line blocking in reliable data delivery, and the transfer of multiple assets from web applications.
  • the accelerator transport protocol uses a cryptographic 1-RTT handshake to minimize packet round-trip time for connection establishment using Public Key Infrastructure.
  • Both the accelerator proxy 104-A and the peer proxy 104-B have a key pair for the authentication of their identity in the form of an X.509 certificate.
  • connection established between the nodes of the proxies has a unique connection identifier.
  • the connection identifier used in packet encryption and decryption makes use of the newly created encryption key in the handshake process. This advantageously replaces the need for IP address and port number, typically used in TCP and UDP packet authentication.
  • This way of packet authentication enables mobility in the different networks and multi-path delivery, data channel change among different networks, multi-channel UDP packet delivery (called UDP stitching).
  • the STAP may establish connections at both ends between source and destination, which obviates the need to establish a new TCP connection over wireless and high-latency networks, thereby resulting in overall handshake improvement.
  • connection establishment can be done 2 RTTs earlier than a typical TCP handshake, which is advantageous for applications that require short-tail transactions and frequent connections and disconnections, for instance IoT sensor data reports and ATMs for banking.
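  • A sketch of packet authentication keyed by the connection identifier rather than by source IP and port; the 8-byte header prefix and the opaque decrypt callable are assumptions used only to illustrate why the tunnel can survive address changes:

```python
# Hypothetical sketch: the receiving node looks up the crypto context by the 64-bit
# connection ID carried in the packet header, not by the sender's address, so
# multi-path delivery, NAT rebinding and UDP stitching do not break authentication.
import struct
from typing import Callable, Dict

crypto_context: Dict[int, Callable[[bytes], bytes]] = {}   # connection_id -> decrypt function

def handle_datagram(datagram: bytes, source_addr: tuple) -> bytes:
    connection_id = struct.unpack("!Q", datagram[:8])[0]   # assumed 8-byte connection ID prefix
    decrypt = crypto_context[connection_id]                # keys found by ID, not by source_addr
    # source_addr may legitimately change between packets; the payload is still
    # authenticated and decrypted because the ID maps to the negotiated keys.
    return decrypt(datagram[8:])
```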
  • the connection 126 established between the nodes and the TCP connection may be closed when the accelerator proxy 104-A detects events associated with inactivity or broken connections. Such idle events may include inactivity from the user device, broken TCP connections in the network or a broken connection 126. On detecting such events, the accelerator proxy 104-A closes the connection 126 and cleans the related data structures using the connection map.
  • the server may redirect the traffic to an accelerated transport protocol, by transporting user data via transport protocol implemented atop UDP.
  • mapping for the UDP user traffic is managed in a similar manner as TCP user traffic as described in previous sections. It will be appreciated that since the accelerator proxy handles user UDP traffic at the packet level, one endpoint file descriptor value may be mapped with many stream + STAP connection ID keys.
  • the user UDP application has no reliability feature. Therefore, when accelerated by means of the accelerator proxy, which has reliable delivery, the ensuing traffic delivery to the destination network is more reliable.
  • the accelerator proxy 104-A may be configured via the Web UI or management console to assign these services a suitable class of service (CoS) for ensuring secure and real-time delivery towards the destination network.
  • the accelerator proxy 104-A can advantageously support Peer-to-Peer (P2P) user application with the capability of proxying service port range under a certain rule mapping filter by the specified source and destination IP.
  • the accelerator proxy 104-A stores and processes the data depending on the condition of I/O operations towards the user/destination network.
  • Incoming user traffic may be intercepted at the accelerator proxy 104-A and multiplexed, and in turn encrypted in the data delivery accelerator system.
  • the encrypted data may be sent via a known stream once the connection is established.
  • the peer proxy 104-B decrypts, demultiplexes and forwards stream data to the established destination based on the connection mapping that has been made.
  • a similar process for sending user traffic by the peer proxy 104-B may be carried out in the reverse direction.
  • the peer proxy 104-B receives the user traffic, decrypts, demultiplexes, and forwards the user data to the corresponding file descriptor from the {stream + STAP connection ID, file descriptor} connection map which may have been previously established.
  • I/O operations happen on socket file descriptors which refer to endpoint connections; since the user socket or destination socket can be in a busy state, the accelerator proxy 104-A stores the ongoing traffic in the circular queue for processing in the proper state at a later time.
  • the circular buffer maximum size per user connection may be configurable depending on the system memory capacity.
  • the accelerator proxy 104-A may apply an auto-tuning technique to this circular buffer queue, which drops packets arriving after the maximum size is exceeded to ensure the proper working state of the proxy.
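  • A minimal sketch of the bounded circular queue behaviour described above; the byte-based limit and drop-on-overflow policy shown are assumptions rather than the exact auto-tuning used:

```python
# Hypothetical sketch: per-connection bounded queue that buffers traffic while the
# user/destination socket is busy and drops packets once the configured limit is hit.
from collections import deque

class BoundedRingQueue:
    def __init__(self, max_bytes: int) -> None:
        self.max_bytes = max_bytes        # configurable per user connection
        self.used = 0
        self.queue = deque()

    def push(self, packet: bytes) -> bool:
        if self.used + len(packet) > self.max_bytes:
            return False                  # over the limit: drop to keep the proxy healthy
        self.queue.append(packet)
        self.used += len(packet)
        return True

    def pop(self) -> bytes:
        packet = self.queue.popleft()     # drained when the busy socket becomes writable
        self.used -= len(packet)
        return packet
```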
  • UDP applications may require the service reliability via the accelerator proxy 104-A, while other applications may prefer retaining the nature of UDP connection on delivery.
  • the accelerator transport protocol enables two classes of service for the UDP accelerator. The first enables high-reliability delivery, wherein the user payload is delivered in the order received under the congestion control mechanism of the STAP tunnel.
  • the second enables real-time delivery, wherein the user payload is delivered as soon as it is received, which may result in loss of packets with no retransmission.
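  • A sketch of the two classes of service for UDP described above, with assumed names; the dispatch shown is illustrative only:

```python
# Hypothetical sketch: HIGH_RELIABILITY keeps ordering/retransmission via the
# tunnel's reliable path, REAL_TIME forwards immediately and never retransmits.
from enum import Enum

class UdpCos(Enum):
    HIGH_RELIABILITY = 1   # delivered in order, covered by the tunnel's congestion control
    REAL_TIME = 2          # forwarded as received; losses are not retransmitted

def forward_udp(payload: bytes, cos: UdpCos, reliable_send, immediate_send) -> None:
    if cos is UdpCos.HIGH_RELIABILITY:
        reliable_send(payload)     # queued, acknowledged, retransmitted on loss
    else:
        immediate_send(payload)    # fire-and-forget, lowest latency
```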
  • an adaptive congestion control that dynamically limits the bandwidth used by a user associated with an IP address.
  • Each user device 102 which utilizes the acceleration mode via the accelerator transport protocol may be configured with a bandwidth limitation associated with its IP address.
  • the proxies may be adapted to identify the bandwidth limitation by applying quality of service (QoS), which enables detection of and efficient recovery from loss.
  • the accelerator transport protocol accomplishes the QoS with the 'adaptive congestion control' method for controlling the rate of reading the buffer from the socket.
  • The adaptive factor provides that the read rate from the user socket is controlled by the configurable limitation and by the estimated bandwidth availability of the connection, measured during the acceleration phase.
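  • A sketch of the adaptive factor in code form, assuming the configured limit and the estimated bandwidth are combined by taking their minimum (the exact combination is not given above):

```python
# Hypothetical sketch: the socket read budget per interval is capped by both the
# configured per-IP limit and the bandwidth estimated during the acceleration phase.
def read_budget(configured_limit_bps: float,
                estimated_bandwidth_bps: float,
                interval_s: float) -> int:
    """Bytes the proxy allows itself to read from the user socket this interval."""
    rate = min(configured_limit_bps, estimated_bandwidth_bps)   # adaptive cap (assumed)
    return int(rate * interval_s / 8)                           # bits -> bytes
```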
  • a deployment comprising the accelerator proxy 104-A and the peer proxy 104-B and a Network Address Translation (NAT).
  • the UDP port mapping and binding (4-number-tube: subscriber IP, subscriber port, peer IP, peer port) may be changed due to the NAT service.
  • the accelerator proxy 104-A and/or the peer proxy 104-B may rebind or re-establish the mapping to maintain the lifetime of the connection over time having the same identity. This migration updates the 4-number-tube.
  • the migration process switches the tunnel backhaul from network path 1 (physical interface ETH_2 configured to the WAN gateway GW1) to network path 2 (physical interface ETH_3 configured to the WAN gateway GW2).
  • Connection 126 identity STAP_ID01 may be established and mapped with the user device 102 via the in-path LAN network.
  • the migration process, which is triggered by a network observation decision at the accelerator device, is transparent to the user application (TCP01) since it only changes the tunnel endpoint but still keeps the stream mapping, the user payload frame status and the session context of the connection identity, which contains the authentication information and related control information.
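  • A sketch of the migration step described above, with hypothetical field names: only the tunnel endpoints are rewritten, while the connection identity, stream mapping and session context are kept:

```python
# Hypothetical sketch: migrating the tunnel backhaul updates the 4-number-tube
# (local/peer addresses) but leaves the connection ID and stream mapping untouched,
# so the user's TCP session sees no change.
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class TunnelConnection:
    connection_id: int
    local_addr: Tuple[str, int]
    peer_addr: Tuple[str, int]
    stream_map: Dict[int, int] = field(default_factory=dict)   # stream_id -> socket fd

    def migrate(self, new_local: Tuple[str, int], new_peer: Tuple[str, int]) -> None:
        # Triggered by a network observation decision (e.g. switching from ETH_2/GW1 to ETH_3/GW2).
        self.local_addr = new_local
        self.peer_addr = new_peer
        # connection_id, stream_map and crypto/session context intentionally unchanged.
```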

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a system and method for accelerating data delivery in a wireless environment, for providing wireless connectivity between communication devices. The system comprises a user device, a proxy device and a service device for providing an efficient connection handshake and reliable transmission by reducing the number of packets, thereby improving response time. The invention also relates to a transparent, reliable transport protocol built on top of the User Datagram Protocol.
PCT/SG2019/050229 2018-04-24 2019-04-24 System and method for accelerating data delivery WO2019209181A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
SG11202010500WA SG11202010500WA (en) 2018-04-24 2019-04-24 System and method for accelerating data delivery
AU2019261208A AU2019261208B2 (en) 2018-04-24 2019-04-24 System and method for accelerating data delivery

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG10201803436Y 2018-04-24
SG10201803436Y 2018-04-24

Publications (1)

Publication Number Publication Date
WO2019209181A1 true WO2019209181A1 (fr) 2019-10-31

Family

ID=68295825

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2019/050229 WO2019209181A1 (fr) 2018-04-24 2019-04-24 Système et procédé d'accélération de livraison de données

Country Status (3)

Country Link
AU (1) AU2019261208B2 (fr)
SG (1) SG11202010500WA (fr)
WO (1) WO2019209181A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021155282A1 (fr) * 2020-01-31 2021-08-05 Pensando Systems Inc. Service de mandataire par accélération matérielle à l'aide d'un dispositif d'entrée/sortie (es)
US11153221B2 (en) 2019-08-28 2021-10-19 Pensando Systems Inc. Methods, systems, and devices for classifying layer 4-level data from data queues
US11212227B2 (en) 2019-05-17 2021-12-28 Pensando Systems, Inc. Rate-optimized congestion management
US11252088B2 (en) 2017-08-31 2022-02-15 Pensando Systems Inc. Methods and systems for network congestion management
US11431681B2 (en) 2020-04-07 2022-08-30 Pensando Systems Inc. Application aware TCP performance tuning on hardware accelerated TCP proxy services

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006074072A2 (fr) * 2004-12-30 2006-07-13 Citrix Systems, Inc. Systemes et procedes de mise a disposition de techniques d'acceleration cote client
US20080120426A1 (en) * 2006-11-17 2008-05-22 International Business Machines Corporation Selective acceleration of transport control protocol (tcp) connections
CN102299899A (zh) * 2010-06-24 2011-12-28 清华大学 一种恶劣信道下的tcp加速方法
US8305896B2 (en) * 2007-10-31 2012-11-06 Cisco Technology, Inc. Selective performance enhancement of traffic flows
US8340109B2 (en) * 2002-11-08 2012-12-25 Juniper Networks, Inc. Systems and methods for accelerating TCP/IP data stream processing
US20170237668A1 (en) * 2000-11-02 2017-08-17 Oracle America Inc. Tcp/udp acceleration

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170237668A1 (en) * 2000-11-02 2017-08-17 Oracle America Inc. Tcp/udp acceleration
US8340109B2 (en) * 2002-11-08 2012-12-25 Juniper Networks, Inc. Systems and methods for accelerating TCP/IP data stream processing
WO2006074072A2 (fr) * 2004-12-30 2006-07-13 Citrix Systems, Inc. Systemes et procedes de mise a disposition de techniques d'acceleration cote client
US20080120426A1 (en) * 2006-11-17 2008-05-22 International Business Machines Corporation Selective acceleration of transport control protocol (tcp) connections
US8305896B2 (en) * 2007-10-31 2012-11-06 Cisco Technology, Inc. Selective performance enhancement of traffic flows
CN102299899A (zh) * 2010-06-24 2011-12-28 清华大学 一种恶劣信道下的tcp加速方法

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11252088B2 (en) 2017-08-31 2022-02-15 Pensando Systems Inc. Methods and systems for network congestion management
US11212227B2 (en) 2019-05-17 2021-12-28 Pensando Systems, Inc. Rate-optimized congestion management
US11936561B2 (en) 2019-05-17 2024-03-19 Pensando Systems, Inc. Rate-optimized congestion management
US11153221B2 (en) 2019-08-28 2021-10-19 Pensando Systems Inc. Methods, systems, and devices for classifying layer 4-level data from data queues
WO2021155282A1 (fr) * 2020-01-31 2021-08-05 Pensando Systems Inc. Service de mandataire par accélération matérielle à l'aide d'un dispositif d'entrée/sortie (es)
US11394700B2 (en) 2020-01-31 2022-07-19 Pensando Systems Inc. Proxy service through hardware acceleration using an IO device
IL294875B1 (en) * 2020-01-31 2023-06-01 Pensando Systems Inc Proxy service through hardware acceleration using an I/O device
IL294875B2 (en) * 2020-01-31 2023-10-01 Pensando Systems Inc Proxy service through hardware acceleration using an I/O device
US11431681B2 (en) 2020-04-07 2022-08-30 Pensando Systems Inc. Application aware TCP performance tuning on hardware accelerated TCP proxy services

Also Published As

Publication number Publication date
SG11202010500WA (en) 2020-11-27
AU2019261208B2 (en) 2024-06-20
AU2019261208A1 (en) 2020-12-17

Similar Documents

Publication Publication Date Title
AU2019261208B2 (en) System and method for accelerating data delivery
US10021034B2 (en) Application aware multihoming for data traffic acceleration in data communications networks
US10911413B2 (en) Encapsulating and tunneling WebRTC traffic
JP4327496B2 (ja) ネットワークスタックをオフロードする方法
US7346702B2 (en) System and method for highly scalable high-speed content-based filtering and load balancing in interconnected fabrics
US8976798B2 (en) Method and system for communicating over a segmented virtual private network (VPN)
US7643416B2 (en) Method and system for adaptively applying performance enhancing functions
US9319439B2 (en) Secured wireless session initiate framework
EP1443731A2 (fr) Procédé et système permettant d'assurer la sécurité dans un éeseau avec l'amélioration de la performance
EP1443713A2 (fr) Procédé et système pour utiliser les raccordements privés virtuels du réseau (VPN) dans un réseau à performance améliorée
AU2007320794B2 (en) Selective session interception method
WO2004023263A2 (fr) Systeme pour une autorisation de trafic de reseau a travers des pare-feu
US20120269132A1 (en) Communication between mobile terminals and service providers
US8359405B1 (en) Performance enhancing proxy and method for enhancing performance
JP2010504688A (ja) ネットワーク・プロトコルスタックのハンドオフおよび最適化を実装するための方法およびモジュール
AU2020229738A1 (en) System and method for managing network traffic
JP2023033600A (ja) コンテンツ配信システム、ユニキャストマルチキャスト変換装置、コンテンツ配信方法及びコンテンツ配信プログラム
EP2280514B1 (fr) Groupage de flux de données par des réseaux commutés par paquets publiques
JP7298690B2 (ja) コンテンツ配信システム、マルチキャストユニキャスト/マルチキャストマルチキャスト変換装置、マルチキャストユニキャスト変換装置、コンテンツ配信方法及びコンテンツ配信プログラム
KR101082651B1 (ko) 멀티호밍을 지원하기 위한 가상화 드라이브 장치 및 그 방법
Ciko Improving Internet Performance with a "Clean-Slate" Network Architecture - The Case of RINA
Leng et al. All-Weather Transport Essentials
TW201349898A (zh) 多重鏈路傳輸系統及其改善合併頻寬效能之方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19791954

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019261208

Country of ref document: AU

Date of ref document: 20190424

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 19791954

Country of ref document: EP

Kind code of ref document: A1