WO2010136023A1 - Method for optimizing a packet-oriented data transmission, and computer program product - Google Patents

Method for optimizing a packet-oriented data transmission, and computer program product (original title: Procédé d'optimisation d'une transmission de données par paquets et produit-programme informatique)

Info

Publication number
WO2010136023A1
WO2010136023A1 (PCT/DE2010/000583)
Authority
WO
WIPO (PCT)
Prior art keywords
packet
optimizer
address
data
communication
Prior art date
Application number
PCT/DE2010/000583
Other languages
German (de)
English (en)
Other versions
WO2010136023A8 (fr)
Inventor
Joerg Ott
Nils Seiffert
Carsten Bormann
Original Assignee
Lysatiq Gmbh
Priority date
Filing date
Publication date
Application filed by Lysatiq Gmbh
Publication of WO2010136023A1
Publication of WO2010136023A8

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80 Responding to QoS
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/75 Media network packet handling
    • H04L65/765 Media network packet handling intermediate
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/563 Data redirection of data network streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22 Parsing or analysis of headers

Definitions

  • the invention is in the field of packet-oriented data transmission.
  • User data can be transmitted in one direction only (unidirectional) or in both directions (bidirectional), depending on the application and/or the function performed. The same applies to control information. While most networks generally allow two-way transmission, there are network technologies (such as DVB-S/C/T) that permit only unidirectional transmission regardless of the application, and/or for which unidirectional transmission makes sense for cost or other reasons, and/or where the return direction is realized separately via the same and/or other transmission methods and/or networks; these limitations and/or framework conditions usually follow from the system design. In some cases, both control information and user data may be transmitted in one direction only, or other transmission methods and/or networks may be used for the return direction; user data and/or control information, or subsets thereof, may also be exchanged over different networks.
  • a data packet contains information (also referred to as control information) which is or may be required for processing the transmission protocol and optionally user data.
  • a data packet contains, for example, one or more packet headers or protocol headers and/or packet or protocol trailers. All of these are referred to simply as "headers". These headers contain the control information, which may for example be addressing information. The headers are in part followed by the actual payload (voice data, parts of text, parts of files, etc.); however, control data of higher protocol layers - often including their own headers - is also frequently referred to as payload from the perspective of the underlying protocol layers.
  • a header consists of one or, often, several "fields" in which the control information is contained. This arrangement of fields within a header is also referred to below as the header structure of a header. Among other things, it serves to identify and/or interpret the individual fields and thus also the control information within a header.
  • Protocol stack or a protocol hierarchy, as described, inter alia, in
  • Protocol hierarchies (“Protocol Relationships") are used inter alia in
  • IPv6 IP protocol version 6
  • IPv6 Internet Protocol, Version 6 (IPv6), Specification, S. Deering, R.
  • protocol layers can be distinguished.
  • the protocols of the individual protocol layers are often stacked on top of each other, but they can also perform independent or interconnected functions parallel to each other in individual protocol layers or partial protocol stacks.
  • several protocols can also be counted concurrently or also superimposed to form a protocol layer.
  • VoIP Voice over IP
  • IP Internet Protocol
  • layer 3 of the OSI model [1]: the Internet Protocol
  • HTTP Hypertext Transfer Protocol
  • HTTPS HTTP Secure
  • HTTP and HTTPS themselves are often classified as application protocols.
  • HTTP is usually used above the TCP protocol (Transmission Control Protocol [RFC 793]), which is assigned to the transport layer.
  • TCP Transmission Control Protocol
  • IP - a network layer protocol (ISO/OSI layer 3 according to [1]).
  • other protocols often follow, for example, depending on the transmission medium used (such as a local area network with "Ethernet" IEEE 802.3, which would conform to ISO / OSI protocol layers 1 and 2 according to [1]).
  • VoIP-based data transmission is often performed using, among other things, the RTP protocol ("Realtime Transport Protocol")
  • RTP protocol Realtime Transport Protocol
  • RTP itself is a protocol of the transport layer (ISO/OSI layer 4 according to [1]).
  • RTP is used above the UDP protocol (User Datagram Protocol [RFC 768]), which is also assigned to the transport layer.
  • UDP is usually used above IP - a protocol of the network layer (ISO/OSI layer 3 according to [1]).
  • URIs uniform resource identifiers
  • readable names, which must first be translated into IP addresses by a name service via a so-called name resolution.
  • DNS Domain Names - Implementation and Specification
  • to use the DNS name service, web browsers employ so-called resolvers, which use the DNS protocol to direct requests from the end system of the web browser to one or more directly configured and/or associated DNS servers.
  • these first-level DNS servers often direct requests to further DNS servers.
  • the required answer is not necessarily obtained with the first such request; DNS servers of the subsequent levels can also answer queries incompletely and thereby provide referrals to other DNS servers.
  • the first-level (or a later-level) DNS server then makes further requests until it receives a response from a DNS server that knows the required answer.
  • Responses may carry a time-to-live (TTL), for example an integer indicating how long (in seconds) that response should remain valid.
  • TTL time-to-live
  • DNS resolvers and/or servers implement so-called buffer memories or caches, from which a repeated request for the same translation can be answered without consulting the subsequent stages, as long as the lifetime (TTL) of the stored answer allows this.
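  • For illustration, the following minimal Python sketch (not part of the original disclosure) shows such a TTL-honouring cache as a resolver or name service server might keep it; the class and method names are illustrative assumptions.

```python
import time

class DnsCache:
    """Minimal TTL-honouring cache, as a resolver or DNS server might keep it."""

    def __init__(self):
        self._entries = {}  # name -> (address, expiry timestamp)

    def store(self, name, address, ttl_seconds):
        # Remember the answer together with the moment it stops being valid.
        self._entries[name] = (address, time.time() + ttl_seconds)

    def lookup(self, name):
        # Return the cached answer only while its lifetime (TTL) allows it.
        entry = self._entries.get(name)
        if entry is None:
            return None
        address, expires_at = entry
        if time.time() >= expires_at:
            del self._entries[name]   # expired: the subsequent stages must be consulted again
            return None
        return address

cache = DnsCache()
cache.store("www.example.com", "93.184.216.34", ttl_seconds=300)
print(cache.lookup("www.example.com"))  # answered from the cache while the TTL holds
```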
  • the DNS protocol can be set up on the transport protocol UDP or also on the transport protocol TCP, both protocols usually set up on IP and other underlying protocols as described above.
  • the individual protocols and / or their specific implementations / installations can be configured differently than DNS: they do not necessarily have to communicate via a name service server; none, one or more levels of name service servers can be configured.
  • DNS usually translates a so-called DNS name (often a computer name) into an IP address.
  • directory services and name services can also perform other name resolutions that do not necessarily include classic names or addresses.
  • a directory service could also generally answer a request for, for example, a data value / computer state, and directory services could return, for example, in addition to or instead of addresses, for example, certificates, passwords, or even telephone numbers.
  • the details of the name resolution may differ depending on the data service.
  • Examples of data services with name resolution other than DNS or with DNS supplementary name resolutions include the Session Initiation Protocol (RFC 3261 and RFC 3263), ITU-T H.323 and H.225.0.
  • the name services function can also be directly linked to the data services.
  • Some of the naming services are also used to support mobility, for example in mobile networks or to implement number portability, personal call numbers, service numbers (eg 0800, 0900) etc. in the (mobile) telephone network.
  • name service server stands for a server that realizes the tasks of name services.
  • a name service server is to be understood as a logical function. It does not necessarily require a separate hardware or software system to implement the name service server function. While it may be embodied as a separate component, it may also be embodied as part of the operating system, one or more application components, peer systems, other network elements, and so on. This applies to name service servers of any level; in particular, no name service servers that can be independently identified as separate components can be present.
  • Name services can also be specific to data services: for VoIP, in addition to or in place of a general name service such as DNS, an additional function is realized in the VoIP service which maps user names (for example, represented as a URI) to their current contact address (typically one, but possibly also multiple, IP addresses) and thus enables the reachability of the user.
  • the resolution of the user name into a contact address may be done in one or more steps as described above and may require one or more stages.
  • the transmission delay in the network often has a direct and/or indirect influence on the achieved quality of service.
  • the transmission delay in the network depends on a large number of factors. These include, for example, the actual signal propagation times, the network data rate or, in the case of individual transmission sections, the data rate of the corresponding transmission section, the size of a data packet in relation to the data rate (if, for example, packets are forwarded only, or essentially only, after the entire data packet has been received), delays in the forwarding network components, buffers/queues in the individual components, delays in the components evaluating/implementing the protocols, and so forth.
  • This transmission delay is often measured together in both directions of a communication relationship and hereinafter also referred to as RTT ("Round Trip Time").
  • RTT usually stands for the total time from the sending of a first packet by the sender, via the reception of the first packet by the receiver and the sending of a potential second packet by the receiver in response to the first packet, to the reception of this second packet by the sender of the first packet.
  • In IPv4-based networks this is often measured with the help of a PING command, which sends ICMP packets ("Internet Control Message Protocol", RFC 792) and waits for corresponding ICMP response packets from the other side
  • ICMP packets Internet Control Message Protocol
  • RFC 792 Internet Control Message Protocol
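  • As an illustration of measuring the RTT, the sketch below times a TCP connection setup instead of sending raw ICMP packets (which usually requires elevated privileges); the host and port are placeholders, and the result only approximates what a PING measurement would report.

```python
import socket
import time

def estimate_rtt(host, port=80, timeout=2.0):
    """Rough RTT estimate: time one TCP connection setup (SYN -> SYN/ACK -> ACK)."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.monotonic() - start

if __name__ == "__main__":
    rtt = estimate_rtt("www.example.com")          # placeholder target
    print(f"approximate RTT: {rtt * 1000:.1f} ms")
```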
  • Packet loss is one of the potential causes that can lead to a reduction in the quality of service.
  • packet losses include both completely lost data packets and data packets that were corrupted during transmission or disproportionately delayed and therefore cannot be used.
  • Lost packets are usually initially synonymous with lost user data and / or control information.
  • the TCP protocol often sends several packets before waiting for acknowledgments of receipt from the receiver. It can thus often compensate for packet losses by resending the lost packets without the sender being prevented from sending new (other) packets in the meantime. When packet losses occur, this at least partially avoids complete pauses in the sending of data.
  • FEC Forward error correction
  • FEC-based methods are often used on individual transmission sections (for example, a radio link, satellite link, but also wired transmission sections). Often they are a direct part of the link-layer protocols and are used for all information transmitted on the corresponding transmission section.
  • the second common use case of FEC methods is end-to-end. In this case, the actual transmitter of the data integrates FEC information into the data flow/packet flow.
  • FEC-based methods have the advantage over retransmissions in that they usually enable the receivers to reconstruct lost information without, for example, having to first wait for an RTT to receive retransmissions. Therefore, the use of FEC based methods is well suited for scenarios where transmission delays are important. These include, inter alia, live video and VoIP, where waiting for retransmissions would otherwise often lead to "dropouts" or generally a higher delay in playback.
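  • One simple end-to-end FEC scheme of the kind referred to above is a single XOR parity packet per group of packets; the sketch below is an illustrative assumption, not the specific method of the disclosure, and shows how a receiver can rebuild one lost packet without waiting an RTT for a retransmission.

```python
def xor_parity(packets):
    """Build one FEC packet as the byte-wise XOR of a group of (padded) packets."""
    length = max(len(p) for p in packets)
    parity = bytearray(length)
    for packet in packets:
        for i, byte in enumerate(packet.ljust(length, b"\x00")):
            parity[i] ^= byte
    return bytes(parity)

def recover(surviving_packets, parity):
    """Reconstruct a single missing packet from the surviving ones plus the parity packet."""
    return xor_parity(surviving_packets + [parity])

group = [b"voice frame 1", b"voice frame 2", b"voice frame 3"]
fec = xor_parity(group)
# Packet 2 is lost on the way; the receiver rebuilds it without waiting for a retransmission:
rebuilt = recover([group[0], group[2]], fec)
print(rebuilt.rstrip(b"\x00") == group[1])  # True
```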
  • a content coding can also be chosen so that (even uncorrected) packet losses have only very limited effects on playback (so that, for example, only voice data of a 20 ms period is affected and the loss has no, or only a slight, effect on the subsequent voice playback).
  • transcoding is known as the adaptation of an original content coding by re-encoding, for example, the voice/image data into a content coding that is more suitable, for example, for a network or for the transmission situation in the network.
  • the RTT often has a significant influence on the resulting quality of service.
  • protocols (such as TCP) in part use transmission windows - roughly, for example, the maximum amount of data that can be sent before an acknowledgment of receipt must be awaited.
  • the maximum throughput is then often limited, for example, to 1x the size of the send window per RTT.
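  • In numbers, and purely as an illustration: the achievable throughput is then bounded by roughly the window size divided by the RTT, independent of the raw link capacity.

```python
def max_throughput_bytes_per_s(window_bytes, rtt_seconds):
    # At most one full send window can be acknowledged per round trip.
    return window_bytes / rtt_seconds

# A 64 KiB send window over a 600 ms satellite-like RTT:
print(max_throughput_bytes_per_s(65536, 0.6))  # ~109 kB/s, however fast the link is
```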
  • timeouts are also often used. It could thus happen, for example, that with a very rapidly varying and, in particular, very rapidly rising RTT, protocols assume that heavily delayed packets are lost and initiate retransmissions on their own. Packets that arrive too late for their respective purpose thus in part lead to similar reactions, and to a similarly reduced quality of service, as a packet loss.
  • data objects are requested which, in connection with the use of web browsers, can also be referred to as web objects.
  • a web page displayed to a user on the screen usually consists of several (sometimes dozens or >100) of these objects (such as HTML pages, HTML framesets, images, style sheets, scripts, HTML text objects integrated via scripts, XML data, JSON objects ("Ajax"), etc.).
  • a web browser in use often only learns, by evaluating received web objects, which further web objects are needed to display a web page, and/or the web browser requests only a limited number of web objects in parallel.
  • forwarding buffers are used, among other things, in the network components (such as forwarding routers, traffic shapers, interface drivers, etc.): incoming data packets are often first buffered in queues. This buffering often results in a sometimes not insignificant, sometimes even dramatic, additional transmission delay, which increases the resulting RTT.
  • the queues fill up much more when transmission peaks and general overload occur than at lower load.
  • RTTs of the order of 8 seconds are a significant limitation on the quality of service for, for example, VoIP telephone calls, but also many other applications, such as web surfing.
  • the above-mentioned QoS / prioritization / traffic shaper-based methods can also be used in relation to queue usage, for example in forwarding network components.
  • the packets to be prioritized are then, for example, not queued at all, or are not queued at the end of the queues, and thus experience a much lower delay.
  • these QoS/prioritization/traffic-shaper-based methods, including bandwidth reservations, can use many different methods for recognizing the data packets to be prioritized. These include configured source/destination addresses, marked packets (for example the "TOS" field in IP headers), specific protocols (for example detected via the "Protocol" field in IP headers and/or port numbers in headers of transport protocols), the evaluation of signaling protocols to determine the source/destination addresses, and/or methods for (heuristically) recognizing particular data/application classes (for example based on packet sizes/intervals, specific fields such as version numbers and/or timestamps, sequence numbers in packet headers, etc.).
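  • A minimal sketch of such a recognition step (with illustrative thresholds and port heuristics only, not a definitive classifier) could inspect the TOS/DSCP field, the protocol number and the transport ports of a raw IPv4 packet:

```python
import struct

def classify(ipv4_packet: bytes) -> str:
    """Classify a raw IPv4 packet by its TOS/DSCP marking, protocol number and ports."""
    tos = ipv4_packet[1]
    ihl = (ipv4_packet[0] & 0x0F) * 4              # IP header length in bytes
    protocol = ipv4_packet[9]
    dst_port = None
    if protocol in (6, 17):                        # TCP or UDP: ports follow the IP header
        _src_port, dst_port = struct.unpack_from("!HH", ipv4_packet, ihl)

    if tos & 0xFC:                                 # any non-default DSCP marking
        return "marked (prioritise)"
    if protocol == 17 and dst_port is not None and dst_port >= 16384:
        return "possibly RTP media (heuristic)"
    if protocol == 6 and dst_port in (80, 443):
        return "web traffic"
    return "best effort"

# Hand-built IPv4+UDP header for demonstration (irrelevant fields left zero):
header = bytearray(20)
header[0] = 0x45           # version 4, IHL 5 (20-byte header)
header[1] = 46 << 2        # DSCP EF marking in the TOS byte
header[9] = 17             # protocol = UDP
packet = bytes(header) + struct.pack("!HH", 50000, 40000) + b"payload"
print(classify(packet))    # -> "marked (prioritise)"
```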
  • Transmission interruptions - ie periods in which no data packets can be exchanged between a transmitter and a receiver - are another potential cause for reducing the quality of service.
  • a transmission interruption may relate to one or more protocol layers and may be perceived differently, or not uniformly, on different protocol layers. For example, a (temporarily) slow and/or delayed and/or lossy transmission of data packets by the lower protocol layers may be perceived as an interruption on the higher protocol layers. Conversely, a brief interruption on the lower protocol layers may not even be noticed by the protocol layers above.
  • the cause of transmission interruptions can be manifold.
  • the reception could be disturbed (for example, because transmitters and / or receivers have moved out of the reception area, obstacles have gotten in the way or because of weather conditions such as heavy rain and / or clouds and / or fog).
  • the network or a transmission section could fail, and/or the data exchange between a transmitter and a receiver could be (temporarily) impossible, for example due to overload or high load from other, possibly higher-prioritized, data streams.
  • a (changed) routing in the network can lead to interruptions. This may be the case, for example, when a mobile user changes from one access point (for example, access point, radio mast, base station) of one wireless network to another (also referred to as handover).
  • a transmission interruption can disturb a communication relationship between endpoints (which may equally be referred to as communication endpoints) if it occurs: after the endpoints have established a communication relationship, whereby the data exchange is at least temporarily impaired or prevented by the transmission interruption; or while one endpoint attempts to establish a communication relationship with another endpoint, whereby the transmission interruption delays or (temporarily or completely) prevents the establishment of the communication relationship.
  • the endpoints may perceive a reduction in quality of service or an error situation.
  • data services use protocol hierarchies, for VoIP consisting for example of IP, UDP and RTP, for web surfing, for example, IP, TCP, optionally TLS / SSL and HTTP.
  • the protocol hierarchy resulting solely from RTP, UDP and IPv4 is shown roughly and by way of example in FIG. 5.
  • the protocol hierarchy resulting from HTTP, TCP and IPv4 is shown by way of example in FIG. 6.
  • the "size" of each layer provides a rough indication of the overhead that can be created by the protocol headers of each layer.
  • Each of these protocols uses its own protocol header, which quickly adds up to a large overhead when using multiple protocols or a header hierarchy.
  • the existing options for header compression are usually used only on individual transmission sections (for example, in a simple case, between two computers directly connected via a physical transmission medium or a layer 2 network). In this case, however, they reduce the header overhead only on the corresponding transmission section.
  • the header compression described in [9] (often called CRTP) allows, for example when used over one transmission section, the headers of the protocols RTP, UDP and IP to be compressed together, thus making it possible to compress these headers, totalling on the order of 40 bytes, down to the order of 2-4 bytes.
  • the existing header compression capabilities allow only a few of the affected headers to be compressed.
  • [9] describes that as an alternative to the common compression of RTP, UDP, IP, only the RTP header can be compressed for only one transmission segment. If only the RTP header is compressed, this header compression can also be used end-to-end (for example, directly from one phone to another).
  • the uncompressed maintenance of the UDP and IPv4 headers in this case allows intermediary network components (such as routers) to route the packets despite the compressed RTP headers.
  • the efficiency of the reduction also becomes correspondingly smaller.
  • the uncompressed UDP and IP headers would still be on the order of 28 bytes in size.
  • the RTP header is reduced in the order of 12 bytes to about 2 to 4 bytes. The efficiency of the protocol decreases.
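  • The principle behind such header compression can be illustrated with the following toy sketch in the spirit of CRTP/ROHC (the names and packet formats are invented for the example): the first packet of a flow establishes a shared context containing the largely static header fields, and later packets carry only a short context identifier plus the changing sequence number.

```python
import struct

class HeaderCompressor:
    def __init__(self):
        self.contexts = {}      # static header bytes -> context ID
        self.next_cid = 0

    def compress(self, static_header: bytes, seq: int) -> bytes:
        if static_header not in self.contexts:
            cid = self.contexts[static_header] = self.next_cid
            self.next_cid += 1
            # Send the full header once so the decompressor can build the same context.
            return b"FULL" + bytes([cid]) + static_header + struct.pack("!H", seq)
        cid = self.contexts[static_header]
        return b"COMP" + bytes([cid]) + struct.pack("!H", seq)

class HeaderDecompressor:
    def __init__(self):
        self.contexts = {}      # context ID -> static header bytes

    def decompress(self, packet: bytes):
        kind, cid = packet[:4], packet[4]
        if kind == b"FULL":
            self.contexts[cid] = packet[5:-2]
            return self.contexts[cid], struct.unpack("!H", packet[-2:])[0]
        return self.contexts[cid], struct.unpack("!H", packet[5:7])[0]

static = b"\x45" * 40                        # stand-in for the combined IP/UDP/RTP headers
comp, decomp = HeaderCompressor(), HeaderDecompressor()
first = comp.compress(static, seq=1)         # long packet: establishes the context
later = comp.compress(static, seq=2)         # 7 bytes instead of ~40
print(decomp.decompress(first)[1], decomp.decompress(later)[1])  # 1 2
print(len(first), len(later))                                    # 47 7
```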
  • the object of the invention is to provide improved technologies for optimizing packet-oriented data transmission between communication end points in a network with communication end points.
  • optimization is aimed at one or more of the following: reducing the effects of transmission delays, reducing the effects of packet loss, reducing the effects of transmission interruptions, and reducing the impact and/or overhead of data transmission on all or some of the networks used.
  • a method of optimizing data transmission between communication endpoints in a network having communication endpoints comprising the steps of:
  • a waiting message initiating a waiting state for the communication endpoint and/or a pseudo-address, which is selected locally by the optimizer arrangement and differs from the address associated with the further communication endpoint, in order to initiate the packet-oriented data transmission between the communication endpoint and the further communication endpoint.
  • the address may be determined based on the information contained in a name resolution request. This can be done by providing the same or a modified name service request to a name service server or to an optimizer of the same or another optimizer arrangement. Also, the address may be predetermined by static rules or determined by applying dynamic rules. The address may designate or refer to the further communication endpoint named in the name service request or an optimizer of this or another optimizer arrangement.
  • a waiting message can be formed with:
  • the initiation of a wait state can be supported / realized / signaled by at least one name service and / or at least one data service protocol. This form of delaying a response is independent of whether the at least one response sent was generated by the optimizer, received by another optimizer, received by a name service and / or another component.
  • a method for optimizing data transmission between communication endpoints in a network having communication endpoints, comprising the steps of: forming a communication relationship between a communication endpoint and a further communication endpoint, wherein the communication relationship is configured for a packet-oriented data transmission in which a data stream comprising exchanged data packets is formed,
  • Optimizing the packet-oriented data transmission between the communication endpoint and the further communication endpoint by providing information about a communication behavior of the communication endpoint and / or the further communication endpoint in a future name resolution process by means of the optimizer arrangement as part of an optimization mechanism for the packet-oriented data transmission.
  • a third aspect of the invention there is provided a method of optimizing data transmission between communication endpoints in a network having communication endpoints, the method comprising the steps of:
  • the optimization mechanism is executed logically separated from the communication endpoint and the further communication endpoint,
  • the optimization mechanism is executed in protocol layers above protocol layer 2, and
  • an advantageous embodiment of the invention can provide that, logically between at least two of the optimizers or between the one optimizer and the further communication endpoint, there is at least one system that does not participate in the optimization on the considered protocol layer(s). It can be provided that the optimizers capture as large a part of the data transmission in the network as possible and/or cover the subarea(s) of the network that are particularly problematic for the quality of service of the data transmission.
  • a fourth aspect of the invention there is provided a method for optimizing data transmission between communication endpoints in a network having communication endpoints, the method comprising the steps of:
  • transmission interruptions are periods in which, for example, from the point of view of a particular protocol layer, no data packets can be exchanged between two communication end points, for example a transmitter and a receiver.
  • Transmission interruptions can occur at the beginning of, during, or outside an existing communication relationship. They can - as described above - have many causes.
  • the planned simulation of the continuation of a communication relationship can be applied to an existing communication relationship, but in particular also to a communication relationship to be established. In the latter case, despite transmission interruption, a state of the communication relationship is simulated.
  • a preferred embodiment of the invention provides to apply the optimizations only for the establishment of a communication relationship or to apply the optimizations only during an existing communication relationship or to apply the optimizations for establishing a communication relationship and during an existing communication relationship.
  • a fifth aspect of the invention there is provided a method of optimizing data transmission between communication endpoints in a network having communication endpoints, the method comprising the steps of:
  • a sixth aspect of the invention there is provided a method of optimizing data transmission between communication endpoints in a network having communication endpoints, the method comprising the steps of:
  • One or more arbitrary optimization mechanisms can be used in combination.
  • aspects 3-6 of the invention individually or in combination, to the information exchange in the context of data services and / or name services, to the exchange of information between two optimizers and / or two components of an optimizer for the purpose of name resolution, Identify pseudo-addresses, delay responses, and / or provide information about communication behavior.
  • a further aspect of the invention relates to a computer program product with program code which is optionally stored on a computer-readable storage medium and is suitable for executing on a computing device a method according to at least one of the preceding aspects.
  • Advantageous embodiments of the invention are the subject of the dependent subclaims. The following embodiments of the invention are initially assigned to individual aspects of the invention in order to simplify the presentation. However, they may find application singly or in any combination with the various aspects of the invention, thus providing advantageous embodiments of the invention.
  • Locally valid addresses selected as pseudo-addresses may preferably be restricted in scope to an end system or a part of a network and may, among other things, be selected from the ranges for IPv4 addresses according to RFC 1918 or the corresponding ranges for IPv6 addresses according to RFC 4291. Furthermore, it may be advantageous to select reserved IPv4 or IPv6 addresses, such as IPv4 class E addresses. It may also be advantageous to select HIP, cryptographically formed, multicast, broadcast or anycast addresses as pseudo-addresses.
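  • A minimal sketch of such a pseudo-address selection is given below; the choice of sub-ranges from the RFC 1918 blocks and all names are illustrative assumptions, not requirements of the disclosure.

```python
import ipaddress
import itertools

# Illustrative pools of locally valid addresses drawn from the RFC 1918 private ranges:
_PSEUDO_POOL = itertools.chain.from_iterable(
    ipaddress.ip_network(net).hosts()
    for net in ("10.254.0.0/16", "192.168.254.0/24")
)
_assigned = {}   # resolved name -> pseudo-address handed to the local communication endpoint

def pseudo_address_for(name: str) -> str:
    """Return a locally selected pseudo-address that stands in for `name`.

    The pseudo-address differs from the real address of the further communication
    endpoint; the optimizer later maps traffic sent to it back to the real target.
    """
    if name not in _assigned:
        _assigned[name] = str(next(_PSEUDO_POOL))
    return _assigned[name]

print(pseudo_address_for("www.example.com"))  # e.g. 10.254.0.1
print(pseudo_address_for("www.example.com"))  # the same pseudo-address on repeated lookups
```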
  • - is set to one second or another value of not more than one minute.
  • An advantageous embodiment provides that a condition takes into account the address to be resolved, the assumed transport protocol and / or the suspected application protocol.
  • optimizing the packet-oriented data transmission further comprises at least one step selected from the following group of steps:
  • optimizing the packet-oriented data transmission further comprises at least one step selected from the following group of steps:
  • NAT Network Address Translator
  • a preferred development of the invention provides that the provision of the information about the communication behavior comprises at least one step selected from the following group of steps: - passive monitoring of the communication relationship,
  • optimizing the packet-oriented data transmission comprises steps for generating a plurality of requests by the optimizer arrangement and / or for synthesizing responses to the requests in a response to the communication endpoint and / or the further communication endpoint.
  • An advantageous embodiment of the invention provides that optimizing the packet-oriented data transmission comprises a step for suppressing subsequent requests.
  • optimizing the packet-oriented data transmission comprises a step for predicting a future request of the communication endpoint and / or the further communication endpoint.
  • optimizing the packet-oriented data transmission comprises a step for preprogramming an expected request of the communication end point and / or of the further communication end point.
  • optimizing the packet-oriented data transmission comprises a step for repeating lost requests or replies and / or redundantly transmitting at least one packet of the requests and / or replies.
  • optimizing the packet-oriented data transmission comprises a step for exchanging information between optimizers from the optimizer arrangement.
  • An advantageous embodiment of the invention provides that optimizing the packet-oriented data transmission comprises at least one step from the following group of steps: data service optimization; an observation of a data service or the parsing of at least one data packet of a data service.
  • An advantageous embodiment of the invention provides that the data service optimization or observation or parsing concerns one of the protocols HTTP, SOAP, RTSP, SIP, XMPP, Flash or other application protocols.
  • Analyzing an HTML page, a style sheet, an XML document, an SDP message (for example, according to RFC 2327 or 4566), a SOAP message, a MIME message, an FTP, SIP, HTTP, RTSP, Flash or XMPP message, or certificates for content that can be interpreted as DNS names or as names of other name services;
  • Inferring the application protocol used from names that designate a service or service provider, which can be done using a statically configured and/or dynamically determined table;
  • optimizing the packet-oriented data transmission further comprises the following steps:
  • optimizing the packet-oriented data transmission further comprises at least one step selected from the following group of steps:
  • Delegating the determination of an assigned response to another optimizer; transferring the determined assigned response between optimizers of the optimizer arrangement, wherein the transmission can run between two optimizers, from one optimizer to several optimizers, from several optimizers to one optimizer and/or from several optimizers to several optimizers,
  • An expedient embodiment of the invention provides that the optimization of the packet-oriented data transmission takes place by interaction of an optimizer with a regular name service server, wherein an optimizer performs at least one of the following steps:
  • the optimizer makes an expected request to a regular name service server
  • optimizing the packet-oriented data transmission further comprises at least one step selected from the following group of steps:
  • optimizing the packet-oriented data transmission comprises a step for generating replies to name resolution requests, wherein the following steps are furthermore provided:
  • a waiting message initiating a waiting state for the communication endpoint and/or a pseudo-address, which is selected locally by the optimizer arrangement and differs from the address associated with the further communication endpoint,
  • optimizing the packet-oriented data transmission further comprises at least one step selected from the following group of steps:
  • Locally valid addresses selected as pseudo-addresses may preferably be limited in scope to an end system or a part of a network and may, inter alia, be selected from the ranges for IPv4 addresses according to RFC 1918 or the corresponding ranges for IPv6 addresses according to RFC 4291. Furthermore, it may be advantageous to select reserved IPv4 or IPv6 addresses, such as IPv4 class E addresses. It may also be advantageous to select HIP, cryptographically formed, multicast, broadcast or anycast addresses as pseudo-addresses.
  • - is set to one second or another value of not more than one minute. It may be advantageous to determine a pseudo-address only as a function of at least one static or dynamic condition. An advantageous embodiment provides that a condition takes into account the address to be resolved, the assumed transport protocol and / or the suspected application protocol.
  • a further advantageous embodiment of the invention provides for combining an optimizer or a component of an optimizer with a Network Address Translator (NAT).
  • NAT Network Address Translator
  • a preferred development of the invention provides that the optimization mechanism is executed in at least one protocol layer selected from the following group of protocol layers of the protocol layer model: network layer, transport layer and application layer.
  • optimizing the packet-oriented data transmission further comprises at least one step selected from the following group of steps:
  • An advantageous embodiment of the invention provides that a step for dynamically adapting the optimization mechanism to the packet-oriented data transmission is provided.
  • dynamic adaptation may take into account the past, present, and/or expected future characteristics of the packet-oriented data transmission and/or of the data path and/or of selected parts of the data path. Optimization can thus be applied only to parts of the data path. Also, the extent of the optimization (such as the amount and/or type of redundancy and/or the FEC and/or ARQ techniques used and/or the application and/or form of interleaving) may be adjusted. Furthermore, it may be advantageous to recognize the types of data packets and/or the communication protocols used and/or the type of communication relationship and to adapt the optimization accordingly.
  • a further development of the invention provides that the dynamic adaptation comprises a step for measuring transmission characteristics for the communication relationship and a step for adapting the optimization mechanism as a function of the measured transmission characteristics.
  • the measured packet loss rate and / or the measured transmission delay and / or the measured transmission rate can be taken into account.
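  • As an illustration of such an adaptation, the sketch below chooses the amount of FEC redundancy from the measured packet loss rate and RTT; the thresholds are invented for the example and would in practice be configured or learned.

```python
def redundancy_ratio(measured_loss_rate: float, measured_rtt_s: float) -> float:
    """Pick how much FEC redundancy to add, based on measured path characteristics."""
    if measured_loss_rate < 0.001 and measured_rtt_s < 0.05:
        return 0.0     # clean, low-delay path: rely on ordinary retransmissions
    if measured_loss_rate < 0.02:
        return 0.10    # light protection
    if measured_loss_rate < 0.10:
        return 0.25
    return 0.50        # heavily disturbed path

print(redundancy_ratio(measured_loss_rate=0.005, measured_rtt_s=0.600))  # -> 0.1
```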
  • optimizing the packet-oriented data transmission comprises a step for prioritizing the data packets of the data stream of the packet-oriented data transmission.
  • a preferred embodiment of the invention provides that the optimizer marks the packets according to their priority.
  • the optimizer manages at least one dedicated queue for the data packets and uses it to prioritize the data packets itself; this can take place before or during transmission, or after/upon reception of the data packets.
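  • Such a dedicated queue with prioritization can be realized, for example, as in the following sketch (an illustrative assumption; the priority values and class names are not part of the disclosure):

```python
import heapq
import itertools

class PriorityScheduler:
    """One heap acting as a set of dedicated queues: lower priority value = sent first."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()   # preserves FIFO order within a priority class

    def enqueue(self, packet: bytes, priority: int):
        heapq.heappush(self._heap, (priority, next(self._counter), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = PriorityScheduler()
sched.enqueue(b"bulk download chunk", priority=5)
sched.enqueue(b"VoIP frame", priority=0)    # marked as delay-sensitive
print(sched.dequeue())                       # b'VoIP frame' leaves the queue first
```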
  • the embodiment of the invention described here can be used not only in connection with the first but also with the other aspects of the invention.
  • a preferred embodiment of the invention provides that a step is provided for processing an optimization data stream formed in optimizing the packet-oriented data transmission without reverse conversion by a receiving communication endpoint.
  • the redundant information can be supplemented or constructed and/or transmitted in such a way that, for the receiving communication endpoint, it cannot be distinguished from data packets of a non-optimized data stream. It can also be provided that the redundant information is supplemented or constructed and/or transmitted in such a way that at least parts of the redundant information do not reach, or do not disturb, the receiving communication endpoint.
  • optimizing the packet-oriented data transmission comprises a step for compressing header data of one or more data packets from the data stream of the packet-oriented data transmission.
  • the amount of transmitted redundant information is chosen so that it does not, or does not substantially, exceed the amount of data saved by compression, wherein the comparison can be performed for a single packet, for several packets together and/or over a time interval.
  • a development of the invention provides that optimizing the packet-oriented data transmission comprises a step for retaining data from the data stream of the packet-oriented data transmission.
  • optimizing the packet-oriented data transmission comprises steps for locally generating data in the optimizer arrangement and for transmitting the locally generated data to the communication endpoint and / or the further communication endpoint.
  • An advantageous embodiment also provides that the retained or locally generated data are sent during a transmission interruption.
  • this transmission during the transmission interruption in terms of time delay and / or data rates and / or packet numbers and / or pauses between the packets is designed to bridge an expected duration of a transmission interruption or as long as possible and / or To be able to bridge the consequences of transmission interruptions.
  • An advantageous embodiment provides to determine the amount of data transmitted per unit of time as a function of the total retained quantity and / or the arrival of further data (amount per time interval) and / or the expected or calculated or predicted duration of the transmission interruption. It may also be advantageous to vary the transferred amount over time.
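  • A very simple pacing rule of this kind is sketched below; the packet size and the expected interruption length are placeholder parameters that would in practice come from measurements, heuristics or configuration.

```python
def pacing_plan(retained_bytes: int, expected_gap_s: float, packet_size: int = 1200):
    """Spread retained data evenly over an expected transmission interruption.

    Returns (number of packets to send, interval between packets in seconds).
    """
    packets = max(1, retained_bytes // packet_size)
    return packets, expected_gap_s / packets

packets, interval = pacing_plan(retained_bytes=120_000, expected_gap_s=8.0)
print(packets, round(interval, 3))   # 100 packets, one every 0.08 s
```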
  • a preferred development of the invention provides that these retained data are user data and / or control data from the point of view of the optimized protocol layers.
  • One embodiment of the invention provides that the retained or locally generated data constitutes user data from the point of view of the protocol(s) involved in the optimization.
  • a further embodiment of the invention provides that the retained or locally generated data is control data from the point of view of the protocol(s) involved in the optimization.
  • a preferred development of the invention also provides that the amount of retained data is adapted to the length of the expected and/or to-be-tolerated transmission interruptions. Moreover, it is a preferred embodiment of the invention to adapt the amount of retained data to an acceptable retention delay. It is a preferred embodiment of the invention to determine and/or influence the length of the expected transmission interruptions and/or the acceptable retention delays by optimizer configuration settings, control signals, or heuristics, for example based on values measured in the past and/or in other situations and/or in other networks and/or by other optimizers.
  • a further preferred development provides that, instead of and/or in addition to the retention of data by the optimizer, additional data are requested and/or data are requested ahead of time.
  • a preferred embodiment of the invention provides that optimizing the packet-oriented data transmission comprises a step for predicting interruption characteristics of the transmission interruption of the communication relationship.
  • a preferred embodiment of the invention provides that optimizing comprises a step for additional and / or premature requesting of data from the data stream of the packet-oriented data transmission.
  • optimizing the packet-oriented data transmission comprises a step for coherently compressing headers of a plurality of data packets of the data stream of the packet-oriented data transmission.
  • several headers of a single data packet are compressed contiguously, or one header each of several data packets is compressed contiguously.
  • by contiguous compression is meant the joint - successive or simultaneous - consideration of the said headers for compression, which may be independent of the temporal relationship of the data packets and/or the spatial arrangement of the headers.
  • An advantageous embodiment of the invention provides that the compression of the at least one header is executed only for a part of the data packets of the data stream of the packet-oriented data transmission.
  • optimizing the packet-oriented data transmission comprises a step of exchanging additional information comprising one or more packets selected from the following group of packets: existing control packets, additional control packets and additional data packets.
  • the compression of the at least one header of the at least one data packet comprises a step of at least partially replacing the at least one header by one or more context identifiers.
  • a preferred embodiment of the invention provides that compressing the at least one header of the at least one data packet comprises a step of at least partially compressing at least one header selected from the following group of headers: IPv4 headers, IPv6 headers, Ethernet headers, UDP headers, RTP headers and TCP headers.
  • compressing the at least one header of the at least one data packet comprises a step of incorporating information in compression selected from the following group of information: source address information and destination address information.
  • An expedient development can provide that addresses of one type are converted into addresses of a different type.
  • An advantageous embodiment of the invention provides that optimizing the packet-oriented data transmission comprises a step for selecting an algorithm by the compressor for a unidirectional transmission path between the compressor and the decompressor.
  • a further development of the invention preferably provides that optimizing the packet-oriented data transmission comprises a step for compressing user data of the at least one data packet.
  • optimizing the packet-oriented data transmission comprises a step for applying a protocol enhancement method.
  • the application of a protocol enhancement method can be provided as an advantageous development also in connection with the other aspects of the invention.
  • optimizing comprises a step for nested optimization of the packet-oriented data transmission with the aid of a plurality of optimizers of the optimizer arrangement.
  • the recognition of the optimization option furthermore comprises at least one step selected from the following group of steps:
  • An advantageous embodiment of the invention provides that the optimization possibility is detected during the application of a preceding optimization and an optimization mechanism. This includes, in particular, the recognition of another possibility for optimization than the preceding one; the recognition of the change of the optimization possibility; recognizing that optimization can still be applied; recognizing that applying optimization will produce better or worse results than the previous one; the recognition of the omission of an optimization possibility; and / or determining the parameters of an optimization option.
  • a development of the invention provides that the recognition of the optimization option furthermore comprises at least one step selected from the following group of steps:
  • the selection of the optimization comprises a step for selecting a header compression.
  • a preferred embodiment of the invention provides that a step for testing compressible headers is provided.
  • a preferred embodiment of the invention provides that a step for testing is repeated systematically with differently compressed headers.
  • An advantageous embodiment of the invention provides that it is concluded from the testing of compressed headers which mechanisms for header compression can be applied.
  • optimizing comprises a step for nested optimization of the packet-oriented data transmission of a plurality of optimizers of the optimizer arrangement.
  • Such nesting may preferably have two or more nestings.
  • optimizer arrangements can also be arranged in series / in series and / or in parallel. It may also be advantageous to combine parallel, serial and / or nested optimizer arrangements.
  • An advantageous embodiment provides that at least two optimizers of these optimizer arrangements exchange information and/or jointly use information contained in an optimized data stream (for the purpose of at least one of their optimization functions). As a result, an embodiment usable in the various aspects of the invention is formed.
  • Conditional suppression of repeated packets the condition being that the repeated packet is only suppressed if it is received within a statically configured and / or dynamically determined period of time;
  • Conditionally suppressing repeated packets the condition being that the repeated packet is only suppressed if it is received outside of a statically configured and / or dynamically determined period of time;
  • Conditional suppression of repeated packets the condition being that the repeated packet is suppressed only if it is a request
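  • The conditional suppression of repeated packets within a configured time period can be sketched as follows (the window length and the use of a hash as packet fingerprint are illustrative assumptions):

```python
import time

class DuplicateSuppressor:
    """Suppress a repeated packet only if the repetition arrives within `window` seconds."""

    def __init__(self, window_seconds: float = 1.0):
        self.window = window_seconds
        self._last_seen = {}   # packet fingerprint -> time the packet was last forwarded

    def should_forward(self, packet: bytes) -> bool:
        key = hash(packet)
        now = time.monotonic()
        last = self._last_seen.get(key)
        self._last_seen[key] = now
        if last is not None and (now - last) <= self.window:
            return False       # repetition inside the configured period: suppress it
        return True            # first sighting, or repetition outside the period

supp = DuplicateSuppressor(window_seconds=0.5)
print(supp.should_forward(b"DNS query id=17"))  # True  (forwarded)
print(supp.should_forward(b"DNS query id=17"))  # False (the retry arrived too quickly)
```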
  • An advantageous embodiment of the invention provides that optimizing the packet-oriented data transmission comprises a step for applying the optimization mechanism to selected data packets of the data stream, wherein the selected data packets are selected from at least one selection criterion from the following group of selection criteria:
  • a preferred embodiment of the invention provides that optimizing the packet-oriented data transmission comprises a step of jointly applying the optimization mechanism to a plurality of data packets of the data stream.
  • optimizing the packet-oriented data transmission further comprises the following steps: determining whether a disruption or interruption of the packet-oriented data transmission with respect to the optimizing is expected in a communication path associated with the communication endpoint or in a communication path associated with the further communication endpoint, and adjusting the optimization mechanism to the particular communication path.
  • optimizing the packet-oriented data transmission further comprises the following steps: determining a type of the exchanged data packets, determining the application protocol of the exchanged data packets and adapting the optimization mechanism to the particular type of data packets exchanged.
  • optimizing the packet-oriented data transmission further comprises the following steps: determining a current load for the communication relationship and adapting the optimization mechanism to the specific load.
  • An advantageous embodiment of the invention provides that optimizing the packet-oriented data transmission further comprises at least one of the following steps: unidirectional, backward-channel-free optimization and bidirectional optimization.
  • a further development of the invention provides that the optimization of the packet-oriented data transmission is carried out combined with at least one step from the following group of steps: performance-enhancement methods, data compression/decompression, data encryption and data transcoding.
  • the communication relationship is formed comprising a point-to-multipoint or multipoint-to-multipoint data communication.
  • a preferred embodiment of the invention provides that optimizing the packet-oriented data transmission comprises a step for selecting an optimizer as a representative of one, several or all optimizers of the optimizer arrangement.
  • the selection of the optimizer comprises a step for temporally and / or spatially dynamic selection of the optimizer as a representative.
  • An advantageous embodiment of the invention provides that optimizing the packet-oriented data transmission comprises a step of utilizing multiple network paths of the network for transmitting redundant information.
  • a preferred embodiment of the invention provides that optimizing the packet-oriented data transmission comprises a step of using several network paths of the network for forming a data load distribution for the packet-oriented data transmission.
  • a preferred embodiment of the invention provides that optimizing the packet-oriented data transmission comprises a step for controlling an optimization functionality of the optimization mechanism as a function of network signals.
  • An expedient embodiment of the invention provides that the communication between two optimizers takes place through a tunnel formed by means of another protocol.
  • a preferred embodiment uses IP, UDP, TCP, IPsec, SSH, SSL, SCTP, DCCP, ICMP, HTTP, HTTPS, SIP, FTP, NNTP, DTN, DNS, RTSP, SOAP, XMPP, XML, and/or a peer-to-peer overlay as a tunnel.
  • the optimization is applied to networks and / or subnetworks and / or difficult communication paths, wherein the networks and / or subnetworks are formed by at least one of the following networks: Wide Area Networks (WAN), Metropolitan Area networks (MAN, DVB-C), internetworks such as IP networks or delay-tolerant networks (DTNs), local area networks (LAN) such as Ethernet and WLAN, PDH, SDH, DVB-C and / or ATM networks.
  • Networks also include telephone networks, radio networks (such as mobile radio, WiMax, 3G, UMTS, HS(D)PA, DVB-T, LTE, UWB, OFDM, 802.11b/a/g/n/p/s, among others), satellite networks (such as DVB-S/S2, DVB-RCS, S-band, proprietary satellite links, space radio networks), cable (broadcast) networks (such as cable TV networks, DSL, fiber-to-the-home, ...), etc., but also overlay networks such as peer-to-peer networks, and also combinations of arbitrary networks of different types.
  • Another expedient embodiment of the invention provides that the one or more data services optimized by the optimization comprise at least one of the following services: telephony, video calling and (video) conferencing over the Internet (hereinafter collectively referred to as "VoIP"), audio/video streaming, access to web pages ("web surfing"), HTTP-based data transfer, HTTPS-based data transfer, WAP, web services, network management, access to a file system, collaborative editing of documents, presentations, file transfers, sending, receiving and/or editing (including deleting, sorting, filing) of e-mail, chat, peer-to-peer applications, remote access to computers, remote control of systems, etc.
  • a packet-oriented data transmission apparatus or system may be provided between communication endpoints in a network having communication endpoints, having an optimizer arrangement configured to carry out an optimization mechanism for the packet-oriented data transmission according to a method of one of the aforementioned embodiments.
  • Fig. 5 roughly a protocol hierarchy for RTP, UDP and IP and
  • Fig. 6 roughly a protocol hierarchy for HTTP, TCP and IP.
  • an optimizer may be a component which inserts an optimization into the exchanged data (where "insert" is understood here and in the following to also mean generally applying an optimization to the exchanged data).
  • one or more optimizer components evaluate/use the inserted optimization and usually restore the original data stream wholly or, often, in large parts, and/or restore the desired semantic effect of the originally intended data stream, often in large parts.
  • the communication with the one or two optimizer components is described below in each case only for one communication direction, in order to make the description clearer.
  • the optimization in the reverse direction is also possible if applications and transmission networks require this, it appears advantageous and / or the arrangement and / or components are designed accordingly.
  • An optimizer may be a component known to the endpoints of the communication relationships, such as a router, a server, a proxy or another intermediate system from the perspective of the communicating endpoints, or it may intervene "transparently" in the communication relationships, that is, without its existence having to be communicated to, or even known by, the endpoints. The communicating endpoints are also referred to synonymously as communication endpoints.
  • An optimizer may be distributed among multiple logical and/or physical components. Also, an optimizer may be partially or wholly part of one or more of the end systems, or executed on one or more of the end systems. In particular, an optimizer, or a part thereof, can be an additional software and/or hardware component of other system components (routers, servers, proxies, intermediate systems, etc.).
  • Examples of optimizations and of the optimizers performing them are header compression (executed by suitable compressors) and transmission optimizations for dealing with delays, packet losses and/or interruptions.
  • the terms packet and data packet are used synonymously, and regardless of whether a packet/data packet contains user data and/or control information.
  • These data packets follow a transmission path (also referred to as a communication path, data path, or short path) that has been dynamically selected, statically configured, or otherwise determined, for example, by a routing protocol on the network layer. From one endpoint to another, such a path often passes through one or more intermediate systems (eg, routers, network nodes, proxies) through one or more transmission networks.
• a path can be symmetric: then packets from one endpoint A to another endpoint B pass through the same intermediate systems as packets from endpoint B to endpoint A, but in reverse order.
• a path may be asymmetric if the intermediate systems traversed in the opposite directions differ. Such (a)symmetry can also be considered - at a coarser level - for transmission networks.
  • a path is in the simplest case composed of a sequence of sections / transmission sections ("hops"), wherein a section connects two adjacent intermediate systems (for example with regard to a protocol layer) to one another
• a path can also contain several alternative paths, each of which is potentially or actually followed by a part of the data packets following the path. In this text, the terms communication path and data path are used synonymously with and equivalent to the term path.
• a difficult communication path is a component of the communication paths between A and B whose properties may have a detrimental effect on the achievable quality of service, as described above using the examples of transmission delay, packet loss and interruptions.
• these characteristics of the communication path may be due to the characteristics of a single network component and/or a single network section and/or to the combination of the properties of several network components and/or network sections; in particular, none of the involved network components/network sections alone needs to result in a difficult communication path.
• optimizers that are preferably located on the side of the respectively considered difficult communication path facing endpoint A may play a different role with respect to possible or actual data flows than optimizers on the side facing the other endpoint B.
  • the characteristics of a difficult communication path may be temporary (almost never, rarely, occasionally, frequently, etc.) while the same communication path has normal characteristics at other times.
  • the invention aims at optimizations for data and / or name services.
  • the following explanations are subdivided into three main sections, which cover complementary and arbitrarily combinable aspects of the invention: I. Optimizations for the use of name services, II. General optimizations for handling packet losses, high transmission delays (RTT), interruptions etc., as well as III. Further optimizations by compression of protocol headers.
  • Figures 1 to 3 show several system configurations with two communicating endpoints A and B, which may be source and / or sink for payload.
  • the two endpoints are interconnected via a series of transmission networks.
  • Connected means that A and B can exchange data packets. These data packets follow a path / communication path (also referred to as path for short) that has been dynamically selected, statically configured or otherwise determined, for example, by a routing protocol at the network layer.
  • FIG. 3 further shows by way of example different locations in some intermediate systems and / or the end points in which optimizations can be performed by optimizers.
• Fig. 1 a) illustrates an arrangement with two endpoints (A and B) and two optimizers (X-1 and X-2) separate from these.
• the endpoints send and receive data packets unchanged.
• the data packets sent by an endpoint (for example, from A to B) can be picked up by X-1 and optimized before being forwarded to X-2, so that the optimization O1 can be used for the transmission network N-X.
  • the optimized data packets are recognized by X-2, exploited on an as-needed basis, and the data packets are restored to their original form (as sent by A), either largely or completely, and then forwarded to B.
  • the data packets sent by X-2 are not or not significantly distinguishable from those sent by A, or need not be separately distinguished from them.
  • the optimization of the data packets on the path section by the network N-X is transparent to the endpoints.
• the optimization may be applied to all data packets and/or to all data packets exchanged between two or more endpoints and/or two or more instances by a particular application and/or within the context of a communication relationship.
• the adapted data packets are here and in general simply referred to as "optimized data packets" for better readability of the text. However, this does not only cover an adaptation per data packet, but in general any adaptation of the corresponding stream of data packets: modification of the individual packets, modification of selected packets, insertion of additional packets or other transmission of additional information, but also, for example, special treatment and/or prioritization and/or lower delay (for example by queuing) and/or a targeted delay and/or suppression and/or duplication/multiplication and/or one of the other optimizations of the packets described below.
• in Fig. 1 b), one of the optimizers (X-1) is integrated into one of the endpoints (A).
• the logical functions of endpoint A and optimizer X-1 may remain unchanged. In this way, no external component is required on the side of A, and from the point of view of the optimization O1, the transmission networks N-A and N-X coincide. How the integration of X-1 into A is done is up to the local implementation. It is conceivable that the two functions are implemented independently of each other, such that an independent process and/or a driver of the operating system implements the optimization function. It is also possible that a plug-in card and/or the firmware on an on-board unit performs these tasks.
  • the optimizer may be executed for one, several and / or all applications and / or one, several and / or all communication relationships of one, several and / or all applications.
  • the counterpart (optimizer X-2) corresponds to that of Fig. 1 a).
• Fig. 1 c) illustrates an arrangement in which both optimizers X-1 and X-2 are integrated into the endpoints, as just described for optimizer X-1.
• the remarks made for optimizer X-1 apply analogously to optimizer X-2.
  • the implementations in the two endpoints may correspond to each other or be designed in parts or quite differently.
• Fig. 1 d) shows an arrangement in which two independent optimizations (O1 and O2) take place sequentially between the endpoints A and B on different network sections (N-X1 and N-X2).
• the data packets sent by endpoint A are first transmitted via the transmission network N-A to optimizer X1-1 and then optimized by optimizer X1-1.
• the optimized data packets are sent via the transmission network N-X1 to the optimizer X1-2, received by the latter, partially or completely restored, and then forwarded via the network M to optimizer X2-1.
• there the data packets are again optimized and sent via the network N-X2 to optimizer X2-2, which receives the data packets, partially or completely restores them and forwards them to endpoint B.
  • the two optimizations O1 and O2 can use the same, partially or completely different methods and algorithms that operate on the same and / or (partially) different (parts of) packets and / or packet headers and / or payload data or control data.
  • any number of such optimizations can be present in a specific arrangement.
  • one and the same optimizer may be involved in more than one optimization.
• the optimizers X1-2 and X2-1 may be implemented in the same system, in which case no transmission network M is in fact present.
• the number of optimizations can vary with the selected path through the transmission network or networks: between two endpoints A and B this can happen, for example, over time if, due to routing decisions, the path changes during an existing communication relationship.
  • one or more of the optimizers can also be integrated into the end points.
• optimizers can also be arranged in parallel. This may be the case, for example, if the routing through the network(s) changes (as just described), but also if the routing splits the data packets of a communication relationship over several paths (for example, for load distribution in the context of traffic engineering). In such a case, different data packets take different paths and may be affected by different optimizations; it is also possible, for example, that no optimization takes place on some of these paths.
  • two or more optimizations may be nested.
• the transmission network N-X2 is surrounded by the optimizers X2-1 and X2-2, which implement the optimization O2.
• the network section consisting of the transmission networks N-X1a, N-X2 and N-X1b is surrounded by the optimizers X1-1 and X1-2, which implement the optimization O1. Since the optimization O2 occurs within O1, the optimizers X2-1 and X2-2 work with partially optimized data packets.
• a data packet P which is sent from an endpoint A is initially transmitted un-optimized over the transmission network N-A and then optimized in optimizer X1-1 according to the optimization O1.
• the data packet P' optimized in this way (here and in the following this applies analogously to the optimized data stream) is then sent via the transmission network N-X1a to optimizer X2-1.
• optimizer X2-1 now carries out a further optimization O2, which results in the data packet P''. For this optimization O2, the same and/or other criteria can be used as for O1.
• the data packet P'' is transmitted via the transmission network N-X2 and received by the optimizer X2-2, where P' or a data packet corresponding substantially to P' is reconstructed.
• this reconstructed data packet is then forwarded via the transmission network N-X1b to optimizer X1-2.
• the data packet reconstructed by X1-2 is then forwarded to endpoint B via the network N-B.
• the optimization O1 can be completely retained and/or even remain completely/partially uninterpreted by the components X2-1 and/or X2-2.
• X2-1 could also wholly or partially undo the optimization O1, for example in order to optimize data already optimized for O1 more efficiently in combination with the methods used for O2, or more efficiently for the following network sections/networks, and/or to share some or all of the optimization aspects in whole or in part between both methods.
• although only a simple nesting of optimizations is shown in Fig. 1 e), the number of nested optimizations is basically not limited. Also, sequential optimizations as described for Fig. 1 d) may be present at any nesting depth.
• optimizers of different nestings can be implemented in one system, for example X1-1 and X2-1 may coincide (in which case the transmission network N-X1a is practically eliminated). Furthermore, single or multiple optimizers may be implemented directly in endpoints (analogous to Fig. 1 b) and Fig. 1 c)). As described for Fig. 1 d), the composition of the optimizers may change temporally and/or spatially; several optimizations can also be operated in parallel.
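For illustration only, the following minimal sketch models the nesting just described: O1 is applied by X1-1 and removed by X1-2, while O2 is applied by X2-1 and removed by X2-2 on the already O1-optimized packet P'. The concrete transforms (zlib compression for O1, a length-prefixed framing for O2) are arbitrary placeholder assumptions and not the optimization methods of this disclosure.

```python
import zlib

# Sketch of the nested optimization of Fig. 1 e): O1 is applied by X1-1 and
# removed by X1-2, O2 is applied by X2-1 and removed by X2-2 on the already
# O1-optimized packet. The transforms themselves are placeholders.

def o1_apply(packet: bytes) -> bytes:        # X1-1: P -> P'
    return zlib.compress(packet)

def o1_remove(packet: bytes) -> bytes:       # X1-2: P' -> P (reconstruction)
    return zlib.decompress(packet)

def o2_apply(packet: bytes) -> bytes:        # X2-1: P' -> P'' (works on optimized data)
    return len(packet).to_bytes(2, "big") + packet

def o2_remove(packet: bytes) -> bytes:       # X2-2: P'' -> P'
    length = int.from_bytes(packet[:2], "big")
    return packet[2: 2 + length]

if __name__ == "__main__":
    p = b"payload sent by endpoint A"
    p1 = o1_apply(p)             # after X1-1, sent over N-X1a
    p2 = o2_apply(p1)            # after X2-1, sent over N-X2
    p1_back = o2_remove(p2)      # X2-2 reconstructs P' (or an equivalent packet)
    p_back = o1_remove(p1_back)  # X1-2 reconstructs P for endpoint B
    assert p_back == p
```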
• Fig. 2 shows that an optimization can optionally also be realized with only a one-sided optimizer component (in Fig. 2, optimizer X1).
• the optimizer X1 can be integrated into the endpoint (Fig. 2 g)) or can be arranged detached from the endpoint in the data path of the packets in the network (Fig. 2 f)). In these cases it is also possible, for example, to use multiple optimizations with multiple optimizers employed only on one side and/or in combination with optimizer component pairs as described, for example, for Fig. 1.
• Fig. 3 refers specifically to the aspects of the invention described below in section I.a).
• the optimization is performed by an optimizer X-1, which according to Fig. 3 a) can be arranged as a separate component in the data path between the communicating endpoints A, B.
• the optimizer X-1 can also be integrated into the endpoint A.
• the optimizer X-1 can also be divided into a plurality of logical or, as shown in Fig. 3 c), a plurality of physically separate components.
• for example, into a component X-1a in the data path between A and B and a component X-1b which provides, for example, NDS functions.
• the arrangements in Figs. 1, 2 and 3 are shown by way of example only and are to be understood as illustrative. Any combinations of these arrangements and arrangements derived therefrom are possible. Although the following speaks, for example, of arrangements according to Fig. 1, Fig. 2 and/or Fig. 3, the arrangements combined and derived therefrom are also meant.
• for the sake of clarity, only two communicating entities are shown in Figs. 1, 2 and 3. Equally, however, more than two instances may occur as sources and/or sinks of payload data and/or as transmitters and/or receivers of control information.
  • the applications can transmit user data unidirectionally and / or bidirectionally in all arrangements.
  • individual, some and / or all networks can be physically designed for unidirectional and / or bidirectional transmission.
• a transmission network or network can consist of several/many transmission sections with interconnecting systems (bridges, switches, routers, gateways, proxies etc.), can consist of only individual transmission sections (for example, a "through-line" or a direct physical connection by electrical cable, fiber optics, acoustic coupling, electromagnetic waves, etc.), but may equally consist of several interconnected subnets (which, for example, use the Internet Protocol).
• the networks shown may also consist of local connections or a local network (in particular, this will quite often be the case with the networks N-A and N-B, but may equally apply to the other networks).
• the above-mentioned networks may be arbitrary networks, for example IP networks, delay-tolerant networks (DTNs), local area networks such as Ethernet and WLAN, PDH, SDH and/or ATM networks, telephone networks, radio networks (such as mobile telephony, WiMax, 3G, UMTS, HS(D)PA, DVB-T, LTE, UWB, OFDM, 802.11b/a/g/n/s), satellite links (such as DVB-S/S2, DVB-RCS, proprietary satellite links, radio networks in space), wired (broadcast) networks (such as cable networks, DSL, fiber-to-the-home, ...), etc., as well as overlay networks such as peer-to-peer networks, and also combinations of arbitrary networks of different types.
  • the optimization features are not limited to use by protocols of a particular layer, application, and / or type of application, but may or may not be specific to them.
• Optimizations can work on individual layers or across layers. The optimization may depend on the characteristics of the surrounding networks or of the paths through the network and/or on the function and/or the presence of certain network elements: for example, an optimization function can work differently if the packets must pass certain other network elements such as routers, NATs and/or firewalls on the way. Different optimizations (and their optimizers) can coordinate with each other and/or work independently of each other.
  • two communication phases can be distinguished: a name service phase in which at least one name resolution is performed, and a data service phase during which the communication within a data service potentially takes place using at least parts of the information obtained by the name resolution.
  • the name and data service phases may be sequential, parallel or (partially) overlapping; for a data service phase, none, one or more name service phases may be performed, and a name service phase may include one or more name resolutions.
• in a name service phase, the use of a name service potentially requires the exchange of information (e.g., sending a request and receiving a response) for name resolution.
  • This information exchange takes place within the framework of a communication relationship between, for example, an end point A and the name service server. From the point of view of using the name service, the end point A and the name service server are then two endpoints of the communication relationship for queries to the name service. This communication relationship is also illustrated in FIG. 1: endpoint B then represents a name service server.
• name service server designates a component to which an endpoint (for example A or B) and/or thus also a name service server of other hierarchy levels can direct a name resolution request.
• answering a request may also be done by appropriately forwarding the request (for example, to one or more other name service servers), with a subsequent response then being delivered directly to the requesting endpoint, indirectly via the forwarding name service server, and/or indirectly via other logically or physically implemented components.
  • a request may also be included implicitly in a message (for example, one or more packets) of a data service.
  • a name service server (as described above) may generate a response as part of the data service. Examples include HTTP or SIP redirect messages that use error code 301 or 302 and are generated by a web server, proxy, or SIP redirect server or user agent. But other protocols provide appropriate messages.
• a name service server can also be consulted, for example, by an intermediate system (for example, a "proxy" or "peer") of a data service, which then forwards the message of the data service according to the address resolution by the name service server to the intended endpoint B, for example to another proxy and/or one or more other endpoints.
  • examples of data services including name resolutions include Session Initiation Protocol (SIP), Hypertext Transfer Protocol (HTTP), peer-to-peer, and other overlay networks.
  • Name resolution requests can be made by endpoints and / or name service servers.
  • An optimizer arrangement with at least one optimizer can realize an optimization for a name service. This can be particularly advantageous if the communication of an end point A in a name service phase with a name service server (NDS) B takes place via a difficult communication path (SKP).
• the presence of an SKP can lead to queries to (and possibly replies from) an NDS being delayed by the transmission delay. Also, requests and/or replies may be lost on the SKP, requiring them to be retransmitted after a timeout, which in turn results in a delay. A limited bandwidth on the SKP may also cause requests and/or replies to be delayed and/or lost.
  • a temporary transmission interruption for example, can result in an NDS being temporarily unavailable, so that name resolution can not be performed temporarily, which in turn can lead to a delay and / or an error situation.
• An optimizer X of an optimizer arrangement may be configured for an endpoint A as a name service server (first level) - statically or dynamically - or be established as a further name service server (second to n-th level) which is used for name resolution by the name service servers of the first (to k-th) level.
• An optimizer X can also be arranged in the network topology, or the network can be configured, such that the optimizer is in the path of the requests of an endpoint A or of a name service server on the way to a (further) name service server, so that the requests pass through the optimizer X or are received by it.
  • endpoint A may also be a name service server that processes and / or forwards requests from other name service servers and / or endpoints.
• the present invention reduces the influence of a difficult communication path SKP on the name resolution by mechanisms which, for clarity, are described below in I.a), I.b) and I.c).
• the mechanisms described in the individual parts of I., their various forms and arrangements, can be combined with each other and with the optimizations under II. and III.
• name resolution over a potentially difficult communication path may ultimately result in a delay in the further execution of the data service because an endpoint A may have to wait for the result of the name resolution before taking further steps (for example, establishing a communication relationship to another endpoint).
  • the name resolution performed by endpoint A may, as mentioned above, include any interaction with one or more name service servers and / or optimizers.
• endpoint A may, for example, want to translate a DNS name into an IP address (for example, to reach a server), but also to resolve an IP address into a DNS name ("reverse lookup") (for example, if a server attempts to determine the name of a communication partner contacting it).
• it may be provided that an optimizer X which receives the request for name resolution from the endpoint A determines a pseudo-address and sends it promptly in response to A.
• this pseudo-address does not need to match the address sought by the name service request; however, optimizer X stores at least portions of the request and/or at least portions of the pseudo-address and/or at least portions of the generated response and/or at least portions of the mapping between request and answer.
• the pseudo-address is chosen in the context of the optimizer arrangement such that data packets sent to this address are received by a component of the optimizer X or of another optimizer of the optimizer arrangement (for example by X-1 or X-1a corresponding to Fig. 3).
• the selection of the pseudo-address may be subject to certain rules (for example, it may have to be a locally valid, regionally valid or globally valid address, or be valid for a part of the communication path, in particular between X-1 or X-1a and A according to Fig. 3), may have to follow a specific address format, or it can be chosen freely.
  • the address can be selected at random and / or generated cryptographically.
  • a permanent or temporary validity period can be determined for a pseudo-address.
• the period of validity may, in particular, depend on the time that is expected to elapse between a request and a reply during an address resolution by means of a name service.
  • the optimizer X in turn makes a request to another name service server to determine the address required to answer A's request.
• This address - also referred to below as the "other address" - may be the one sought by endpoint A (e.g. the address of the further end system) or the address of optimizer X or of another optimizer of one or the same optimizer arrangement.
• the optimizer X notifies a (logical or physical) component for address translation (AU) of the pseudo-address, the "other address", and/or at least parts of the content of the request and/or of the answer to the name resolution.
• AU is not shown separately here and is integrated in the optimizer X-1.
• if the optimizer consists of several components (X-1a and X-1b), AU can correspond to the optimizer component X-1a, or functions of AU can be integrated in X-1a; AU can likewise correspond, for example, to the optimizer component X-1b, or functions of AU can be integrated into X-1b. Functions of AU can also be included in both X-1a and X-1b.
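The following sketch illustrates, under assumptions chosen only for the example (the address pool 198.18.0.0/15 and a fixed validity period), how an optimizer X could promptly answer a name resolution request with a pseudo-address and record the mapping for the address translation component AU; it is not the implementation of this disclosure.

```python
import ipaddress
import time

# Sketch: an optimizer hands out a pseudo-address immediately and records the
# request; the "other address" is filled in once the real name resolution
# completes. Pool and validity period are example assumptions.

class PseudoAddressAllocator:
    def __init__(self, pool="198.18.0.0/15", validity_s=30.0):
        self._free = (str(ip) for ip in ipaddress.ip_network(pool).hosts())
        self.validity_s = validity_s
        self.by_pseudo = {}   # pseudo-address -> record

    def answer_request(self, queried_name: str) -> str:
        """Return a pseudo-address promptly, before the real resolution finishes."""
        pseudo = next(self._free)
        self.by_pseudo[pseudo] = {
            "name": queried_name,
            "other_address": None,                       # learned later from the NDS reply
            "expires": time.time() + self.validity_s,
        }
        return pseudo

    def resolution_completed(self, queried_name: str, other_address: str) -> None:
        """Called when the optimizer's own request to the NDS is finally answered."""
        for record in self.by_pseudo.values():
            if record["name"] == queried_name and record["other_address"] is None:
                record["other_address"] = other_address

alloc = PseudoAddressAllocator()
pa = alloc.answer_request("AE-N")                 # sent promptly to endpoint A
alloc.resolution_completed("AE-N", "192.0.2.7")   # later: "other address" learned
```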
• the endpoint A now sends packets to the pseudo-address; these are received by at least one component of an optimizer X of the optimizer arrangement (for example by X-1 or X-1a according to Fig. 3).
• for an optimizer it may be advantageous to carry out further functions and optimizations.
• the packets forwarded by X may also deviate from the packets received from A in other header fields, and, for example, packet boundaries and/or user data (boundaries) of the packet stream may be modified.
• an example would be an optimizer X operating wholly or partly according to a connection-splitting method, which terminates the connections received from A (for example on one or more protocol layers such as TCP and/or HTTP) in whole or in part and, from the point of view of the protocol concerned, uses new/modified communication relationships for the communication with B or with intermediary instances.
• the translation of the pseudo-address into the "other address" (for packets sent from the endpoint A) and vice versa (for packets directed to the endpoint A) can be performed for each packet to be forwarded by the optimizer or by a component of the optimizer.
  • the translation can be done for all packets from endpoint A and / or all local endpoints that have a pseudo-address given by an optimizer as the destination address.
• the conversion can take place for all packets to the endpoint A and/or to all local endpoints which have as source address an address for which a pseudo-address has been assigned.
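A minimal sketch of this translation by AU is given below; packets are modelled as plain dictionaries, whereas a real implementation would rewrite IP/transport headers and adjust checksums, which is omitted here.

```python
# Sketch of the address translation component AU: packets from endpoint A whose
# destination is a known pseudo-address are rewritten to the "other address";
# packets towards A get the reverse rewrite of their source address.

class AddressTranslator:
    def __init__(self):
        self.pseudo_to_other = {}   # filled by the optimizer that assigned the pseudo-address

    def learn(self, pseudo: str, other: str) -> None:
        self.pseudo_to_other[pseudo] = other

    def from_endpoint_a(self, pkt: dict) -> dict:
        other = self.pseudo_to_other.get(pkt["dst"])
        if other is not None:                      # translate only known pseudo-addresses
            pkt = dict(pkt, dst=other)
        return pkt

    def towards_endpoint_a(self, pkt: dict) -> dict:
        for pseudo, other in self.pseudo_to_other.items():
            if pkt["src"] == other:                # reverse translation of the source address
                return dict(pkt, src=pseudo)
        return pkt

au = AddressTranslator()
au.learn("198.18.0.1", "192.0.2.7")
out = au.from_endpoint_a({"src": "10.0.0.2", "dst": "198.18.0.1", "payload": b"..."})
back = au.towards_endpoint_a({"src": "192.0.2.7", "dst": "10.0.0.2", "payload": b"..."})
```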
• the conversion can be performed locally in an optimizer (such as optimizer X-1 in Fig. 3 a)).
• the conversion can be carried out in another optimizer - optionally in connection with a data service optimization (for example in optimizer X-2 in Fig. 1 a)).
• the conversion in one and/or both directions may depend on whether the communication relationship between the endpoint A and another endpoint has been initiated by the endpoint A, i.e. whether the first packet of this communication relationship was sent by endpoint A or by the other endpoint.
  • the implementation in one and / or both directions may depend on which transport protocol (for example UDP, TCP, SCTP, DCCP) is used.
• the conversion of addresses may, for example, be restricted to TCP, while UDP packets are not converted.
  • the conversion of addresses can be limited to those "other addresses" for which a name resolution by the end point A was previously requested.
  • the conversion of addresses can be limited to those "other addresses" for which a name resolution by the end point A was previously requested and this has received a pseudo-address in the response from the optimizer.
  • the conversion of addresses may be restricted to packets from / to endpoints A,... For which an address resolution was previously performed (and which has been given a pseudo-address by the optimizer).
  • the implementation of addresses may be limited to the duration of one or more specific communication relationships;
• an optimizer can determine the duration of a communication relationship, for example, by observing start and/or end packets (such as TCP SYN and FIN packets) and/or by timeouts (such as when no more packets have been transmitted for a statically and/or dynamically predetermined period).
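The following sketch, with an assumed flow key and an assumed idle timeout, illustrates how an optimizer could bound the address translation to the duration of a communication relationship by observing TCP SYN/FIN/RST packets and expiring idle flows.

```python
import time

# Sketch: track communication relationships via TCP flags plus an idle timeout,
# so that the associated address translations can be released when a flow ends.

IDLE_TIMEOUT_S = 120.0   # example value, statically or dynamically configurable

class FlowTracker:
    def __init__(self):
        self.flows = {}   # (src, dst, sport, dport) -> timestamp of last packet

    def observe(self, key, tcp_flags: str) -> None:
        now = time.time()
        if "SYN" in tcp_flags:
            self.flows[key] = now              # start of the communication relationship
        elif "FIN" in tcp_flags or "RST" in tcp_flags:
            self.flows.pop(key, None)          # explicit end observed
        elif key in self.flows:
            self.flows[key] = now              # refresh on any other packet

    def expire_idle(self) -> list:
        now = time.time()
        dead = [k for k, seen in self.flows.items() if now - seen > IDLE_TIMEOUT_S]
        for k in dead:
            del self.flows[k]                  # translations for these flows may be released
        return dead
```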
• the conversion of addresses can also affect all packets, even if no name resolution was previously requested by the endpoint A.
• the conversion can relate to addresses in layer-3 headers (for example IP addresses) and/or in the transport layer (layer 4, for example TCP or UDP port numbers).
  • Statically configured and / or dynamically determined names and / or name ranges and / or addresses and / or address ranges and / or application protocols may be explicitly included in the implementation and / or be explicitly excluded from this.
• it may be advantageous for two or more optimizers to exchange among each other information about the pseudo-addresses used and/or the mappings to other addresses and/or information for the purposes of the conversion and/or information about the currently and/or recently active endpoints A and/or the currently and/or recently active other endpoints (e.g., name, address, other information). It can be provided that one or more optimizers perform the corresponding address translations. It can be advantageous for an optimizer which assigns a pseudo-address also to carry out the address translation locally or to have it carried out (by a component AU). It may also be advantageous for an optimizer that has assigned a pseudo-address to have the address translation performed elsewhere in the network by another optimizer (or its locally assigned component AU). It may also be advantageous for an address to be translated several times. Two or more optimizers may advantageously coordinate with each other on the usable ranges for pseudo-addresses and/or the already assigned pseudo-addresses.
• the optimizer X buffers the packets sent to the pseudo-address, or data derived therefrom, or even payload data from the endpoint A, and/or simulates towards endpoint A the establishment or continuation of a communication relationship with the pseudo-address, for example as described under II., as long as no information about the "other address" belonging to a pseudo-address exists.
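A possible realization of this buffering is sketched below; the packet representation and the buffer limit are assumptions for the example.

```python
from collections import defaultdict, deque

# Sketch: packets addressed to a pseudo-address are queued as long as no
# "other address" is known for it; once the mapping arrives, the queued
# packets are translated and flushed in their original order.

class PendingForwarder:
    def __init__(self, max_buffered=256):
        self.mapping = {}                       # pseudo-address -> other address
        self.buffers = defaultdict(deque)       # pseudo-address -> queued packets
        self.max_buffered = max_buffered

    def on_packet(self, pkt: dict, send) -> None:
        other = self.mapping.get(pkt["dst"])
        if other is None:                       # mapping not yet known: buffer
            queue = self.buffers[pkt["dst"]]
            if len(queue) < self.max_buffered:
                queue.append(pkt)
            return
        send(dict(pkt, dst=other))

    def on_mapping(self, pseudo: str, other: str, send) -> None:
        self.mapping[pseudo] = other
        while self.buffers[pseudo]:             # flush buffered packets
            pkt = self.buffers[pseudo].popleft()
            send(dict(pkt, dst=other))
```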
• the following example illustrates a possible network topology (corresponding to Fig. 3 c)) and shows a possible sequence of the optimization described here using pseudo-addresses in steps 1 to 9.
• an endpoint A can direct a name resolution request (for the name "AE-N", the name of the other endpoint) to the optimizer X-1b, which is configured, for example, as a name service server for the endpoint A (step 1).
• the optimizer X-1b in this example has no stored or configured information about "AE-N" and therefore forwards the name service request (steps 2a, 2b, 2c and 2d). Due to a transmission interruption, the first forwarded requests are lost.
• the optimizer X-1b delays the response as long as possible (as described in I.c)), but in step 3 - before a timeout of the name service request occurs at endpoint A - generates a response with a pseudo-address PA, which the optimizer X-1b has selected locally (it could also coordinate with optimizer X-1a).
• optimizer X-1b also informs optimizer X-1a about the newly assigned pseudo-address, so that it knows that packets (from or to endpoint A) can be expected for this address in the future (step 4); however, X-1b cannot yet communicate the other address at this time. Because the name resolution has not yet completed, optimizer X-1b continues to repeat its requests to the name service server.
• with step 2d, a request finally reaches the name service server, which answers it with the other address "AE-A" (step 7).
• in the meantime, endpoint A has evaluated the response to its name service request and initiated a communication relationship with the other endpoint, for which it uses the pseudo-address PA as the destination ("dst") and its own address "AA" as the sender ("src") (steps 5a and 5b show the sending of two packets for this purpose).
• the optimizer X-1a receives the packets 5a, 5b, ... and buffers them (or, in the context of a data service optimization, accepts the incoming communication relationship for the time being instead of the other endpoint), since it has no other address to which it could forward the packets (step 6).
• the optimizer X-1b receives the answer to its name resolution request (step 7), namely the address "AE-A" of the other endpoint, and informs the optimizer X-1a about the now complete mapping ("PA" <-> "AE-A").
• the optimizer X-1a now knows the mapping and sends the packets destined for the pseudo-address "PA", after appropriate translation in the packet header(s) and/or content, to the other endpoint using its correct address "AE-A".
• further packets arriving from the endpoint A and addressed to "PA" are forwarded immediately after the address has been translated (steps 5m and 5n or 5m' and 5n'). Packets from the other endpoint are also forwarded by the optimizer X-1a after the reverse address translation: here the source address of the other endpoint ("AE-A") is replaced by the pseudo-address ("PA").
  • An endpoint A will often ask more than one name resolution request to a name service server. This can happen, for example, because the endpoint establishes communication relationships (as part of data services) with several other endpoints and must determine their addresses. It may also happen that the answer to a first request for name resolution is insufficient because, for example, it only contains a reference to another name service server to be contacted, to which the original or a modified form of the original request is to be placed.
  • An optimization according to the invention in an optimizer arrangement may be advantageous here in order to reduce the delay and thereby increase the quality of service.
  • an optimizer can observe the name resolution requests made by an endpoint A, and infer future requests from past requests.
• an optimizer may speculate from a request for name resolution of "tagesschau.de" that a request for "www.tagesschau.de" will be made in the future. An optimizer may also infer from a request for one kind of name resolution a future request of another kind: for example, a request for an IPv4 address may be followed by a request for an IPv6 address (or vice versa), or the name resolution request for an alias may be followed by a follow-up request for the address.
• Name services can provide a variety of different types of name entries and corresponding name resolution requests, such as DNS, which not only knows A records for IPv4 and AAAA records for IPv6, but also SRV, NAPTR, MX, alias, CNAME, TXT, PTR and other records that can be related to one another in name resolution.
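The following sketch shows a simple, purely illustrative prediction rule set derived from the examples above (www. prefix, A/AAAA pairing, alias followed by an address query); a real optimizer could use statically configured and/or dynamically learned rules instead.

```python
# Sketch: derive expected future name service requests from one observed request,
# so that they can be resolved in advance. The rules below are examples only.

def predict_followups(qname: str, qtype: str):
    expected = set()
    if qtype == "A":
        expected.add((qname, "AAAA"))               # IPv4 request may be followed by IPv6
    if qtype == "AAAA":
        expected.add((qname, "A"))                  # ...and vice versa
    if qtype in ("A", "AAAA") and not qname.startswith("www."):
        expected.add(("www." + qname, qtype))       # "tagesschau.de" -> "www.tagesschau.de"
    if qtype == "CNAME":
        expected.add((qname, "A"))                  # alias resolution usually needs the address
    return expected

print(predict_followups("tagesschau.de", "A"))
```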
• an optimizer may observe the exchange of information of the endpoint A with other endpoints in the context of communication relationships, for example of a data service, optionally interpreting the protocols used (e.g. at the transport, session and/or application layer) and drawing conclusions therefrom about future name service requests. This can be done, for example, by analyzing messages - for example, requests and/or replies and/or notifications - of the application protocol by one of the at least one optimizer. The analysis may include header fields and/or payload data of one or more protocol layers. For example, an optimizer can analyze responses to requests of the HTTP protocol and, for example, search the contents of a "200 OK" response to a "GET" request.
• the content may be, for example, an HTML page in which, for example, HTML elements (such as IMG) refer to other resources by means of a URI, such a URI potentially containing a future name to be resolved, for example if the further resource is held on a different web server than the HTML page retrieved and included in the HTTP response.
  • an HTML page can be parsed for references to embedded objects, linked pages, style sheets, and / or frames; the analysis can be continued recursively to the contents of embedded, linked or referenced objects or resources.
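As a sketch of such an analysis, the following example extracts host names from the HTML body of a "200 OK" response using only standard library parsing; the selection of tags/attributes is an assumption for the example, and a real optimizer could descend recursively into the referenced resources.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Sketch: collect host names referenced by URIs in an HTML body; these names are
# candidates for proactive name resolution. Relative URIs carry no host name and
# therefore refer to the server already being contacted.

class HostExtractor(HTMLParser):
    URL_ATTRS = {"src", "href", "data"}

    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in self.URL_ATTRS and value:
                host = urlparse(value).hostname
                if host:
                    self.hosts.add(host)

html_body = ('<html><img src="http://images.example.org/logo.png">'
             '<link rel="stylesheet" href="//cdn.example.net/style.css"></html>')
extractor = HostExtractor()
extractor.feed(html_body)
print(extractor.hosts)   # names expected to appear in future name service requests
```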
  • Variants of such a procedure may be part of a data service optimization known as HTTP prefetching.
  • HTTP prefetching - as described for example in EP 1559038 - can be far more complex and perform more extensive functions than outlined here.
• the above principle for combining this data service optimization with name service optimization remains the same: from the application protocol requests and/or responses and/or other messages, the name service optimizer infers expected future name service requests and determines a response before the corresponding request is made to it.
• a combination with HTTP prefetching allows the analysis for names to be resolved in the future to be extended recursively to the embedded, linked and/or referenced objects and/or resources.
• the analysis is not limited to HTML pages, but may include any documents or general files requested via HTTP or other protocols, such as style sheets, frames, XML documents, SDP messages (for example according to RFC 2327 or 4566), SOAP messages, MIME messages/objects, Flash, video, audio, still images/files and/or certificates, keys and/or other security objects, from which DNS names or names of other name services can be extracted.
  • Such monitoring is not limited to HTTP;
• Other data services and application protocols, such as SOAP, RTSP, SIP, XMPP or Flash, may also be evaluated by the optimizer.
• An optimizer can parse/interpret protocols of any data service, and can collaborate with or act as a data service optimizer.
  • Such an optimizer may be provided on one or both sides of a difficult communication path. It can also be arranged completely or partially in the data path or be designed as an intermediate system or proxy of the data service in order to access the information exchanged in the context of the data service.
• name service requests may potentially be expected in the future for one, some or all of the names found in the analyzed messages of the application protocol. The same applies to potential future name resolution requests that are suspected from analyzing messages of the name service protocol. Therefore, it may be advantageous for the optimizer to determine the answers to the expected requests in advance.
• endpoint A can delegate the name resolution to X-1 and X-1 to X-2, for example; it is also possible that only one optimizer component (for example X-1), multiple local optimizer components (for example X-1a and X-1b) and/or more than two optimizer components are involved.
• an optimizer that finds an unknown IP address can delay the establishment of the communication relationship until a reverse lookup and/or other advantageous and/or necessary name resolutions - which can be derived from the address, from name resolutions (requests and/or answers) and/or, e.g., from predicted names and/or addresses and/or other information - have been performed successfully (via X-1 or X-2) and the result is available at X-1, so that subsequent name resolution requests (reverse lookups) for this address and/or name and/or other information by the endpoint A or by other endpoints or name service servers can be answered directly by X-1.
• a corresponding procedure is also advantageous in reverse ("in the other direction"), where an optimizer delays the establishment of the communication link until the described name resolutions have been carried out and the results are present in the optimizer component X-2 on the other side of the transmission network N.
  • the optimizer or optimizer arrangement can optimize the establishment of communication relationships in both directions.
• the response may be determined according to I.a) and/or by making a request to another optimizer and/or name service server and/or by consulting a local database and/or cache and/or in combination with at least one other name service optimizer.
• the optimizer may place this expected request - directly or indirectly - with a name service server in advance and thus possibly receive the answer to the expected request before the follow-up request arrives.
• an optimizer can then immediately answer the follow-up request using the previously received answer, so that the delay for the endpoint A between placing the request and getting the answer is reduced.
• the optimizer can remember which expected requests it has already made. If a follow-up request coincides with an expected request already made by the optimizer but not yet answered, then the optimizer can suppress the forwarding of the follow-up request and wait for the answer to the expected request it has made, by means of which it can then answer the follow-up request.
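A minimal sketch of this behaviour is given below: expected requests are issued at most once, follow-up requests that match a pending expected request wait for its answer instead of being forwarded again, and already received answers are returned immediately. The message format and the upstream resolver are simplified assumptions.

```python
import asyncio

# Sketch: speculative ("expected") requests and follow-up requests share one
# pending future per (name, qtype), so duplicate forwarding is suppressed.

class PrefetchingResolver:
    def __init__(self, upstream_resolve):
        self.upstream_resolve = upstream_resolve   # coroutine: (name, qtype) -> answer
        self.pending = {}                          # (name, qtype) -> asyncio.Future

    async def prefetch(self, name, qtype):
        key = (name, qtype)
        if key not in self.pending:                # issue the expected request only once
            self.pending[key] = asyncio.ensure_future(self.upstream_resolve(name, qtype))
        return await self.pending[key]

    async def handle_followup(self, name, qtype):
        key = (name, qtype)
        if key in self.pending:                    # expected request already in flight:
            return await self.pending[key]         # wait for its answer, do not forward again
        return await self.prefetch(name, qtype)    # otherwise resolve now

async def demo():
    async def fake_upstream(name, qtype):
        await asyncio.sleep(0.1)                   # stands in for the difficult-path RTT
        return f"{name}/{qtype} -> 192.0.2.10"

    resolver = PrefetchingResolver(fake_upstream)
    task = asyncio.ensure_future(resolver.prefetch("www.tagesschau.de", "A"))  # speculative
    await asyncio.sleep(0)                         # let the prefetch start
    print(await resolver.handle_followup("www.tagesschau.de", "A"))            # served from prefetch
    await task

asyncio.run(demo())
```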
  • an optimizer arrangement can consist of two (or more) optimizers, for example XI and X-2. These optimizers can enclose a difficult communication path.
• the optimizer X-1 can forward the requests to the optimizer X-2, which then directs them to a name service server.
• the optimizers X-1 and/or X-2 may - optionally individually and/or jointly - make predictions about A's expected requests, and optimizer X-2 may submit such a request to the name service server in advance.
• optimizer X-2 may then proactively send the responses received to the optimizer X-1 - or simultaneously or successively to a group of optimizers - which may store them in order to answer the corresponding follow-up requests of the endpoint A.
  • an optimizer arrangement can also consist of only one optimizer (which can be distributed over several components).
  • the optimizer as shown in Fig. 3, is preferably on the same side of a potentially difficult communication path or network N as an end point A that wishes to perform name resolution.
  • an optimizer can be advantageous even if it is on the other side of the difficult communication path or a transmission network N from A's point of view.
• the optimizers may exchange query and/or response and/or other messages of a name service protocol, where the transmission may be from one optimizer to multiple optimizers, from multiple optimizers to one optimizer, and/or from multiple optimizers to multiple optimizers. If the optimizer (for example, X-1 in Fig. 3 a)) is operating alone, it can perform the described procedures and make the necessary decisions locally without coordinating with another optimizer.
  • An optimizer may also be provided only on the other side of the difficult communication path.
• When answering requests for name resolution, an optimizer can also combine several answers into one and/or supplement answers with further (locally) generated and/or known and/or separately received replies.
  • An optimizer can cache responses to name resolution requests.
  • the storage may exceed the time necessary to answer a follow-up request.
  • the storage may also exceed the validity period (lifetime, time-to-live, TTL) displayed in the response. If a plurality of optimizers are provided in an optimizer arrangement and / or a plurality of optimizer arrangements are provided, the individual optimizers can exchange information with one another about the optimizations performed or in support of the optimizations to be performed. In particular, an optimizer may submit responses to name resolution requests and / or (modified) validity information to other optimizers.
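The following sketch illustrates a cache whose retention may exceed the TTL indicated in the response; the maximum extension and the "stale" flag are assumptions for the example, and propagation of (modified) validity information to other optimizers is not shown.

```python
import time

# Sketch: cached answers may be kept and used past their indicated TTL, up to a
# configured maximum extension; stale entries are marked so they can be refreshed.

class ExtendedTTLCache:
    def __init__(self, max_extension_s=3600.0):
        self.max_extension_s = max_extension_s
        self.entries = {}   # (name, qtype) -> (answer, expires_at)

    def store(self, name, qtype, answer, ttl_s):
        self.entries[(name, qtype)] = (answer, time.time() + ttl_s)

    def lookup(self, name, qtype):
        item = self.entries.get((name, qtype))
        if item is None:
            return None
        answer, expires_at = item
        age_past_ttl = time.time() - expires_at
        if age_past_ttl <= 0:
            return answer, False                 # fresh answer
        if age_past_ttl <= self.max_extension_s:
            return answer, True                  # stale but still usable; refresh proactively
        del self.entries[(name, qtype)]          # too old: drop the entry
        return None

cache = ExtendedTTLCache()
cache.store("www.tagesschau.de", "A", "192.0.2.10", ttl_s=1.0)
print(cache.lookup("www.tagesschau.de", "A"))
```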
• it can also be provided that an optimizer itself makes requests to a name service server in order to cause it to forward the corresponding request, and stores the answer if necessary.
• the optimizer or another optimizer may be configured as a name service server of the next level - or, indirectly, of another level - for the regular name service server.
  • the regular name service server can also use other regular name service servers for name resolution of the request made by the optimizer.
  • a name service optimizer may also initiate and / or support data service optimization. For example, HTTP prefetching may indirectly benefit from pre-available information about future name resolutions because potentially one or more name service phases may be shortened. Also, a name service optimizer may analyze name service requests and deduce from the name service requests the data services potentially used in the following data phase and / or the presumably used application protocols. For example, an optimizer can parse and / or interpret alias, SRV, NAPTR, A, AAAA, TXT, or MX requests for clues.
  • SRV or NAPTR records can provide direct indications of the data service, application and / or transport protocol, and other information.
• from names that designate a service or a service provider, conclusions may be drawn about the potentially used application.
  • a statically configured and / or dynamically created or updated table and / or one or more rules and / or a database and / or a network management can be used to perform mapping of name resolution requests to suspected application protocols.
  • the name service optimizer may check, for example, whether a data service optimizer supporting the expected transport and / or application protocol is available.
  • the name service optimizer may initiate data service optimization for the expected transport and / or application protocol to the endpoint specified by the name (for example, server).
• a data service optimizer that supports connection splitting can proactively establish a communication relationship (e.g., a TCP connection) to the specified endpoint, such that the later setup time for the communication relationship is reduced.
• a data service optimizer that supports HTTP and HTTP prefetching may already initiate HTTP prefetching towards a suspected web server (the specified endpoint) in advance and proactively issue requests for the pages/objects/resources suspected and likely to be queried there, for example /index.htm, /index.html, /, /favicon.ico, /style.css and/or other suspected objects. The responses are then buffered by the data service optimizer and, according to the idea of HTTP prefetching, are ready when the web browser makes the corresponding requests, so that they can be answered immediately.
• the optimizer can use the determined URIs and names to infer further potential name resolutions and potentially requested resources/objects/pages and continue the optimization recursively.
  • optimization can be limited to particular transport and / or application protocols, specific namespaces (eg, domains, subdomains) may be explicitly included and / or exempted, the extent of optimization may depend on the system load and / or other parameters.
• if an optimizer X receives a name resolution request from an endpoint A, it may happen that - for example due to the difficult communication path and/or due to a transmission interruption and/or because another optimizer and/or a name service server is currently unreachable and/or cannot answer in time - a prompt answer to the request is not possible. As described in I.a), it may be useful to then assign pseudo-addresses. It may also be advantageous for the optimizer X not to respond immediately to the request of the endpoint A with an address, for example in order to avoid assigning a pseudo-address for as long as possible.
  • endpoint A is waiting for an answer, and depending on the specific implementation, for example, the resolver or name service server at endpoint A often repeats its requests at certain time intervals if they do not receive a response. They often return an error message if they have not received an answer after one / several timeouts / repetitions of the request. Such a timeout / number of repetitions of the request can be very different (for example, between a few seconds and a minute). Therefore, an optimizer X can not simply delay the answer arbitrarily.
  • an optimizer takes into account the peculiarities of the respective name service and / or its various implementations (for example under MS Windows, Linux, MacOS) in order to design the delay of the name resolution in such a way that no error message is generated.
• the design of the delay can also take into account assumed and/or measured RTTs to name service servers and/or other endpoints A as well as assumed and/or measured response times of name service servers.
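The sketch below illustrates how such a delay budget could be computed; the resolver timeout/retry profiles are purely illustrative assumptions, since real stub resolvers differ between operating systems and configurations and would have to be measured or configured.

```python
# Sketch: compute how long a response to endpoint A may be delayed without
# risking a timeout or error at A's resolver. Profile values are examples only.

RESOLVER_PROFILES = {
    "default": {"timeout_s": 5.0, "retries": 2},
    "short":   {"timeout_s": 1.0, "retries": 3},
}

def max_response_delay(profile: str, measured_rtt_to_a_s: float,
                       safety_margin_s: float = 0.25) -> float:
    p = RESOLVER_PROFILES.get(profile, RESOLVER_PROFILES["default"])
    # The answer must reach A before its current attempt times out; retries by A
    # would extend the budget further, but only the first attempt is used here.
    budget = p["timeout_s"] - measured_rtt_to_a_s - safety_margin_s
    return max(budget, 0.0)

print(max_response_delay("default", measured_rtt_to_a_s=0.6))   # roughly 4.15 s of leeway
```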
  • a delay - or wait state of the endpoint - can be achieved in several ways.
  • an optimizer may simply delay sending one or more responses, even if one or more answers already exist.
  • the problem described above persists that a timeout of the name resolution on end point A can lead to an error message.
• it may therefore be advantageous for an optimizer to choose a way of delaying the response(s) to name service requests of the endpoint A such that no timeout and/or error situation occurs.
• the optimizer can initially delay the answers it sends for as long as possible, for example according to the above statements. It may be advantageous for an optimizer to provide incomplete answers rather than complete answers, on the assumption that endpoint A then has to make further modified requests (to obtain the missing information), so that the total time from the first request to the final answer is extended. For example, with DNS, CNAME records can be used to achieve such an indirection if necessary.
• an optimizer may advantageously provide the endpoint A with a reference to another name service server which the endpoint is to contact for further processing of the request, rather than fully performing the resolution itself.
• This additional name service server can be the same optimizer (possibly another optimizer component or another optimizer address) or another optimizer. It can also be provided that the endpoint A uses several name service servers. These can be, for example, the name service servers configured for the endpoint A and/or communicated to the endpoint A, for example as part of the autoconfiguration. It may be advantageous to simulate the existence of multiple name service servers to endpoint A, thereby allowing, for example, a longer delay of the responses. One, several or all of these simulated multiple name service servers may be implemented by the same optimizer (e.g., it may have different addresses and/or be located in the network topology in the path from endpoint A to the specified name service servers) and/or by various optimizers.
• the use of multiple name service servers by the endpoint A can lead to an extension of the timeout (for example, roughly n times the timeout with n name service servers). It may also be useful, for example, to provide several different name services (with different name service protocols, such as DNS and NetBIOS Name Service), so that the endpoint tries the various name services, perhaps successively.
• one or more optimizers can advantageously take over the function of the name service servers of these various name services and - within the framework of the respective name service protocols - delay the name resolution, for example by cross-references, by incomplete and/or delayed responses, by references to other name services, etc.
• it may also be useful for an optimizer to generate, in one (or more) name service(s), a response that requests endpoint A to wait for the answer and/or to ask again later (approximately within or outside a specified time interval). For example, if messages are available in the name service protocol that signal an overload of the server, they can be used by the optimizer. Also, repetitions of requests by endpoint A can be avoided by, for example, acknowledging receipt of the request (indicating that a final response follows), if the name service protocol provides such messages.
• the Session Initiation Protocol, for example, supports responses such as "503" in combination with the "Retry-After" header (used for overload) and "100" to acknowledge receipt of a request. It may also happen that the endpoint A dynamically adjusts the timing of the repetitions of its requests and also the timeouts for its name service requests, for example based on the RTT observed in the past.
• in this case it may be advantageous for an optimizer to delay the answering of all name resolutions to the endpoint A in order to simulate a high RTT and thus obtain a large timeout / many retransmissions and hence more leeway for delaying responses. It may also be advantageous if the optimizer responds to the name resolution request in such a way that the endpoint A has to place the request again - for example, via another protocol.
• for example, a DNS request received via UDP can be answered with the TC (TrunCated) bit set in the response, so that endpoint A repeats the same request over TCP.
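For illustration, the following sketch builds such a truncated response directly from the raw bytes of a received UDP query (header plus question section); it is a simplified example, not a complete DNS server.

```python
# Sketch: answer a UDP DNS query with an empty response whose TC (truncated) bit
# is set, so that the querying endpoint repeats the request over TCP.

def truncated_response(query: bytes) -> bytes:
    header = bytearray(query[:12])
    header[2] |= 0x80            # QR = 1: this is a response
    header[2] |= 0x02            # TC = 1: truncated -> client should retry over TCP
    header[6:8] = b"\x00\x00"    # ANCOUNT = 0 (no answer records)
    header[8:10] = b"\x00\x00"   # NSCOUNT = 0
    header[10:12] = b"\x00\x00"  # ARCOUNT = 0
    return bytes(header) + query[12:]   # echo the question section unchanged

# Example query: ID 0x1234, RD set, one question for "example.com", type A, class IN
query = (bytes.fromhex("123401000001000000000000")
         + b"\x07example\x03com\x00" + bytes.fromhex("00010001"))
resp = truncated_response(query)
assert resp[2] & 0x02            # TC bit is set in the response
```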
• Endpoint A can also communicate with the optimizer (for example, in the context of data service optimization) via a protocol by which name resolution - potentially including the establishment of a communication relationship of the data service - is explicitly delegated to the optimizer (for example, if SOCKS according to RFC 1928 is used, or if a proxy is statically or dynamically configured in the endpoint, for example in the case of HTTP).
  • the name service and data service optimizer may act together as an optimizer.
• this may result in an endpoint A not only performing name resolution for the other endpoint and then establishing a communication relationship (directly) to the other endpoint, but endpoint A instead establishing an (indirect) communication relationship with the other endpoint via at least one optimizer.
• the endpoint A can first establish a communication relationship with the optimizer (known to it) and, for example, communicate via this communication relationship which other endpoint the endpoint A would like to reach.
  • this message may contain the name of the other endpoint so that prior name resolution by the endpoint A may not be required.
• the endpoint can use the communication relationship already established with the optimizer for the later communication with the other endpoint (for example in the case of HTTP) and/or establish a further communication relationship with the optimizer which is then to be used for the communication with the other endpoint (for example in the case of SOCKS).
• the optimizer may already accept the further communication relationship (say, accept a TCP connection) before completing the name resolution for the other endpoint's name and/or reaching the other endpoint. Alternatively, the optimizer may delay accepting the further communication connection until the name resolution has occurred and/or it has reached the other endpoint.
  • the optimizer can try the name resolution until an answer is given, and then continue to build up the communication relationship as part of the data service.
• the optimizer - for example X-1 in Fig. 1 a) - can delegate the name resolution to another optimizer - for example X-2 in Fig. 1 a).
• an intended communication relationship of the endpoint A can already be established from X-1 to X-2 and then be continued to the target system as soon as the name resolution (in this example by X-2) is completed. The time taken for the name resolution remains hidden from endpoint A; it only learns of the successfully completed connection setup as part of the data service optimization.
• the optimizer can proactively refresh mappings whose validity has expired at the next opportunity (such as when there is no transmission interruption), even if no request is currently pending.
• the optimizer can determine the validity of the buffered mapping and/or the validity information of the propagated response by static configuration, by dynamically determined values, by measuring the expected round-trip time (RTT) to a name service server and/or from the application protocol, and/or choose it freely.
• the optimizer can also choose the validity period so that the answer may only be used once. This can be done by explicitly specifying one-time use if the name service provides such a function, but can also be done by specifying a short validity period (about 1 second or less than a minute).
• the optimizer can also observe the exchange of information in the context of the data service (see also I.b)) and proactively perform the corresponding name resolutions for the names found by the analysis of the data packets.
• the optimizer can also observe the packets exchanged in the context of a data service and proactively perform reverse lookups, for example for the addresses found there.
• the optimizer X may consist of one component, as shown in Fig. 3 a) and 3 b).
• the optimizer X can also consist of several components which are arranged "locally" to one another, that is, for example, are not separated by a transmission network with a difficult communication path but are located on one "side" of such a network, as shown by way of example in Fig. 3 c), 3 d) and 3 e).
• the optimizer X may also consist of several components, as shown in Fig. 1, and these may be located on different sides of a transmission network (potentially with a difficult communication path). In an arrangement as exemplified in Fig. 1, the optimization can be distributed.
• the functions described here can be provided in one, two or even several optimizer components (for example X-1 and X-2).
• if an optimizer consists of several components, different components can in particular cooperate in such a way that they proactively share received information with the other components. For example, if optimizer X-2 detects a name or an address (for example, due to a request and/or a predicted, presumably future request as described in I.b)) and performs a corresponding address resolution, then optimizer X-2 may send the received response together with the request and/or additional information about the response (such as which name resolution was performed, why, and/or the validity of the response) to optimizer X-1 (and possibly other optimizers) before optimizer X-1 has made a request. The same applies vice versa. Also, for example, optimizer X-1 may delegate name resolution to optimizer X-2. The same applies to "locally" arranged components of an optimizer (such as X-1a and X-1b in Fig. 3 c) to e)).
  • Optimizer arrangements can perform optimizations for name and data services, especially in combination.
  • the establishment of a communication relationship from one endpoint A to another endpoint can occur as part of a data service optimization by X-I and X-2;
  • Such optimization may also be provided only by an optimizer X-I (such as in Fig. 3a) or by optimizers X-Ia and X-Ib (such as in Fig. 3c) and in any combination thereof.
  • more than two optimizers and / or optimizer components and / or optimizer arrangements may be involved.
• An endpoint A and / or another optimizer and / or a name service server may repeat name service requests (for example, because a timeout has occurred). It may be advantageous for an optimizer to be able to recognize repetitions of the same requests and to suppress and / or delay and / or answer and / or modify and / or replace them. Optionally, the optimizer may differentiate between different reasons for repeated requests. The procedures described below can also be combined as desired, and their combination can be statically specified and / or dynamically adapted:
  • an optimizer transmits information using one, several or all of the optimizations described under II, for example redundantly over the difficult communication path.
• for communication over the difficult communication path - for example with another optimizer and / or with a name service server - an optimizer may apply its own rules for timeouts and / or retransmissions that are statically configured and / or dynamically learned / modified and thus better matched to the communication over the transmission network and / or a difficult communication path.
• An optimizer identifies name service packets and detects repeated name service packets. Depending on the static and / or dynamic configuration, an optimizer may then forward or suppress these repeated data packets, unconditionally or depending on a condition; such conditions may be, for example, that the repeated packet is only suppressed if it is received within a statically configured and / or dynamically determined period of time, if it is received outside of such a period, if the packet is a request, if the packet is an answer and / or if it is neither a request nor an answer (see the sketch below).
• the replacement and / or delaying of packets may follow corresponding or other rules and may also be conditional or unconditional.
• the optimizer may perform the replacement of the packet with a semantically similar, comparable and / or equivalent packet, with or without a time delay.
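A minimal sketch of conditional suppression of repeated requests is shown below. The time window, the notion of a "packet key" and the decision logic are illustrative assumptions for this example; a real optimizer would derive them from its static and / or dynamic configuration.

```python
import time

class RepeatSuppressor:
    """Sketch: suppress repetitions of the same request inside a time window.

    packet_key is whatever identifies a request for this purpose
    (e.g. query name, type and transaction id) - an assumed abstraction.
    """

    def __init__(self, window=2.0):
        self.window = window          # statically configured; could be adapted dynamically
        self.last_seen = {}           # packet_key -> timestamp of last forwarded copy

    def should_forward(self, packet_key, is_request=True):
        now = time.time()
        last = self.last_seen.get(packet_key)
        if is_request and last is not None and now - last < self.window:
            return False              # repetition inside the window: suppress (or delay/replace)
        self.last_seen[packet_key] = now
        return True
```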
• improving the quality of service under packet loss has, in known approaches, among other things the disadvantage that it requires either a specifically changed or re-encoded content coding (which may be computationally intensive, requires the deployed optimizers to implement appropriate content coding techniques and / or even assumes that the optimizers have implemented the specific content encodings used by an application) and / or optimizations specific to and making use of a link layer protocol (e.g., a mobile link, a satellite link, etc.).
• it is advantageous to use optimizers that can be deployed largely independently of individual transmission sections / link-layer protocols and also largely independently of a selected content coding and / or the applications and / or terminals used. It is also advantageous for the implemented optimizers to be able to carry out their optimization, at least optionally, largely detached from extensive administrative support measures of the network, such as end-to-end bandwidth reservations.
• an optimization according to the invention can therefore be implemented, for example, by optimizers X-1 and X-2, which can be arranged largely freely in the data path between the end points A and B.
• the aim can be to include as much of the network path as possible in the optimization O1.
• the optimizations O1 and O2 can be designed identically, partly identically and / or specifically for the network areas N-X1 and N-X2 covered by them.
• the optimization according to the invention realized by optimizers can therefore, for example, insert redundancy as forward error correction (FEC) into the transmitted data. This can be done independently of the concrete applications used, regardless of the specific content coding and regardless of FEC methods that may already exist in the link layer protocols of individual transmission sections - of course, the inserted optimization could optionally also cooperate with one and / or more of these methods and / or benefit from the knowledge / recognition of them.
  • the VoIP call could be performed with an improved quality of service, even though a network L-X subject to packet loss lies in the data path between the end points A and B.
• for this purpose, the optimizers X-1 and X-2 are used.
• X-1 would receive VoIP data packets from endpoint A, add FEC redundancy (either within the packets or as additional packets / control information) and forward the resulting data packets over the network N-X; see the sketch below.
  • Optimizer X-2 receives the optimized data stream, can compensate for all or at least some of the packet losses that occur by evaluating the FEC redundancies, and forwards the complete, almost complete, partially or fully recovered data stream to endpoint B.
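As one hedged illustration of this FEC idea (X-1 inserts redundancy, X-2 evaluates it to compensate losses), the sketch below uses a single XOR parity packet per block of data packets, which can repair at most one loss per block. Block size, zero-padding and how the parity packet is carried are assumptions made only for this example.

```python
def xor_parity(packets):
    """Build one parity packet over a block of data packets (X-1 side).
    Shorter packets are implicitly zero-padded to the longest length."""
    length = max(len(p) for p in packets)
    parity = bytearray(length)
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(block_size, received, parity):
    """X-2 side: `received` maps position 0..block_size-1 -> payload for packets
    that arrived. If exactly one packet is missing it is rebuilt from the parity."""
    missing = [i for i in range(block_size) if i not in received]
    if len(missing) != 1:
        return received                       # nothing lost, or more than this FEC can repair
    rec = bytearray(parity)
    for payload in received.values():
        for i, b in enumerate(payload):
            rec[i] ^= b
    # Stripping zero padding is simplistic; a real scheme would signal the
    # original packet length explicitly.
    received[missing[0]] = bytes(rec).rstrip(b"\x00")
    return received
```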
• the network N-X could be the Internet, and the optimizers X-1 and X-2 could each insert / evaluate the optimization O1 when forwarding / receiving the data from corporate networks or home networks N-A and N-B to / from the Internet N-X.
• the network N-X could also be an Internet access shared by multiple users and / or applications, for example at a public Internet hotspot (or special Internet hotspots such as those offered on trains, planes, etc.). In this case, increased packet losses may occur on this shared Internet access. While in principle all the arrangements shown in FIG. 1 can also be used here, this application scenario can be well described by means of the arrangement of FIG. 1b).
• Terminal A could then be a laptop, for example, which uses Internet access via WLAN and a hotspot together with other users and / or its own applications. If this Internet access is overloaded and this leads to packet losses, the user of terminal A can, for example, insert FEC redundancies into his data streams for his VoIP data (and / or all his application data towards endpoint B) using an optimizer X-1 (which, for example, is implemented locally on his laptop).
• Optimizer X-2 is either already in the data path to endpoint B, or optimizer X-1 ensures, for example by suitable destination addressing of the generated packets, that they are forwarded to X-2 via N-X.
• X-2 could be on the home network of endpoint A's user, or on endpoint A's corporate network, or optimizer X-2 could be operated by a service provider that allows its service users (such as the user of endpoint A in this example) to transfer their data to X-2 with the optimization O1.
• X-2 is then connected, for example, to a transmission network N-B, via which endpoint B can be reached.
• the network N-B could also be (again) the Internet, into which the data partially or completely recovered using O1 can be fed (back) and then reach endpoint B.
• the inserted FEC redundancy information may also be designed in such a way that it can be used and "evaluated" directly by one of the endpoints (for example endpoint B) or the application involved.
• alternatively, the applications used may implement the FEC redundancy information directly, for example end-to-end in their application transfer protocols, rather than having the FEC redundancy information inserted by an optimizer X-1 largely detached from the application, as shown for example in FIG. 1.
• the optimizer X-1 can, in the simplest case, duplicate the incoming data packets (or, more generally, send them twice or multiple times, either always and / or, for example, only when packet losses are dynamically detected); see the sketch below. Because many transmission protocols (including the widely used IP protocol) do not guarantee that packets are not duplicated during transmission, many receiving protocol implementations and applications tolerate (or ignore) duplicate packets. Thus, in addition to the arrangements of FIG. 1, arrangements according to FIG. 2 also become possible in which optionally only one optimizer component X-1 is used.
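The following minimal sketch shows this simplest form of redundancy: the sending side duplicates packets, while an optional receiving side drops copies it has seen recently. Identifying duplicates by a hash of the packet bytes and the history length are illustrative choices, not part of the original description.

```python
from collections import deque

class Duplicator:
    """Sending side (e.g. X-1): emit each packet twice."""
    def optimize(self, packet, send):
        send(packet)
        send(packet)   # duplicate; could be made conditional on observed loss

class Deduplicator:
    """Optional receiving side: drop copies already seen recently."""
    def __init__(self, history=1024):
        self.recent = deque(maxlen=history)

    def accept(self, packet):
        h = hash(packet)               # illustrative duplicate detection
        if h in self.recent:
            return False               # drop the duplicate
        self.recent.append(h)
        return True
```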
• RTP is typically used above UDP and IP - therefore RTP packets have a very high RTP, UDP, IP, etc. header overhead compared to the often relatively small volume of voice data contained in an RTP packet.
• RTP packets contain timestamps and a voice data part (payload part) of variable size.
• In an application scenario with, for example, VoIP and RTP, an implementation according to the invention could therefore insert redundancy into RTP by having optimizer X-1 insert the user data of the previously received RTP data packet into each RTP data packet received from endpoint A, in front of the newly received user data (thus, for example, sending double the user data volume per RTP packet), and adapt the timestamps accordingly (which in the RTP example indicate, for example, the timestamp of the first user data contained in a packet).
  • a corresponding implementation would generate significantly less overhead than a complete duplication of the RTP packets.
  • VoIP RTP protocol and / or application implementations would often be able to easily understand these modified RTP packets even without the use of an optimizer X-2, and be able to independently (largely transparently) use the incoming redundancy information to compensate for packet losses.
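A heavily simplified sketch of this RTP redundancy idea is shown below: the previous payload is placed in front of the new one and the RTP timestamp is moved back by one frame so that it refers to the older payload. CSRC lists, header extensions and the signalling a scheme such as RFC 2198 would use are deliberately omitted; `timestamp_step` (samples per packet) is an assumed parameter.

```python
import struct

def add_redundancy(rtp_packet, previous_payload, timestamp_step):
    """Sketch only: piggyback the previous packet's payload onto the current
    RTP packet and adjust the timestamp; assumes a fixed 12-byte RTP header."""
    header = bytearray(rtp_packet[:12])
    payload = rtp_packet[12:]
    ts = struct.unpack(">I", header[4:8])[0]                  # RTP timestamp field
    header[4:8] = struct.pack(">I", (ts - timestamp_step) % 2**32)
    return bytes(header) + previous_payload + payload
```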
• optionally, an optimizer X-2 can nevertheless be used for such packets and evaluate / remove the redundancy information before forwarding.
• arrangements in accordance with FIG. 2 are therefore also possible in this example.
• the optimizer X-1 can also design the optimized data stream such that the information necessary for the optimization is "visible" only to the optimizer X-2 but is not perceived by endpoint B (in the example above). For example, the information needed for the optimization can be transmitted in separate packets and addressed differently (for example to the optimizer X-2). Also, the information necessary for the optimization may be "hidden" in the packets destined for endpoint B.
• a "larger" packet of one layer (for example, the network layer) may contain a "smaller" packet of a higher layer (e.g., the transport layer), such that there is still room for further information after the "end" of the transport layer packet.
• the network layer can be, in particular, IP (for example IPv4 or IPv6); at the higher layer, UDP, TCP, ICMP, DCCP, SCTP and other IP-based tunneling or transport protocols can be used.
• a static configuration can, for example, determine the amount of inserted redundancy information, and / or a dynamic adjustment can be made, for example based on a dynamically estimated and / or measured packet loss rate.
• the FEC method itself can, for example, also optionally be modified and / or exchanged.
• an optimizer receiving an optimized data stream could first evaluate the FEC information and request a corresponding retransmission in case not all packets / information have been received or restored.
• if a retransmission / ARQ method is used as part of the optimization, it may be advantageous to request retransmissions only if, for example, the resulting and / or expected delay does not become "unacceptable"; see the sketch below. When a transmission delay would be unacceptably large depends, inter alia, on the specific application scenario and / or the protocols and / or applications used. For example, corresponding limit values could be configured, measured, or derived from the protocols and / or applications used and / or their parameters. Also, they could be stated absolutely and / or relatively (e.g. relative to an RTT).
• it can be advantageous if such retransmissions and / or control information are exchanged over other transmission paths that have a relatively short RTT and / or low packet loss rates, for example because the optimizers receiving the optimized data streams then obtain this information more quickly, for example in the event of packet losses, and can thus forward the (recovered) packets and / or information with less delay. It may also be advantageous, for similar reasons and depending on the application scenario, to transmit certain data packets / certain information (such as FEC and / or retransmissions) with a higher priority and thus, for example, to reduce the transmission delay and / or loss rates for these data packets / information.
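One way such a "retransmit only if it still pays off" decision could look is sketched below. The delay budget, the optional RTT-relative limit and the simple completion estimate are assumptions for this example, not values taken from the document.

```python
def should_request_retransmission(rtt_estimate, delay_budget,
                                  already_waited=0.0, relative_limit=None):
    """Sketch: decide whether an ARQ retransmission can still arrive in time.

    delay_budget may be configured, measured or derived from the application
    (e.g. an interactive VoIP delay target); relative_limit optionally expresses
    the limit as a multiple of the RTT."""
    if relative_limit is not None:
        delay_budget = min(delay_budget, relative_limit * rtt_estimate)
    expected_completion = already_waited + rtt_estimate   # request plus retransmission
    return expected_completion <= delay_budget
```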
• An optimizer that is on a path ahead of a bottleneck can often help prevent or limit such congestion, for example by applying the optimization methods described above, such as suppressing, rearranging, retaining or compressing packets, or by otherwise influencing the data stream so that, for example, the behavior of the participating end systems contributes to a regulation of the queues.
• a difficulty is the optimizer's lack of information about the current state of the bottleneck, such as the achievable bit rate, the current size and structure of the queue, the prioritization algorithms used, the proportion of the data flow currently running through the optimizer relative to the total data flow through the bottleneck, and the behavior (and expected behavior) of the share not running through the optimizer. It may be advantageous for the optimizer to derive knowledge about these state parameters, albeit often incomplete and trailing, from observations of the packet streams.
• the optimizer could measure and / or track transmission rates and / or conclude from the occurrence of retransmissions (or, when ECN according to RFC 3168 is used, from related signals such as "congestion experienced" or ECN echo and CWR at the transport level) and / or from other feedback signals, such as RTCP or ROHC feedback, that, for example, congestion-induced losses occur at certain bit rates. It may be advantageous for the optimizer to derive statements about the course of the RTT and to compare them with the observed rates. It can also be advantageous if the optimizer includes the expected behavior of the end systems in its forecasts - for example, an expected retransmission can be avoided by prioritizing a data packet in time - and / or influences this behavior in a targeted way, for example by suppressing unnecessary retransmissions and / or by suppressing and / or delaying ACK or data packets, thereby slowing down a transmitter that, for example, threatens to overload the bottleneck.
• the optimizer can also actively probe the bottleneck by communicating with a remote station even without directly using the data transmission triggered thereby, for example with the above-mentioned ICMP techniques (echo request / echo reply, "ping"), and / or by piggybacking information on the user data, and / or by using one or more management, diagnostic and / or measurement interfaces to one, several or all network elements that implement and / or cause the queues and / or pass on appropriate information. Many of these measures can be realized with unidirectional as well as bidirectional observation / influencing of at least a part of the data flowing through the bottleneck.
• suitable target specifications can be used (for example, a maximum desired RTT, or another target derived from suitable characteristic values such as a combination of loss rate, RTT and RTT variance / variability, for example according to the Padhye-Firoiu equation for specific throughput values of TCP connections); see the sketch below.
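For reference, the widely cited approximation by Padhye et al. can be evaluated as follows; this is a standard formula quoted here only as one possible characteristic value, and the default choices (RTO of 4 RTT, one packet acknowledged per ACK) are assumptions for the example.

```python
from math import sqrt

def padhye_throughput(mss, rtt, p, rto=None, b=1, w_max=None):
    """Approximate steady-state TCP throughput in bytes/s (Padhye-Firoiu-style).

    p: loss event rate, b: packets acknowledged per ACK, w_max: optional
    receiver window limit in packets."""
    if p <= 0:
        return float("inf") if w_max is None else w_max * mss / rtt
    rto = rto if rto is not None else 4 * rtt
    denom = (rtt * sqrt(2 * b * p / 3)
             + rto * min(1, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
    rate = mss / denom
    if w_max is not None:
        rate = min(rate, w_max * mss / rtt)   # window-limited case
    return rate
```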
• the goal can also be to provide the required quality of service with a higher reliability than would be possible without these measures, not only for the sum of the data streams and / or groups of data streams but also for individual / preferred data streams (such as interactive VoIP).
• such an optimization can therefore be realized, for example, by optimizers X-1 and X-2, which can be arranged largely freely in the data path between the end points A and B.
  • the two optimizers jointly optimize the use of a bottleneck in the network N-X.
• X-1 and / or X-2 could use a potentially distributed algorithm for obtaining relevant characteristic parameters of the bottleneck(s) in network N-X.
• X-1 and X-2 can mutually send active sounding packets and / or piggyback information on packets that are sent anyway, and thereby estimate, for example, the RTT and possibly its course.
• X-1 can include the inflow and X-2 the outflow of data packets in the calculations and thus make a more refined statement about the current size of the queue(s).
• the outflow of data packets becoming visible at X-2 allows statements about the achievable bit rate and the current packet loss rate.
• X-1 can react to the parameters determined jointly with X-2 by influencing (for example delaying, suppressing, duplicating, reordering, rewriting packet fields, for example changing the offered window of) the data packets flowing in the forward direction (in the example described, from endpoint A to B). X-2 can do this too; however, the influence is then mediated indirectly via the return traffic from B to A; these influences can affect both B and A directly. A sketch of such a joint estimate follows below.
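The sketch below shows one possible shape of such a distributed estimate: X-1 counts bytes sent into the bottleneck, X-2 reports bytes received, and the difference approximates the currently queued (plus in-flight) data; piggybacked probe timestamps yield RTT samples. How the two optimizers exchange these counters is deliberately left open, and all names here are illustrative.

```python
import time

class BottleneckEstimator:
    """Sketch of a joint X-1/X-2 estimate of queue size and RTT."""

    def __init__(self):
        self.bytes_in = 0        # counted at X-1 (inflow)
        self.bytes_out = 0       # reported by X-2 (outflow)
        self.rtt_samples = []

    def on_sent(self, size):
        self.bytes_in += size

    def on_report(self, bytes_received_total, probe_sent_at=None):
        """Called when X-2's report arrives; probe_sent_at is an optional
        piggybacked timestamp for an RTT sample."""
        self.bytes_out = bytes_received_total
        if probe_sent_at is not None:
            self.rtt_samples.append(time.time() - probe_sent_at)

    def queued_estimate(self):
        return max(self.bytes_in - self.bytes_out, 0)

    def smoothed_rtt(self, alpha=0.125):
        srtt = None
        for sample in self.rtt_samples:
            srtt = sample if srtt is None else (1 - alpha) * srtt + alpha * sample
        return srtt
```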
• in an alternative embodiment, or in combination with one or more of the aforementioned methods, an optimizer can also be provided with feedback and / or information (for example about queue information of subsequent components and / or bandwidths for individual types of data and / or priorities).
• This information could, for example, indicate how many bytes for certain types of data and / or in general can be sent to subsequent components without causing queuing or while causing only relatively little queuing, and / or be associated with further information on current queue levels, for example depending on the data type and / or priority.
• Advantageous transmission types and interfaces for this information exchange / feedback could be locally transferred control information and / or other interfaces such as the /proc file system interface of the Linux operating system, which is often used by drivers.
  • transmission interruptions often result in the loss and / or delay of data packets, which can interfere with the establishment and / or operation and / or termination of a communication relationship.
  • a delay may occur, for example, if the packets sent during the transmission interruption have been temporarily stored in a router (ie its queue) until the transmission interruption is over.
  • caching generally only relates to (relatively) few packets and to a (relatively) short interruption period.
• packet loss can also affect the transmission delay, for example if a reliable and / or order-preserving transport protocol (for example TCP, SCTP) is used.
• as a result of the interruption, it may happen that the communication relationship is terminated by the corresponding protocol.
• such a termination is often propagated up to the application protocol, and the communication relationship between the applications then does not materialize, cannot continue, or must be re-established later (after elimination of the interruption).
  • a new address of a communication partner can lead to the (temporary) interruption or abort of a communication relationship.
• such a division is referred to as connection splitting or split connection when applied at the transport or application layer;
• the invention described herein is not limited to connections; connections do not even have to be recognized as such. Instead - as partially explained below - packets, link-layer and other frames or other transmitted information units may also be used as a basis.
  • Individual communication relationships or even groups of communication relationships can be considered together.
  • the methods described below which are advantageous for the invention depending on the application scenario apply to all arrangements of FIGS. 1 and 2 and any combinations of these (that is to say inter alia also for the case in which an optimizer is present).
  • the division into multiple sections can be done on any layer of the OSI reference model. Often, such a subdivision occurs at the IP, HIP, transport or application layer.
  • a solution according to the invention can be used on one and / or more layers.
• the solutions just mentioned provide that the at least one optimizer X shields the section from one optimizer to an endpoint from any transmission interruptions on the section from the optimizer X to another optimizer and / or to another endpoint.
• the communication relationship can in principle be maintained on one or more layers - for example by maintaining the IP addresses as in the case of Mobile IP, by updating the contact addresses as in the case of HIP (Host Identity Protocol), or by corresponding measures for a TCP connection.
• when splitting is used, the transport connections on the sections towards the endpoint(s) persist, but this method alone is often inadequate for the applications.
• applications and / or application protocols and / or name services used by applications for name resolution have their own timeouts. If an operation (e.g., a request) initiated by an instance of an application (for example, on an endpoint A) is not completed within that time window (for example, by a corresponding response), that operation may - possibly after one or more repeated attempts, if these are provided for - be declared failed.
  • an error message may be presented to the user.
• for example, the message can be delivered that the name of the server is not known and / or the server is currently not reachable and / or a page could not be found or loaded.
  • the requested web page may be displayed incompletely (for example, missing text, missing images).
  • the responsibility for reloading a principally available web page is delegated to the user: he may decide to initiate the corresponding operation again (if necessary repeatedly), and he can decide when and how often he wants to try this.
• an optimizer receives information (e.g. payload or control information) from one section (e.g. X1-X2 in the transmission network N-X) within one or more communication relationships - for example from another optimizer or another endpoint - and passes this information on to another section (for example, A-X1) towards an endpoint (for example, A).
• the optimizer does not forward all information as it becomes available. Instead, the optimizer passes the information on only with a delay, so that (for example, before an operation is completed) some of the information remains in the optimizer. If a transmission interruption occurs, the optimizer can forward this remaining information in arbitrarily small units (bits and / or bytes and / or sequences of bits and / or sequences of bytes and / or packets and / or sequences of packets and / or packet fragments and / or sequences of packet fragments and / or frames and / or sequences of frames) to the endpoints.
• This mechanism may be applied on any one or more protocol layer(s) (particularly, but not limited to, layers 2 and / or 3 and / or 3.5 and / or 4 and / or 5 and / or 6 and / or 7).
• when forwarding information, it may be advantageous if the forwarding does not occur in individual bits or bytes or other arbitrary units mentioned above, but follows the structure of the higher protocol layers.
  • These may be the data structures (eg packet formats, data formats, operators, parameters, queries, responses, HTML, XML and / or other documents, etc.) and / or headers and / or payloads.
• if an application uses its own data records or if the communication takes place in whole units of such data, then in some cases it may be useful or even necessary to forward these data records as a whole. In other cases, it may be necessary to pass on these records only piecemeal. Combinations may also be necessary or advantageous. Which procedure is suitable in which case depends on the applications and / or application protocols.
• an example of such a protocol element is a NOP (no-operation) element.
• if a protocol provides for an adjustable and / or negotiable timeout, a higher timeout may be set when a transmission interruption occurs.
  • a new timeout can be chosen so that it corresponds to the expected duration of the interruption delay or higher. It can be set to a fixed or otherwise dynamically determined value. It can also be adapted according to the information to be forwarded which is still available in the at least one optimizer. The choice may also be determined by any combination of one, some or all of the above and / or other parameters.
• it may be advantageous for the optimizer to generate protocol elements (for example, in response to a request and / or as a message and / or as its own request) informing the endpoint that it should not make further requests for a particular time span (e.g. a Retry-After header); see the sketch below.
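As a hedged illustration for an HTTP-based application, an optimizer could answer a request during an interruption with a 503 response carrying a Retry-After header instead of letting the request time out. The response details below are assumptions for this example; whether such an answer is appropriate depends on the application protocol.

```python
def interruption_response(expected_outage_seconds):
    """Sketch: build an HTTP 503 answer telling the client when to retry."""
    body = b"Service temporarily unreachable, please retry later.\r\n"
    return (b"HTTP/1.1 503 Service Unavailable\r\n"
            b"Retry-After: " + str(int(expected_outage_seconds)).encode() + b"\r\n"
            b"Content-Length: " + str(len(body)).encode() + b"\r\n"
            b"Connection: keep-alive\r\n"
            b"\r\n" + body)
```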
• it may likewise be advantageous for the optimizer to generate protocol elements and / or content and / or to modify and / or supplement forwarded information, in order to notify the application and / or the user that a particular operation is still in progress.
  • the mechanisms described above may also be used if there is an interruption at the time an application wishes to establish a communication relationship.
  • the optimizer can simulate the establishment of a communication relationship and thereby shield the application from an existing interrupt.
• optimizers implementing connection splitting could, for example, accept an incoming connection (for example a TCP connection) according to the connection splitting, but delay the onward connection setup - and thus the complete communication relationship between A and B - and possibly repeat it until the interruption is no longer present; see the sketch below.
• optimizers not operating according to the connection splitting method could, for example, simulate the establishment of a connection or extend the time span before the involved protocols assume that the connection cannot be established.
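The following sketch illustrates the connection-splitting variant: the incoming TCP connection is accepted immediately, shielding endpoint A, while the connection towards B (or towards X-2) is retried until the interruption ends or a limit is reached. Addresses, retry interval and the overall wait limit are illustrative assumptions.

```python
import socket
import time

def split_and_delay(listen_addr, upstream_addr, retry_interval=2.0, max_wait=60.0):
    """Sketch: accept the local connection at once, establish upstream later."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(listen_addr)
    srv.listen(1)
    client, _ = srv.accept()          # endpoint A sees an established connection immediately
    deadline = time.time() + max_wait
    upstream = None
    while upstream is None and time.time() < deadline:
        try:
            upstream = socket.create_connection(upstream_addr, timeout=retry_interval)
        except OSError:
            time.sleep(retry_interval)   # interruption still present: keep trying
    return client, upstream              # upstream is None if the wait limit was exceeded
```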
• in case b), it may be advantageous, for example, not to always delay information, but to collect information for delayed forwarding in at least one optimizer only if there is an expected need for such buffered information, since otherwise the performance of the communication relationship may decrease.
• a decreasing data rate or, if available, other indicators such as falling signal levels (e.g., signal strength and / or signal-to-noise ratio, SNR) could indicate an impending transmission interruption, and an increasing level could indicate its end.
  • GPS and / or motion indicators could be used.
  • Heuristics such as movement values and / or time values and / or network changes of the past could also be used.
  • external systems could indicate an imminent or potential impending transmission interruption or give indications of their likelihood.
• past experience (stored, for example, in one of the optimizers) could provide important clues.
• the participating transmission networks themselves or (digital) maps with information about the network coverage could give hints. The same could be done by a user in at least some cases. Individual, some and / or all of these and / or other pieces of information may be combined to make such predictions.
  • the invention can be used unidirectionally and / or bidirectionally.
  • the two transmission directions can be operated independently or dependent on each other.
• the independent and / or dependent operation may relate to individual data packets, individual communication relationships and / or groups of communication relationships, and this reference or the independent and / or dependent operation may change once and / or repeatedly over time.
• the optimizers can ensure that receiving optimizers are able to obtain the optimized data stream and / or to interpret the optimized incoming data packets and / or to forward them accordingly.
• for example, purely (often anyway) unidirectional FEC methods can be used, the available bandwidth can be determined without protocols that presuppose bidirectionality (such as PING), and / or alternative transmission paths can be used in the return direction, for example only for a smaller amount of data and / or only for control information.
• Optimizers can be integrated into the data path largely transparently in many application scenarios, i.e. the applications do not need to know about them and therefore do not necessarily have to address the data packets directly to the optimizers.
  • the optimizers could also work as a proxy.
  • a proxy setting is often supported by many applications, such as browsers, without changing the application itself and without very complex configuration.
  • an automatic proxy detection is provided, so that the actual proxy (or its addresses) does not always have to be configured directly in the applications.
  • the protocols used for automatic proxy detection also allow optimizers and / or external components to automatically specify the optimizers as proxies directly to the applications, so that, for example, at least one manual proxy configuration may not be necessary per application.
  • these methods often also allow for reconfiguring the proxies, for example in the event of a failure, as load sharing and / or for directing the data streams to other proxies and / or through other networks.
• it may be advantageous to tunnel the data exchange between the components (in particular also between participating optimizers such as X-1 and X-2 in FIG. 1) through additional protocols.
• as additional protocols, a variety of known protocols, protocols specially designed for this purpose, or combinations of both can be used.
• the communication over a TCP tunnel could consist of one or more parallel TCP connections; the use of, for example, the IPsec protocol and / or IPsec NAT traversal could also be advantageous, since these can implement additional procedures such as encryption at the same time.
• tunneling packets in different network-layer protocols may be advantageous (for example, if the optimizers also support IPv6 while (parts of) the network between the optimizers potentially only supports IPv4; the same applies to similar or reverse scenarios). Both known protocols and protocols specially designed for this purpose can be used for tunneling packets of different network-layer protocols.
• a corresponding "tunneling" can also take place very indirectly, for example by exchanging address information only at the beginning of recognized communication relationships. If, for example, a new IPv6 communication relationship begins, the optimizers can allocate an identification number for it and, for example, only initially communicate the assignment of this identification number to the address information to another optimizer.
• the tunneling protocol between the optimizers could then be based on IPv4, while IPv6 address information for the detected communication relationships is exchanged internally in the tunneling protocol (and / or similarly in reverse scenarios); see the sketch below.
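A minimal sketch of this indirect "tunneling" idea follows: full address information is carried only in the first packet of a recognized communication relationship, and subsequent packets carry just a short identification number. The 4-byte ID, the flag byte and the length-prefixed framing are illustrative choices; the peer is assumed to build the same mapping table from the first packet it sees.

```python
import struct

class FlowIdTunnel:
    """Sketch: map recognized communication relationships to short flow IDs."""

    def __init__(self):
        self.ids = {}        # (src, dst) -> flow id
        self.next_id = 1

    def encode(self, src, dst, payload):
        """src/dst are the full address information as bytes (e.g. packed IPv6)."""
        key = (src, dst)
        if key not in self.ids:
            fid = self.next_id
            self.next_id += 1
            self.ids[key] = fid
            # first packet of the flow: carry the full addresses once (flag = 1)
            header = (struct.pack(">IB", fid, 1)
                      + struct.pack(">H", len(src)) + src
                      + struct.pack(">H", len(dst)) + dst)
        else:
            # subsequent packets: only the short identification number (flag = 0)
            header = struct.pack(">IB", self.ids[key], 0)
        return header + payload
```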
• optimizers can, for example, autonomously detect packets and / or data streams of different protocols and / or be instructed and / or supported by external components via control information / control signals and / or by marking of the packets themselves. Many procedures are possible for this purpose.
  • Measurements such as the packet loss rate and / or RTT and / or transmission interruptions are at least often also possible if, as in the arrangements of FIG. 2, no optimizer X-2 is used.
• the optimizer X-1 can, for example, adapted to the deployment scenario, exploit other functionalities implemented in endpoint B and / or in its environment or in the transmission path to endpoint B in order to access the required information.
• the PING command of the ICMP protocol (with standard PING packet sizes or with typical packet sizes as they appear in the data stream to be optimized) could, for example, be used towards a corresponding remote site - without specific X-2 optimizer components and without special optimization functions there - to estimate packet losses and / or RTTs and / or to detect transmission interruptions.
  • similar information can also be obtained via the RTCP protocol, via which VoIP implementations provide feedback to the opposite party about the received data.
  • a combination with a compression of the transmitted user data and / or transcoding / change of the content coding can take place.
• both lossless and lossy compression methods are available (such as reducing image resolution, image quality, or filtering out optional additional information, etc.). This applies generally to the use of arrangements according to FIG. 1.
• it is also possible to use these techniques in arrangements according to FIG. 2, for example when reducing image resolution, image quality, filtering out additional information, or changing the content encoding / transcoding (though often depending on the functionality / supported content codings of the applications used).
• the HTTP protocol allows the transmitted Web objects to be compressed directly by the Web server or even by intermediate components such as an optimizer. Because common web browsers often support several of these compression methods, an optimizer can optionally also compress web objects with one of these compression methods, and could even leave the compression in place all the way to the receiving end system / application (in this case the web browser).
  • Protocol enhancement techniques exist for a variety of protocols and objectives and / or networks. Very often used are, for example, protocol enhancement methods for TCP and / or HTTP and / or file-sharing protocols (such as SMB, CIFS, NFS, NetBios). These protocols are either replaced for example for certain transmission sections by other protocols and / or modified protocol parameters in the terminals and / or the exchanged data packets. There are many potential targets for such protocol enhancement methods.
• it may be the task of TCP protocol enhancement to allow high transmission bandwidths even with high transmission delays (and / or transmission delays remaining despite optimization) and / or high packet loss rates (and / or packet loss rates remaining despite optimization), and / or to keep the protocol overhead, for example caused by control packets, low.
• HTTP protocol enhancements, for example, are intended to reduce the page load times experienced with a usual Internet browser, also for networks with high transmission delays and / or high packet loss rates. Ways to do this include, for example, intermediary proxies and / or proactively sending objects contained on web pages or even behind links.
  • HTTP is also an example of how it may be beneficial to combine the various methods mentioned here (but HTTP is representative of many protocols that apply to this, such as, but not limited to, many text-based protocols such as SIP, RTSP, SOAP, SDP, etc.).
  • HTTP uses TCP and IP, so a relatively large protocol hierarchy is used, with higher-layer protocol layers often directly benefiting from optimizations for lower-level protocol layers. In this example, optimization could, for example, reduce packet loss rates and / or RTTs.
  • HTTP often benefits directly from this optimization. But HTTP often benefits from the optimizations already by relying on TCP, and TCP in turn often benefits significantly from low packet loss rates and / or shortened RTTs.
  • HTTP is also a good example that in a corresponding deployment scenario, additional methods such as HTTP-specific protocol enhancements, compression, encryption and / or header compression methods can often be advantageous in addition to the optimizations.
• the individual method / process types potentially to be combined with the optimization could be realized independently of each other and / or independently of the optimization, which among other things increases flexibility and / or interchangeability.
  • implementation in combined system components and / or devices potentially reduces the overall complexity and / or configuration effort.
  • the individual types of methods could be implemented and used in a simplified manner in the case of a completely / partially integrated realization and / or in a realization in which at least individual control information is exchanged between the components of the method types.
• in many application scenarios it is advantageous for the optimizer / the optimization to be used in combination with other methods (such as payload compression, header compression, encryption, protocol enhancement, etc.).
• these other methods can be implemented, for example, directly within the optimizers or, for example, as external, independent components. Also, these other methods may, depending on the deployment scenario and arrangement chosen, be applied to the packets before or after optimizing the packets / data streams.
• packet headers (or parts of them) and / or payloads (or headers of higher protocol layers) may become unrecognizable for subsequent components (or more generally for other components) due to encryption and / or content / payload compression and / or header compression, as long as they are, for example, not decrypted and / or decompressed again.
• Examples here include marking packets (for example via the TOS field of the IP header), signaling / classifying packets / data streams based on address information (which, for example, is made known to other components via configurations and / or signaling protocols), and the use of tunneling protocols and / or special protocols that carry otherwise unrecognizable information of the packets / data streams, for example in additional information headers and / or control information.
• a component could pass these additional information headers on completely to subsequent components.
• the data packets may also be transmitted not only on one path through the network / subnetworks but also in parallel over several paths and / or transmission sections.
• the resulting advantages can be, for example, load sharing and potentially lower transmission delays and / or packet loss rates, an increase in the total available transmission capacity and / or, in particular in the case of redundant transmission of all and / or some of the information, also higher reliability and / or robustness against, for example, leaving reception areas and / or switching between networks.
  • different methods may be advantageous for the division of the data packets to be transmitted over several paths through the network / subnetworks and / or in parallel over several transmission sections.
  • a division taking into account the transmission delay / RTT of the individual paths can be advantageous.
• a corresponding method could, for example, regulate the data volumes routed via the individual paths in such a way that the individual paths have, for example, a similar RTT and / or an RTT which, for example, does not exceed a configured and / or determined maximum, or exceeds it as little as possible; see the sketch below.
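The sketch below shows one simple way such an RTT-aware split could be realized: paths with a lower smoothed RTT receive proportionally more data, and paths above a configured maximum are avoided. The inverse-RTT weighting and the fallback behavior are illustrative assumptions.

```python
import random

class MultipathScheduler:
    """Sketch: pick a transmission path weighted by its smoothed RTT."""

    def __init__(self, paths, max_rtt=None):
        self.rtt = {p: None for p in paths}   # path -> smoothed RTT in seconds
        self.max_rtt = max_rtt

    def update_rtt(self, path, sample, alpha=0.125):
        old = self.rtt[path]
        self.rtt[path] = sample if old is None else (1 - alpha) * old + alpha * sample

    def pick_path(self):
        candidates = {p: r for p, r in self.rtt.items()
                      if r is not None and (self.max_rtt is None or r <= self.max_rtt)}
        if not candidates:
            return next(iter(self.rtt))       # no measurement yet or all paths too slow
        weights = [1.0 / r for r in candidates.values()]   # lower RTT -> chosen more often
        return random.choices(list(candidates), weights=weights, k=1)[0]
```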
  • control signals of the user or external systems can also influence and / or directly control, for example, nature, extent, amount, optimization method / its parameters and / or the selection of the data streams included in the optimization.
• there may be an optional or always present / used functionality of the optimizers which controls, for example, the various optimizations, optimization scopes, parameters and / or the inclusion of data streams in the optimization, also dynamically and / or governed by inputs from the user and / or external systems such as a network management system.
• Such a mechanism may be implemented distributed between two or more optimizers, on one side only in a single optimizer / compressor, and / or even externally to the optimizers.
  • the detection mechanism may be passive (eg, only observe packet flows) or active (eg sending out packets to identify optimization possibilities).
• the mechanism (whether implemented in one or more optimizers or externally) can obtain the necessary information from the network (such as through routing, middlebox signaling protocols such as RSVP, NSIS, SOCKS, MIDCOM, etc. and / or other control protocols) and / or from a network management system, and / or determine it through the interaction of two or more optimizers. Hints can also be given by initial and / or continuous configuration.
  • Such a mechanism recognizes dynamically, partially or completely independently, which methods can be included in the optimization and / or which additional (possibly combined with the optimization realized) methods should be used. This determination of the compression possibilities can take place in advance of the commissioning of the optimization, before / during the establishment of one or more communication relationships and / or continuously during the active optimization.
• the mechanism can automatically detect errors during operation (for example, from the transmission or non-transmission of the optimized data packets themselves, their loss rates and / or their other transmission characteristics) and draw conclusions about changes in the transmission path (new routing, addition of one or multiple other nodes, load sharing over multiple routes, etc.). Based on this information, the mechanism can then adjust the header optimization accordingly.
• Such a mechanism can be active simultaneously in various forms (different dynamic determinations and / or static configurations and / or negotiations) and can also be operated in parallel with and / or offset in time to a static configuration.
• the identification of which mechanisms are to be applied to which (parts of) communication relationship(s) and / or groups of communication relationships and / or the entire data stream can again be done statically or dynamically and / or depend on the properties of the data packets and / or the protocols used and / or on the network load (present, past, expected in the future) and / or on the observed transmission characteristics (error rate, round trip time, etc.). Further criteria can be:
• the technical feasibility (for example, encrypted packets can be compressed less well than unencrypted packets);
• the efficiency of the optimization and / or the effort involved (for example, computing power, memory, etc.).
• An optimizer may identify packet types, in particular name service and data service packets, and prioritize, for example, name service packets over some or all data service packets; selected data service packets can also be prioritized over name service packets and other data service packets. An optimizer can also recognize repeated name service and data service packets.
• an optimizer may then suppress these unconditionally or depending on conditions; such conditions may be, for example, that the repeated packet is suppressed only if it is received within a statically configured and / or dynamically determined period of time, if it is received outside of such a period, if the packet is a request, if the packet is an answer and / or if it is neither a request nor an answer.
• the replacement and / or delaying of packets may follow corresponding or other rules and may also be conditional or unconditional.
• the optimizer may perform the replacement of the packet with a semantically similar, comparable and / or equivalent packet, with or without a time delay.
• the optimization may be made contingent on certain involved endpoints and / or applications and / or the load in the transmission networks and / or on individual / groups of transmission sections and / or the available memory and / or the CPU / processor load of the components involved. Depending on individual or combinations of such criteria, the optimization can be fully / partially activated, limited and / or completely / partially deactivated, or corresponding decisions can be made for methods combined with the optimization.
• this decision can be made unilaterally by individual components or by the components of one transmission side, jointly by several involved components, or also by "neighboring" system components such as a network management system. In many application scenarios it is also possible to optimize connections first and later stop the optimization (and vice versa) while the connections / parts of the connections continue.
  • the invention is also suitable for use in point-to-multipoint communication (as is often the case, for example, in a satellite or terrestrial broadcast network) in many application scenarios. The same optimization methods can be used.
  • the (multiple) optimizers and / or (multiple) end systems / applications may have different capabilities and that the transmission paths to these may have different characteristics.
• An optimizer should then enable the most important, the majority and / or all optimizers to interpret the optimized data packets. This can be done by selecting optimization methods which are suitable for all intended recipients, and / or an optimizer may send differently optimized data packets / data streams to individual receivers and / or groups of receivers, specific to the particular transmission path and / or optimization.
• And / or an optimizer may send additional information (in existing and / or other data and / or control packets) to individual and / or groups of receivers (and / or nodes of the transmission networks) to ensure successful routing and / or reception and / or to enable evaluation of the received optimized data stream.
• the same is true in many cases for multipoint-to-multipoint communication. This can often also be mapped onto multiple point-to-multipoint communication relationships.
  • an optional combination with header compression techniques is also available in order to reduce the transmission volume.
  • this often implicitly reduces, for example in the case of congested networks / transmission sections, the packet loss rates and, in particular in the case of narrowband networks / transmission sections, the RTT.
• the use of header compression can often (such as with VoIP and RTP) make a significant contribution to reducing the bandwidth requirement increased by the redundancy information, in whole or in part, or even to below the original bandwidth requirement.
• the invention also makes it possible to use the optimization (for example O1) over virtually arbitrary and / or even changing networks / network paths / transmission sections. Therefore, and among other things in order to impose, despite at least relatively efficient header compression techniques, only relatively low or at least largely determinable requirements on the type of networks / network paths / transmission sections used, it is also advantageous to optionally use the invention in combination with partial header compression methods - as described in Section II below.
  • This section describes a specific form of optimization. This aspect is in the field of packet-oriented data transmission and the reduction of overhead generated by packet headers. The invention makes it possible to save packet headers both completely and partially in the transmission. The prior art has already been described at the beginning.
  • a header compression as described below is a specific expression of an optimization function, a compressor a possible embodiment of an optimizer.
  • the system arrangements of FIGS. 1 and 2 can be used, wherein the optimizers of FIGS. 1 and 2 are designed as compressors and / or decompressors and the optimization of FIGS. 1 and 2 are the specific characteristics of compression.
• for the system arrangements for the specific expression of compression, reference is made to the above descriptions of FIGS. 1 and 2. The explanations apply analogously to optimization and compression, to optimizers and (de)compressors.
  • header compression and other optimizations may be arbitrarily integrated and / or linked to the same and / or other components.
  • the remarks in Section I on networks and specific network technologies also apply analogously.
• some data of the parallel or nested optimizations or compressions can be used in whole or in part jointly by both optimizations or compressions. Examples include connection IDs, length fields or (sub)length information, but also many other field types.
• the compression functions are not limited to information in packet headers of a particular layer. However, individual compressions may specialize in particular packet headers, particular layers, particular protocols, and / or particular applications. Individual compressions can work on individual layers or across layers. The compression may depend on the nature of the surrounding networks or paths through the network and / or on the function and / or presence of particular network elements: for example, a compression function may work differently if the packets have to pass certain other network elements such as routers, NATs and / or firewalls on the way. Different compressions (and their compressors) can coordinate with each other and / or work independently of each other.
  • compressors can also modify the contents in order to make the communication more efficient and / or performant and / or robust, or to enable communication in the first place.
  • a compression function (or compression) may not always lead to a reduction in the volume of data. If, for example, a transmission network is unable to transmit data packets of a specific type (for example, a specific application, a particular transport protocol), then a compression function can rewrite data packets in such a way that a transmission over the network in question nevertheless occurs.
  • Header compression reduces and / or removes header information on a leg between two systems in the network.
• These two systems may be the endpoints themselves as well as other nodes in the network ("in the middle", i.e. between the endpoints).
• They may be neighbors, i.e. connected directly via a transmission section of a physical network, or one or more further nodes (for example, routers) may lie between them, and the route between both systems may change over time; the latter case often (but not necessarily) exists when the systems executing the compression are two endpoints, and it may also occur in other situations and / or constellations.
  • two IP routers are neighbors if there is no other router between them and they are performing compression on the IP layer.
  • two endpoints with any number of IP routers between them are neighbors on the application layer, as long as no application proxies are used and the compression takes place only on the application layer.
  • two application proxies are neighbors when there are no more proxies between them and the application layer compression occurs.
• in header compression, part or all of the information of the header(s) to be compressed is removed and / or replaced in one system (the sending one) and reconstructed in the other system (the receiving one).
  • the two systems involved in the compression have common knowledge (context) and / or local knowledge (state information) and / or a common understanding of the compression algorithms to be used.
• This knowledge and the algorithms can, for example, be predefined and / or dynamically exchanged and / or dynamically constructed ("learned") and / or adapted in the course of one or more exchanged data packets (of one or more communication relationships) and / or independently thereof.
• the data packets generated by the compressing system must arrive essentially unaltered at the decompressing system (and, depending on the method used, in some cases, though often not all, in the correct order). For this purpose, it is necessary that any existing nodes between the two systems are able to forward the compressed data packets without distorting any information required for the decompression. This is easiest if the two systems are direct neighbors (i.e., only one "hop" apart), because there are no "interfering" network nodes in between. In this case we are talking about hop-by-hop header compression.
  • end-to-end compression refers to header compression between two endpoints. Mid-to-mid compression is used in all cases where the two systems involved in the compression are neither adjacent nor the two endpoints.
• the compression method is generally more efficient the more headers from the header hierarchy can be included in the compression. This often allows hop-by-hop techniques (which do the compression for one transmission section) to achieve much higher compression rates than end-to-end techniques (the latter cannot compress the headers that are required for forwarding through the additional nodes). However, a packet often takes paths on which hop-by-hop compression cannot be realized on each section, for example for reasons of performance of the components used, because components with installed header compression are not available for the particular task, or possibly because compression is not desired for profit / billing reasons.
• it is therefore advantageous to use a center-to-center header compression that combines some of the efficiency advantages of hop-by-hop header compression with the lower integration overhead of end-to-end header compression, at least on a particularly relevant part of the path.
• Such center-to-center header compression is most efficient when the area covered in the header hierarchy is maximized.
  • One limit to this maximization is that the systems on the path between the compressor and the decompressor will require part of the header hierarchy for their respective functions and, as mentioned above, this "bottom part" can not simply be compressed away.
• a compressor X-1 of a compression C receives a data packet and selectively compresses the headers that are not needed for forwarding the data packets by other systems. It makes sense to include as many of the headers "visible" to the compressor (i.e., present in unencrypted form) as possible.
  • header compression need not be limited to including or excluding entire headers. If a system on the path between the compressor and the decompressor requires only certain fields of a header, the other fields of this header are available for integration into the compression method if at least the structure of the header can be retained and / or reconstructed during this compression.
• the compression of header fields can be done individually for each data packet or in relation to some of the data packets.
  • data packets may refer to other (previously or later sent) data packets, thereby increasing, for example, the compression efficiency.
  • the selection of the data packets to be compressed and / or the data packets for a contiguous compression may be based on fixed predefined and / or dynamically generated rules and / or on the basis of the packet properties and / or the time sequences of the packets etc.
• the (sequences of) data packets of different communication relationships can be considered throughout independently of each other and / or some (or all) communication relationships can be considered together. Independent and / or joint consideration for compression may involve individual (arbitrary or selected by their properties) and / or all data packets. Finally, it is possible to switch back and forth over time between individual and joint consideration of the data packets of different communication relationships.
  • the compression can also increase the data volume (per packet) transmitted between the compressor and the decompressor (possibly only in the short term), for example through additional headers, larger headers, additional packets and/or other supplementary and/or redundant transmission of information. It can also be provided not to use the compression for individual data packets and/or not to reduce the data volume of individual data packets despite compression. The same applies generally or for a limited time to entire communication relationships and/or groups of data packets and/or all data packets. It can be advantageous to transmit additional control packets in one or both directions between the compressor and the decompressor in addition to the (compressed or uncompressed) forwarded data packets.
  • control information, including implicit or explicit acknowledgments about received and/or non-received data, can be exchanged between compressor and decompressor in the forwarded data packets and/or in additional control packets. It may be advantageous to retransmit some data packets or additional control packets and/or to transmit further information as separate packets and/or as additional information in other packets, from which portions of the information and/or entire packets can be recovered.
  • such a compressed data packet (and/or a sequence of data packets) is forwarded by the compressor X-1 to the decompressor X-2, in which the compression C is completely or largely reversed, so that the original data packet is reconstructed in whole or in substantial parts.
  • header compression may also begin, for example, already in one or more of the involved endpoints.
  • with reference to FIGS. 1 and 2, some exemplary embodiments will be described. It should be noted that the examples described illustrate aspects of the invention by way of example in a particular context. However, the actual aspects of the invention can also be used differently or more generally.
  • in the case of an implementation with IPv4, if the systems on the path between compressor and decompressor evaluate only the IPv4 destination address, in many cases the IPv4 source address could still participate in the compression. In one possible implementation, this could mean that repetitive source addresses are replaced by a shorter context identifier and, to preserve the structure of the header at least in its substantial parts, the remaining bits of the source address field are filled with compressed data from the higher-layer compression, for example of UDP and RTP (see the sketch below).
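As an illustration of the idea above, the following sketch packs a one-byte context identifier plus three bytes of compressed higher-layer data into the IPv4 source address field and reverses this at the decompressor. The 1+3 byte split, the name CONTEXT_TABLE and the example addresses are assumptions for illustration only, not part of the described method; the header checksum handling is only indicated here and is sketched separately further below.

```python
import struct

# Hypothetical shared context table: context identifier -> original source address.
CONTEXT_TABLE = {0x2A: bytes([192, 0, 2, 17])}

def compress_source_address(ipv4_header, context_id, carried_data):
    """Replace the IPv4 source address field (header bytes 12..15) with a one-byte
    context identifier followed by three bytes of compressed higher-layer data,
    so the header keeps its usual 20-byte structure."""
    header = bytearray(ipv4_header)
    header[12:16] = bytes([context_id]) + carried_data
    header[10:12] = b"\x00\x00"   # header checksum must be recomputed (see below)
    return bytes(header)

def decompress_source_address(header):
    """Restore the original source address from the shared context and hand the
    three carried bytes to the higher-layer decompressor."""
    context_id, carried = header[12], header[13:16]
    restored = bytearray(header)
    restored[12:16] = CONTEXT_TABLE[context_id]
    restored[10:12] = b"\x00\x00"   # checksum is recalculated after restoration
    return bytes(restored), carried

# Round-trip over a minimal 20-byte IPv4 header.
original = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0x1234, 0, 64, 17, 0,
                       bytes([192, 0, 2, 17]), bytes([198, 51, 100, 5]))
compressed = compress_source_address(original, 0x2A, b"\x01\x02\x03")
restored, carried = decompress_source_address(compressed)
assert restored[12:16] == original[12:16] and carried == b"\x01\x02\x03"
```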
  • a corresponding context identifier could also be used more generally, be common to multiple fields of one / more protocol headers, not specifically serve to compress the IPv4 source address, etc.
  • the context identifier (or its equivalent) would not need to be transmitted within the bits of the source address field; it could often be placed anywhere within the resulting packets and/or even be replaced, partially or completely (implicitly), by header information of, for example, underlying protocol layers.
  • the IPv4 source address information can also be omitted without substitution. This likewise makes it possible to fill the bits of the source address field with other data and still retain the actual structure of the header, at least in its essential parts.
  • this yields an n-bit available space, of which k bits are needed by the decompressor to reconstruct the data packet; in general, k ≤ n applies.
  • the remaining n-k bits are available for carrying (possibly also compressed) control information from other headers or for carrying user data.
  • a special case occurs when one or more headers and / or parts of one or more headers can be completely saved by the compression.
  • some headers or header fields cannot or should not simply be replaced by a more or less static context identifier.
  • this may be due to the nature of the header fields to be compressed, but objectives such as reducing complexity and/or increasing robustness and/or shortening transmission delays, etc. may also make it sensible to represent individual or some header fields not (just) by a context identifier but by additional information bits (more generally: additional information) in the compressed header.
  • such header fields could be, for example, sequence numbers contained in headers, which are included in the compressed header in all or some of the packets, for example as a difference value or, for example, reduced to their last bits/bytes (see the sketch below).
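A minimal sketch of the "last bits" variant mentioned above: only the low-order bits of a sequence number are carried, and the decompressor picks the closest value consistent with them. The 8-bit window is a purely illustrative assumption.

```python
LSB_BITS = 8          # illustrative: carry only the 8 least significant bits
WINDOW = 1 << LSB_BITS

def encode_seq(seq):
    """Compressor side: transmit only the low-order bits of the sequence number."""
    return seq & (WINDOW - 1)

def decode_seq(lsb, last_reconstructed):
    """Decompressor side: choose the candidate whose low-order bits match the
    received ones and which lies closest to the expected next sequence number."""
    base = last_reconstructed - (last_reconstructed % WINDOW)
    candidates = [base - WINDOW + lsb, base + lsb, base + WINDOW + lsb]
    return min(candidates, key=lambda c: abs(c - (last_reconstructed + 1)))

# Example: sequence number 4711 follows 4710; only one byte travels on the wire.
assert decode_seq(encode_seq(4711), 4710) == 4711
```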
  • for simplicity, k bits are also referred to in the following, whereby these k bits may contain just a context identifier and/or further information, and k may well differ, for example, from one transmitted packet to the next.
  • in the case of the IPv4 header checksum, consistency is restored by simple recalculation (see the sketch below); in the case of other header fields that must meet such consistency requirements, it may also be necessary to include them in the compression at the compressor and to restore them from the context and the compressed fields during decompression. (If none of the systems on the path between compressor and decompressor evaluates the IPv4 header checksum, it is, like the other fields of all packet headers, a candidate for inclusion in the header compression.)
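The recalculation of the IPv4 header checksum at the decompressor is the standard 16-bit one's-complement sum over the header; a self-contained sketch (the example header bytes are arbitrary):

```python
def ipv4_header_checksum(header):
    """Standard IPv4 header checksum: one's-complement sum of all 16-bit words,
    with the checksum field itself counted as zero; the decompressor can simply
    regenerate it instead of having it transmitted."""
    header = bytearray(header)
    header[10:12] = b"\x00\x00"              # checksum field counts as zero
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total > 0xFFFF:                    # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

# Self-check: with the computed checksum inserted, the folded 16-bit
# one's-complement sum over the whole header comes out as 0xFFFF.
hdr = bytearray.fromhex("450000541c4640004001" + "0000" + "c0a80001c0a800c7")
hdr[10:12] = ipv4_header_checksum(hdr).to_bytes(2, "big")
check = sum((hdr[i] << 8) | hdr[i + 1] for i in range(0, len(hdr), 2))
while check > 0xFFFF:
    check = (check & 0xFFFF) + (check >> 16)
assert check == 0xFFFF
```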
  • this description used IPv4 and the IPv4 source address as an example; however, the described aspect can be applied to any headers (or portions thereof), such as Ethernet headers and other layer 1 and 2 headers, IPv6 headers and other layer 3 headers, as well as the headers of the layers above them.
  • compression need not be limited to a single header, but may be protocol-spanning.
  • one or more fields of UDP (such as the checksum) and/or of TCP (such as the Urgent pointer) can likewise be included in the compression.
  • UDP and TCP port numbers are often used to identify communication relationships for other nodes in the network, so it may be necessary to keep them unchanged.
  • if the IP source address is not required, the entire tuple consisting of IP source address and/or transport source and/or destination port number and/or transport protocol identifier can be compressed (see the sketch below).
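A sketch of mapping the identifying tuple to a short context identifier shared by compressor and decompressor. The class name, the 8-bit limit and the example tuple are assumptions chosen for illustration.

```python
class FlowContextTable:
    """Map the identifying tuple (IP source, source port, destination port,
    transport protocol) to a short context identifier shared between
    compressor and decompressor.  8-bit identifiers are an illustrative limit."""

    def __init__(self, max_contexts=256):
        self.max_contexts = max_contexts
        self.tuple_to_id = {}
        self.id_to_tuple = {}

    def context_for(self, src_ip, src_port, dst_port, protocol):
        key = (src_ip, src_port, dst_port, protocol)
        if key not in self.tuple_to_id:
            if len(self.tuple_to_id) >= self.max_contexts:
                raise RuntimeError("no free context identifiers")
            cid = len(self.tuple_to_id)
            self.tuple_to_id[key] = cid
            self.id_to_tuple[cid] = key
        return self.tuple_to_id[key]

    def tuple_for(self, cid):
        return self.id_to_tuple[cid]

# Example: the whole identifying tuple collapses to a single context identifier.
table = FlowContextTable()
cid = table.context_for("192.0.2.17", 49152, 5004, "UDP")
assert table.tuple_for(cid) == ("192.0.2.17", 49152, 5004, "UDP")
```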
  • assuming IPv4 headers and more or less simple IP routers as the nodes that are passed by the compressed packets, there are a variety of other header fields that would be potential candidates for inclusion in the partial header compression.
  • these include the fields Protocol, Identification, Fragment Offset and/or the MF bit (for example, if there is no (further) fragmentation between X-1 and X-2), TOS, TTL (or a part of it, if, for example, intermediate routers only check for and decrement values > 0; in this case one could, for example, set the lower 4 bits to 1 and include the upper 4 bits in the compression, as sketched below), Total Length (for example, if it follows from the underlying protocol headers and the intermediary components only evaluate those) and IP Header Length (for example, if it is ignored by the intermediary components or simply assumed implicitly, for example depending on the IP version number).
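One possible reading of the TTL remark above, as a sketch: the lower 4 bits of the on-the-wire TTL are set to 1 so that intermediate routers can keep decrementing without reaching zero, while the upper 4 bits are freed up for other compressed information and the original upper nibble is restored from the context. The 4/4 split is an assumption, not a fixed choice of the method.

```python
def pack_ttl_field(carried_bits):
    """Build the on-the-wire TTL byte: lower nibble all ones (room for up to 15
    decrements), upper nibble reused for 4 bits of compressed information."""
    return ((carried_bits & 0x0F) << 4) | 0x0F

def unpack_ttl_field(ttl_field, original_upper_bits):
    """Decompressor side: recover the 4 carried bits and rebuild a TTL whose
    upper nibble comes from the compression context; the (decremented) lower
    nibble reflects the hops actually taken between compressor and decompressor."""
    carried_bits = ttl_field >> 4
    restored_ttl = (original_upper_bits << 4) | (ttl_field & 0x0F)
    return carried_bits, restored_ttl

field = pack_ttl_field(0b1010)          # on-the-wire TTL byte: 0xAF
field -= 3                              # three intermediate hops decrement it
carried, ttl = unpack_ttl_field(field, original_upper_bits=0x4)
assert carried == 0b1010 and ttl == 0x4C
```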
  • whether these fields can really be included in the compression follows, as described above, from, among other things, the deployment scenarios and the intermediary components.
  • even IPv4 destination addresses might be included in the compression in whole or in part. This could be advantageous, for example, if the data is transmitted via a broadcast (or broadcast-like) network in which the data reaches all receivers or decompressors anyway (always, or usually, or for certain packets), regardless of the destination address.
  • the use of partial header compression is advantageous, inter alia, if components are used on the transmitting side or receiving side (or in the network itself) which expect an IP header (or something that corresponds entirely or partially to the structure and/or size of an IP header).
  • for example, an unmodified network card and/or an unmodified network card driver could be used to send packets; such components require that the data packets handed to them carry an IP header.
  • a similar example could be packets sent to an IP multicast address. If, for example, it is known which addresses are being used, or perhaps even that only one IP multicast address is being used in a network, and/or the receivers or decompressors can recover the IP multicast addresses using context identifiers, then for IP multicast packets much of the IP(v4) destination address could be included in the compression.
  • if the IP destination address is to be included in a compression independently of IP multicast, it may be useful and/or necessary to support the components used so that they can handle these IP destination addresses filled with other content.
  • if IP packets whose IP destination address is included in the compression are to be transmitted, for example, via an "Ethernet" (for example according to IEEE 802.3), the Address Resolution Protocol can be supported, for example by using a local ARP cache or an ARP proxy: a responding ARP component answers ARP requests for a pseudo IP destination address created by the compression with appropriate ARP responses/values (see the sketch below).
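A sketch of the responding-ARP idea for pseudo IP destination addresses created by the compression: ARP requests for such addresses are answered with a fixed MAC address so that the compressed packets remain deliverable on the Ethernet segment. The MAC address below is an assumption, and frame capture/injection is left out.

```python
import struct

DECOMPRESSOR_MAC = bytes.fromhex("0200deadbeef")   # assumed local MAC to answer with

def build_arp_reply(arp_request_frame):
    """Answer an ARP request for a pseudo IPv4 destination address (one whose
    bits were reused by the compression) with a fixed MAC address.
    Frame layout per standard Ethernet II + ARP; returns None for non-requests."""
    _, eth_src, eth_type = struct.unpack("!6s6sH", arp_request_frame[:14])
    if eth_type != 0x0806:                          # not an ARP frame
        return None
    (htype, ptype, hlen, plen, oper,
     sha, spa, tha, tpa) = struct.unpack("!HHBBH6s4s6s4s", arp_request_frame[14:42])
    if oper != 1:                                   # only answer requests
        return None
    # Reply: we claim the requested (pseudo) IP address tpa with our MAC.
    reply_arp = struct.pack("!HHBBH6s4s6s4s", htype, ptype, hlen, plen, 2,
                            DECOMPRESSOR_MAC, tpa, sha, spa)
    reply_eth = struct.pack("!6s6sH", eth_src, DECOMPRESSOR_MAC, 0x0806)
    return reply_eth + reply_arp
```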
  • the IPv4 IP-to-network interface and IPv6 IP-to-network interface protocol headers provide appropriate space for IEEE 802 LAN protocols.
  • transport protocol headers and / or application protocol headers can be compressed.
  • the bits "extracted" on individual layers by compression of the corresponding header fields can be combined and shared across layers, for example saving space for context identifiers, for example because they no longer need to be assigned, managed and / or transmitted on each layer and / or These can even be derived in whole / in part from fields underneath protocol headers.
  • this method can be used unidirectionally and / or bidirectionally.
  • the two directions of transmission can be operated independently of or dependently on each other.
  • the independent and/or dependent operation may relate to individual data packets, individual communication relationships and/or groups of communication relationships, and this assignment or the independent and/or dependent operation may change over time, once and/or repeatedly.
  • the compressor can ensure, by appropriate selection of algorithms (such as the use of DEFLATE) and/or by additionally transmitted control information, that the decompressor is able to reconstruct, in whole or in substantial part, one or more data packets compressed and forwarded by the compressor.
  • an alternative use arises when, instead of a partial header compression that maintains sub-headers or (sub-)header structures (for example, to support intermediate network components that evaluate these headers), headers of another protocol (for example, one supported by those network components) are inserted.
  • for example, the significantly larger IPv6 headers could be compressed and the compressed information transmitted with inserted IPv4 headers.
  • parts of the contents of the IPv4 header can then optionally be compressed and / or replaced for transmission of other information and / or user data.
  • a combination with a compression of the transmitted user data can also be carried out.
  • both lossless and lossy compression methods are available (such as reducing image resolution, image quality, or filtering out optional additional information, etc.).
  • protocol enhancement techniques exist for a variety of protocols, objectives and/or networks. Very frequently used are, for example, protocol enhancement methods for TCP and/or HTTP. These protocols are either replaced by other protocols, for example on certain transmission sections, and/or protocol parameters are modified in the terminals and/or in the exchanged data packets. There are many potential objectives for such protocol enhancement methods. For example, a TCP protocol enhancement may have the task of enabling a high transmission bandwidth even with high transmission delays and/or high packet loss rates and/or of keeping the protocol overhead, for example caused by control packets, low. HTTP protocol enhancements often have similar goals.
  • the page load times that arise when using a standard Internet browser should also be reduced for networks with high transmission delays and / or high packet loss rates.
  • Ways to do this include, for example, intermediary proxies and / or proactively sending objects contained on web pages or even behind links.
  • HTTP is also an example of how it can be very useful to combine the various types of methods mentioned here (HTTP being only representative of many protocols to which this applies).
  • HTTP uses TCP and IP, so a relatively large protocol hierarchy results, to which typical header compression and partial header compression techniques can be applied; the HTTP headers themselves are often largely text-based/-coded.
  • typical (even partial) header compression methods could be used.
  • the user data can be compressed with a conventional compression method such as DEFLATE (see the sketch below).
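A sketch of compressing the largely text-coded HTTP headers (or user data) with DEFLATE via zlib, here with a shared dictionary of recurring header strings; the dictionary contents are an assumption and would have to be agreed between compressor and decompressor, statically or via control packets.

```python
import zlib

# Illustrative shared dictionary of strings that recur in HTTP headers.
HTTP_DICT = b"GET POST HTTP/1.1\r\nHost: Accept: User-Agent: Content-Length: \r\n\r\n"

def compress_http_headers(raw_headers):
    """Compressor side: DEFLATE with a preset dictionary of common header text."""
    c = zlib.compressobj(level=9, zdict=HTTP_DICT)
    return c.compress(raw_headers) + c.flush()

def decompress_http_headers(blob):
    """Decompressor side: the same dictionary must be available here as well."""
    d = zlib.decompressobj(zdict=HTTP_DICT)
    return d.decompress(blob) + d.flush()

request = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nAccept: */*\r\n\r\n"
packed = compress_http_headers(request)
assert decompress_http_headers(packed) == request
```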
  • HTTP is a protocol for which enhancement procedures are recommended in many networks and for which encryption is often useful.
  • each of these mentioned types of methods can be deployed, for example for HTTP, in all the arrangements shown in FIGS. 1 and 2.
  • the individual types of process could be implemented independently, which among other things increases the flexibility and / or interchangeability.
  • implementation in combined system components and / or devices potentially reduces the overall complexity and / or configuration effort.
  • the individual types of methods could be implemented and used in a simplified manner in a completely / partially integrated realization and / or in a realization in which at least individual control information is exchanged between the components of the method types.
  • the sharing of status information and / or context identifiers across methods can, in some cases, reduce and / or more efficiently use the amount of control information to be exchanged over the network.
  • the time span for establishing new connections and / or exchanging data can also be reduced.
  • such a mechanism can be realized distributed between two or more compressors and/or one-sidedly in a single compressor.
  • the detection mechanism may be passive (for example, only observe packet flows) or active (for example sending out packets for determining compression possibilities).
  • the mechanism (whether implemented in one or more compressors) can obtain the necessary information from the network (such as routing, middle-box signaling protocols such as RSVP, NSIS, SOCKS, MIDCOM, etc. and/or other control protocols) and/or from a network management system, and/or determine it by interaction of two or more compressors. Hints can also be given by initial and/or continuous configuration.
  • such a mechanism dynamically recognizes, partially or completely on its own, which headers or header fields can be included in a particular compression (see the sketch below). This determination of the compression possibilities can take place in advance of the start of the compression, before/during the establishment of one or more communication relationships and/or continuously during the active compression.
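A sketch of such a determination: probe packets with known test patterns in candidate fields are sent, the decompressor reports back what arrived, and only fields that survive unchanged on all observed probes are used for compression. The field names, the reporting channel and the 20-byte header layout are assumptions.

```python
# Candidate IPv4 header fields (name -> byte slice in a 20-byte header) that the
# compressor would like to include in the compression.  Illustrative selection.
CANDIDATE_FIELDS = {
    "tos": slice(1, 2),
    "identification": slice(4, 6),
    "ttl": slice(8, 9),
    "checksum": slice(10, 12),
    "src_addr": slice(12, 16),
}

def probe_report(sent_probe, received_probe):
    """Compare a probe packet as sent (candidate fields carrying known test
    patterns) with the copy the decompressor reports back; a field that arrives
    byte-for-byte unchanged was neither needed nor rewritten by intermediate
    nodes and is therefore a candidate for compression."""
    return {name: sent_probe[sl] == received_probe[sl]
            for name, sl in CANDIDATE_FIELDS.items()}

def merge_reports(reports):
    """Only fields that survive on *all* observed probes are actually used,
    which keeps the mechanism robust against routing changes / load sharing."""
    usable = set(CANDIDATE_FIELDS)
    for report in reports:
        usable &= {name for name, ok in report.items() if ok}
    return usable

sent = bytes(range(20))
received = bytearray(sent)
received[8] = 1                         # TTL rewritten on the path
usable = merge_reports([probe_report(sent, bytes(received))])
assert "ttl" not in usable and "identification" in usable
```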
  • the mechanism can also automatically detect errors during operation (for example, from the transmission or non-transmission of the compressed data packets themselves, their loss rates and/or their other transmission characteristics) and draw conclusions about changes in the transmission path (new routing, addition of one or more further nodes, load sharing across multiple routes, etc.). Based on this information, the mechanism may then adjust the header compression accordingly.
  • Such a mechanism can be active simultaneously in various forms and can also be operated in parallel in addition and / or offset in time to a static configuration.
  • "different forms" here means different dynamic determinations and/or static configurations and/or negotiations.
  • the determination of which mechanisms should be applied to which communication relationship(s) or parts thereof can in turn be made statically or dynamically and/or depending on the properties of the data packets and/or the protocols used and/or the network load (current, past, expected in the future) and/or the observed transmission characteristics (error rate, round-trip time, etc.).
  • in addition to and/or instead of a configuration specifying which fields/subfields may be included in the compression, provision may be made for actively exchanging, for example, test packets according to a previously (statically or dynamically) agreed scheme.
  • the basis for decision making can be the technical feasibility (for example, encrypted packets can be compressed less well than unencrypted packets) and / or the efficiency of the compression and / or the effort (for example, computing power, memory, etc.).
  • the compression may be made contingent on certain involved endpoints and/or applications and/or the load in the transmission networks and/or on individual transmission sections or groups of transmission sections and/or the available memory and/or the CPU/processor load of the components involved. Depending on individual criteria or combinations of such criteria, the compression may be fully/partially activated, limited and/or fully/partially disabled (see the sketch below). For both static and dynamic compression decisions, the decision can be made unilaterally by individual components or by the components of one transmission side, or jointly by several involved components or even by "adjacent" system components such as a network management system; it is also possible to switch from uncompressed to compressed operation (and vice versa) while the connections or parts of the connections persist.
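A sketch of such a decision, taken per communication relationship; the criteria names and thresholds are purely illustrative placeholders and could equally come from configuration or a network management system.

```python
def compression_mode(cpu_load, free_memory_mb, link_utilization, endpoint_allowed):
    """Decide per communication relationship whether compression is fully
    active, limited, or disabled.  Thresholds are illustrative placeholders
    that could change over time via configuration or network management."""
    if not endpoint_allowed:
        return "off"
    if cpu_load > 0.9 or free_memory_mb < 16:
        return "off"                      # protect the compressing component itself
    if link_utilization < 0.3 and cpu_load > 0.6:
        return "limited"                  # little to gain, noticeable cost
    return "full"

# Example: a lightly loaded link with a busy CPU only gets limited compression.
assert compression_mode(0.7, 512, 0.2, True) == "limited"
```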
  • an application or endpoint could already generate (header) compressed data packets.
  • these could be compressed RTP headers, whereas the underlying UDP and IP headers could have remained uncompressed according to [9].
  • internal compression can be done as supplemental and / or replacement compression.
  • the internal compression may be supplementary in that further headers and/or header fields, which can perhaps be saved during transmission over the relevant transmission network (N-X2 in the case of Fig. 1e), are compressed in a nested compression step.
  • the internal compression may be a replacement if decompression is performed before recompression, for example because multiple headers or header fields can then be compressed together more efficiently.
  • Supplementary and substitute compression may be active simultaneously and / or at different times.
  • Data packets not recognized by the external compression and / or uncompressed data packets can be detected by the internal compression and vice versa.
  • the detection of compressed data packets and the recognition of the header information to be compressed can in turn be statically configured and / or dynamically determined and / or obtained through interactions with the components involved in the compression or external.
  • the individual compressions can, as described above, relate to individual packets and/or packet sequences and/or all packets of one communication relationship and/or of a group of communication relationships. They can also be applied differently over time.
  • "inner" compression may use the context specifiers of "outer” compression, for example, to further reduce the volume of data and / or reduce the complexity and / or control of internal compression.
  • for example, the internal compression can detect CRTP headers in the incoming data packets and then partially or completely compress the UDP and IP headers retained by the outer compression (e.g., by having the internal compression use the context identifier / flow ID of the outer compression and include all or part of the information to be compressed from the UDP and IP headers in the context referenced by the context identifiers). Such nesting may be continued recursively or sequentially, similar to FIG. 1d).
  • internal compression may also compress fewer headers and / or header fields (for example, if intermediate components are used in an internal transmission network that allow for compression of particular headers and / or header fields).
  • the compression of protocol headers need not be limited to a distance between two compressors, but may include more than two compressors.
  • two types of communication are possible, which can (but do not have to) be determined by the underlying network: a) unidirectional, from exactly one node S to many nodes R1, ..., Rn (n > 1), without the nodes R1, ..., Rn having the possibility to also send packets to the node S; b) bidirectional, so that the transmission of packets from the nodes R1, ..., Rn to the node S is possible.
  • the packets from the nodes R1, ..., Rn to the node S can be control packets only and/or also compressed data packets.
  • each node Ri that also sends (compressed) data packets then equally acts as a sending node S.
  • the present invention is also suitable for the use of header compression in point-to-multipoint communication (such as exists in a satellite or terrestrial broadcast network).
  • the same compression techniques can be used.
  • the decompressors in (or "behind") the nodes R1, ..., Rn may have different capabilities, and the transmission paths to the different decompressors may have different characteristics; it should nevertheless be ensured that the majority and/or all decompressors can decompress the compressed data packets. This can be achieved by the compressor selecting methods and headers/header fields for compression that are appropriate for all intended decompressors. And/or the compressor may send to individual decompressors and/or groups of decompressors differently compressed data packets that are specific to the particular transmission path and/or decompressor.
  • and/or the compressor may send additional information (in existing and/or other data and/or control packets) to individual decompressors and/or groups of decompressors (and/or to nodes of the transmission networks) to allow successful forwarding and decompression of the data packets.
  • since data packets can be lost in IP networks (e.g., due to bit errors or overload), it is possible that one, several or all decompressors lack information for the correct decompression of a data packet. In such a case, it is envisaged that a decompressor will notify the compressor (provided bidirectional communication is possible, directly via the same or indirectly via partially or completely different transmission networks) that information is missing. The compressor may decide if and when to transmit further information in existing and/or additional data and/or control packets to reconstruct the missing information (context).
  • This decision may depend on the communication relationship (type of data, duration, etc.) and / or on the decompressor (s) in question and / or the number of decompressors that need this information and / or other configuration information and / or specifications and / or the general and / or current transmission characteristics of the network.
  • a compressor may also transmit redundant information at regular or irregular intervals for the eventual reconstruction of the context, for example by using FEC; the bit rate for the redundant information may vary over time depending on the network, the assumed or real network load, the assumed or observed bit and/or packet error rate, or may be set by configuration or by signaling from a network management system (see the sketch below).
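One simple form of such redundant information is an XOR parity packet over the last few context updates, from which any single lost update of the group can be rebuilt; the group size, the packet framing and the padding handling below are assumptions for illustration.

```python
def xor_parity(packets):
    """Build a parity packet as the byte-wise XOR of the given context-update
    packets (padded to equal length).  Any one missing packet of the group can
    be reconstructed from the parity plus the remaining packets.  A real scheme
    would also carry explicit lengths; the padding here is demo-only."""
    length = max(len(p) for p in packets)
    parity = bytearray(length)
    for p in packets:
        for i, b in enumerate(p.ljust(length, b"\x00")):
            parity[i] ^= b
    return bytes(parity)

# Example: lose one of three context updates and recover it from the parity.
updates = [b"ctx:42=192.0.2.17", b"ctx:43=198.51.100.5", b"ctx:44=203.0.113.9"]
parity = xor_parity(updates)
recovered = xor_parity([updates[0], updates[2], parity])
assert recovered.rstrip(b"\x00") == updates[1]
```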
  • one or more or all of the decompressors may be able to send feedback on missing information and/or context and/or their local state of knowledge to the compressor. In such a case, it may be advantageous that not all possible decompressors do so, for example to avoid overloading the compressor or the return transmission path with too much information.
  • one or more decompressors may be selected as designated decompressors of a group or of all decompressors; only these designated decompressors provide feedback on behalf of the respective group or all recipients. Not all decompressors must be represented by designated decompressors.
  • the selection of the designated decompressors may be static and/or dynamically negotiated (for example, the compressor may determine the decompressors) and/or determined based on the transfer characteristics to the decompressors and/or by the functional characteristics of the decompressors and/or by the characteristics relating to the compressible headers and/or header fields on the respective transmission path; in all these cases, a random component (true random numbers, pseudo-random numbers, cryptographically calculated functions) can also be used to further constrain the actual selection (see the sketch below).
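A sketch of a selection with a cryptographically calculated random component: each decompressor hashes its identifier together with a shared round token, and only those whose hash falls below a target fraction act as designated decompressors for that round. SHA-256, the token and the fraction are assumed parameters.

```python
import hashlib

def designated_decompressors(decompressor_ids, round_token, fraction=0.1):
    """Pick a pseudo-random subset of decompressors to provide feedback for this
    round: hash each identifier together with a shared round token and keep
    those whose 32-bit hash prefix falls below the target fraction."""
    threshold = int(fraction * 2**32)
    chosen = []
    for dec_id in decompressor_ids:
        digest = hashlib.sha256(round_token + dec_id.encode()).digest()
        if int.from_bytes(digest[:4], "big") < threshold:
            chosen.append(dec_id)
    return chosen

# Example: roughly 10 % of 1000 receivers end up as designated decompressors.
ids = [f"rx-{i}" for i in range(1000)]
print(len(designated_decompressors(ids, b"round-7")))
```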
  • Various decompressors can also send specific parts of the feedback information useful for the compressor based on one or more of these criteria.
  • the selection can be made permanently and / or up to an explicit reconfiguration and / or vary in time.
  • the selection may apply to all packets transmitted by the compressor and / or to the packets of individual communication relationships and / or groups of communication relationships and / or to packets determined by their type and / or other characteristics.
  • Such a method can be used, for example, via a terrestrial radio network (such as DVB-T, DVB-H, WLAN, WiMAX, mobile radio such as GSM, UMTS, HS (D) PA, LTE, UWB, OFDM, etc.). It can also be used over any satellite networks, radio networks in space, etc. It can also be used in wired broadcast networks (such as cable networks, DSL, fiber-to-the-home, Ethernet, etc.). These networks can be used individually or in any combination for broadcasting. As previously described, all compression may be accompanied by complete and / or partial encryption of the information.
  • the context identifiers and context information used for identifying (individual) communication relationships can also be produced cryptographically, so that, for example, an intermediary unauthorized recipient does not even discover which packets are to be assigned, for example, to a communication relationship.
  • Cryptographic information can also be used to authenticate / authorize or prioritize feedback information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a technology for optimizing a data transmission between communication endpoints in a network comprising communication endpoints.
PCT/DE2010/000583 2009-05-25 2010-05-25 Procédé d'optimisation d'une transmission de données par paquets et produit-programme informatique WO2010136023A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
DE102009022499.8 2009-05-25
DE102009022499 2009-05-25
DE102009034357.1 2009-07-17
DE102009034357 2009-07-17

Publications (2)

Publication Number Publication Date
WO2010136023A1 true WO2010136023A1 (fr) 2010-12-02
WO2010136023A8 WO2010136023A8 (fr) 2011-02-17

Family

ID=42829370

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/DE2010/000583 WO2010136023A1 (fr) 2009-05-25 2010-05-25 Procédé d'optimisation d'une transmission de données par paquets et produit-programme informatique

Country Status (1)

Country Link
WO (1) WO2010136023A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1559038A2 (fr) 2002-11-06 2005-08-03 Tellique Kommunikationstechnik GmbH Procedes de pre-transmission de quantites de donnees structurees entre un dispositif client et un dispositif serveur
EP1533982A2 (fr) * 2003-11-19 2005-05-25 The Directv Group, Inc. Système et procédé pour la pré-extraction de contenu dans une architecture de proxy au moyen des connexions sécurisées transparentes
EP1718034A1 (fr) * 2005-04-25 2006-11-02 Thomson Multimedia Broadband Belgium Procédé et passerelle pour la gestion de requêtes d'adresse
US20080225728A1 (en) * 2007-03-12 2008-09-18 Robert Plamondon Systems and methods for providing virtual fair queueing of network traffic

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
BORMANN ET AL.: "IETF RFC 3095", ROBUST HEADER COMPRESSION (ROHC): FRAMEWORK FOR FOUR PROFILES: RTP, UDP, ESP, AND UNCOMPRESSED., July 2001 (2001-07-01)
CASNER ET AL.: "IETF RFC 2508", COMPRESSING IP/UDP/RTP HEADERS FOR LOW-SPEED SERIAL LINKS, February 1999 (1999-02-01)
E. RESCORLA: "IETF RFC 2818", HTTP OVER TLS, May 2000 (2000-05-01)
KOREN ET AL.: "IETF RFC 3544", IP HEADER COMPRESSION OVER PPP, July 2003 (2003-07-01)
KOREN ET AL.: "IETF RFC 3545", ENHANCED COMPRESSED RTP (CRTP) FOR LINKS WITH HIGH DELAY, PACKET LOSS AND REORDERING, July 2003 (2003-07-01)
R. FIELDING; J. GETTYS: "IETF RFC 2616", HYPERTEXT TRANSFER PROTOCOL - HTTP/1.1, June 1999 (1999-06-01)
RODRIGUEZ P ET AL: "Session Level Techniques for Improving Web Browsing Performance on Wireless Links", PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON WORLD WIDE WEB, 17 May 2004 (2004-05-17) - 22 May 2004 (2004-05-22), XP040180034 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103037466A (zh) * 2012-12-17 2013-04-10 南京理工大学连云港研究院 一种轻型机步野战旅场景下的dtn路由策略
DE102016223533A1 (de) 2016-11-28 2018-05-30 Audi Ag Verfahren zum Übertragen von Nachrichten zwischen Steuergeräten eines Kraftfahrzeugs sowie Switchvorrichtung und Kraftfahrzeug
WO2018095604A1 (fr) 2016-11-28 2018-05-31 Audi Ag Procédé de transmission de messages entre des appareils de commande d'un véhicule à moteur ainsi que dispositif de commutation et véhicule à moteur
US10771282B2 (en) 2016-11-28 2020-09-08 Audi Ag Method for transmitting messages between control units of a motor vehicle, and switch apparatus, and motor vehicle
CN110213330A (zh) * 2019-04-28 2019-09-06 北京奇艺世纪科技有限公司 预推送系统、方法、装置、电子设备和计算机可读介质
CN110213330B (zh) * 2019-04-28 2023-02-03 北京奇艺世纪科技有限公司 预推送系统、方法、装置、电子设备和计算机可读介质
CN113891310A (zh) * 2020-07-03 2022-01-04 华为技术有限公司 协作通信方法、用户设备及系统

Also Published As

Publication number Publication date
WO2010136023A8 (fr) 2011-02-17

Similar Documents

Publication Publication Date Title
US10021034B2 (en) Application aware multihoming for data traffic acceleration in data communications networks
US10158742B2 (en) Multi-stage acceleration system and method
US7894364B2 (en) Method for the transmission of data packets in a tunnel, corresponding computer program product, storage means and tunnel end-point
Alani Guide to OSI and TCP/IP models
US8335858B2 (en) Transparent auto-discovery of network devices logically located between a client and server
Fairhurst et al. Services provided by IETF transport protocols and congestion control mechanisms
EP3075110B1 (fr) Contrôle de dimension de fenêtre de protocole de contrôle de transmission
EP2774340B1 (fr) Compression de contenu inaperçu dans un réseau de télécommunication
Lederer et al. An experimental analysis of dynamic adaptive streaming over http in content centric networks
US20070064618A1 (en) Method of forming protocol data units, protocol data units and protocol data unit generation apparatus
DE202021103381U1 (de) Computerlesbares Medium und Systeme zur Implementierung eines regional zusammenhängenden Proxy-Dienstes
DE102015004668A1 (de) Aufgeteilte netzwerkadressenübersetzung
EP2385682B1 (fr) Procédé d'optimisation d'une transmission de données orientée paquets et produit de programme informatique
US7543072B1 (en) Method and system capable of performing a data stream over multiple TCP connections or concurrent interleave of multiple data streams over multiple TCP connections
WO2010136023A1 (fr) Procédé d'optimisation d'une transmission de données par paquets et produit-programme informatique
JP2005520374A (ja) Tcp/ipに対する変更
Al-Qudah et al. Anycast-aware transport for content delivery networks
EP3136684B1 (fr) Transmission multidiffusion au moyen d'un réseau programmable
JP2009015392A (ja) 通信装置および通信方法
Shamieh et al. Dynamic cross-layer signaling exchange for real-time and on-demand multimedia streams
US20140334502A1 (en) System and method for relaying data based on a modified reliable transport protocol
JP4292884B2 (ja) リアルタイムデータ通信システム、リアルタイムデータ通信装置およびリアルタイムデータ通信方法
JP2002077263A (ja) 送受信方法
Lai et al. DCCP partial reliability extension with sequence number compensation
EP2802117B1 (fr) Système et procédé permettant de relayer des données basées sur un protocole de transport fiable modifié

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10735179

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 1120100021155

Country of ref document: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10735179

Country of ref document: EP

Kind code of ref document: A1