WO2015158389A1 - Methods of efficient traffic compression on IP networks

Methods of efficient traffic compression on IP networks

Info

Publication number
WO2015158389A1
Authority
WO
WIPO (PCT)
Prior art keywords
header
network node
fingerprint
uncompressed
field set
Prior art date
Application number
PCT/EP2014/057912
Other languages
English (en)
Inventor
Paolo Spallaccini
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget L M Ericsson (Publ) filed Critical Telefonaktiebolaget L M Ericsson (Publ)
Priority to PCT/EP2014/057912 priority Critical patent/WO2015158389A1/fr
Publication of WO2015158389A1 publication Critical patent/WO2015158389A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22Parsing or analysis of headers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/04Protocols for data compression, e.g. ROHC

Definitions

  • Embodiments herein relate generally to a first network node, a method in the first network node, a second network node, a method in the second network node. More particularly the embodiments herein relate to handling traffic flows in a communications system.
  • Ethernet Header Compression Header compression is a mechanism that compresses the header in a traffic flow before the traffic flow is transmitted. Header compression reduces network overhead and speeds up the transmission of traffic flows and also reduces the amount of bandwidth consumed when the traffic flows are transmitted. Ethernet framed traffic may be compressed by processing the information comprised in the Ethernet frame header. Such compression methods are meant to be an enabler for the improvement of the payload throughput over the hop - and for the bandwidth efficiency as a consequence of that. In the following, such methods will generally be referred to as Ethernet Header Compression (EHC).
  • EHC Ethernet Header Compression
  • the achieved compression gain, given a certain Frame Length (FL), may be measured as the ratio:
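  • The ratio itself is not spelled out in this text. A plausible formulation, consistent with the worked example given further below (an 88-byte header compressed to 4 bytes within a 96-byte frame, yielding 87.5%), is gain = (Hu - Hc) / FL, where Hu is the uncompressed header size, Hc the compressed header size and FL the frame length; with Hu = 88, Hc = 4 and FL = 96 this gives 84/96 = 87.5%.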
  • compression may be described as the running of a data set through an algorithm that reduces the space required to store the data set or the bandwidth required to transmit the data set.
  • Decompression may be described as the act of reconstructing a compressed header.
  • the compression gain figures thus obtained have to be considered high in comparison to results obtainable by means of other lossless source coding techniques.
  • the high compression gain figure is obtained at the expense of a high amount of memory needed, as it is reached due to the conceptual choice of performing compression by maintaining two separate learning codebooks storing the whole uncompressed information at both the endpoints managing the compressed traffic.
  • the term learning codebook indicates that it is a codebook structure that may change dynamically, i.e. by "learning" its content from an evolving context. For example, traffic flows may cause the codebook to be changed at runtime by "learning" about new headers.
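  • As an illustration only (the class and method names below are assumptions, not structures defined in this text), such a learning codebook may be sketched as a table that creates an entry the first time a header is observed and reuses it afterwards:

```python
class LearningCodebook:
    """Minimal sketch of a 'learning' codebook: entries are created at
    runtime the first time a header is observed ('learning')."""

    def __init__(self):
        self._by_header = {}   # uncompressed header bytes -> entry reference
        self._by_ref = {}      # entry reference -> uncompressed header bytes
        self._next_ref = 0

    def learn(self, header: bytes) -> int:
        """Return the reference for this header, creating a new entry if unseen."""
        ref = self._by_header.get(header)
        if ref is None:
            ref = self._next_ref
            self._next_ref += 1
            self._by_header[header] = ref
            self._by_ref[ref] = header
        return ref

    def lookup(self, ref: int) -> bytes:
        """Return the uncompressed header stored for a reference (decompressor side)."""
        return self._by_ref[ref]
```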
  • the encoding operation is subsequently performed by substituting at one endpoint (for in-band exchange) the uncompressed information with the references, or the pointers, to the codebook location storing such information for the sake of reconstruction at the other endpoint. Said reconstruction is possible thanks to the alignment in the codebooks.
  • a communications system 100 comprises a first network node 101 which is arranged to communicate with a second network node 105 over a communications link 110 by a point to point hop.
  • a compression mechanism 113 comprised in the first network node 101 receives ingress data traffic to be further transmitted to the second network node 105.
  • the ingress data traffic may come from an input traffic interface comprised in the first network node 101, for example a direct or buffered connection to a port of an Ethernet switch (so that data traffic is carried at Layer 2, with reference to the OSI stack).
  • the compression mechanism 113 determines whether or not to compress the ingress data traffic.
  • the compressor codebook 115 is a database.
  • the first network node 101 comprises compression look-up tables 116 which may be described as a data structure for carrying all data needed by the first network node 101.
  • the data may be ordered in entries, each entry storing data for a traffic flow (including the header) at a time.
  • the compressor codebook 115 is the same as the compression look-up tables 116.
  • the data traffic (compressed or not) is transmitted over the communications link 110 by a first radio circuitry 118 comprised in the first network node 101 to a second radio circuitry 122 comprised in the second network node 105.
  • the second network node 105 comprises decompression look-up tables 123 which may be described as a dual data structure with respect to the compression look-up table 116 in the first network node 101.
  • a decompression mechanism 125 is able to determine if the traffic is compressed or not. In case the traffic is uncompressed, the decompression mechanism 125 just forwards the traffic being egress data traffic without any change. In case the traffic is compressed, the information comprised in the compressed header is sufficient to the decompression mechanism 125 to reconstruct the original data traffic by substituting the compressed header with the original header resulting from processing the contents of a decompressor codebook 130 retrieved using the compressed header as a unique key.
  • the decompressor codebook 130 is a database.
  • the decompressor codebook 130 is the same as the decompression look-up tables 123.
  • the decompressor codebook 130 needs to be in-band aligned with the compressor codebook 115, for those entries used in communication between the first network node 101 and the second network node 105.
  • both the first network node 101 and the second network node 105 comprise a respective codebook 115, 130.
  • An extension of the compression capability of the framed Ethernet traffic to headers belonging to upper layer protocols (such as IP and transport layer) with respect to Ethernet Layer 2 (L2), and usually representing the Ethernet L2 Frame payload, may be seen as a natural way of obtaining a further increase in the payload throughput over the hop.
  • such compression extension capability will be referred to as Multi- Layer Header Compression (MLHC).
  • MLHC Multi- Layer Header Compression
  • a first estimate of the amount of memory that would be reserved for those structures necessary to the implementation of the compression method may be calculated in dependence on the size of a "single entry" of uncompressed information.
  • Such memory occupancy estimate is based on the assumption that, alongside two big structures, i.e. the compressor codebook 115 and the decompressor codebook 130, to be reserved in portions of associative memory, some additional ones, like look-up tables, should also be allocated.
  • the compression gain figure of 87.5% expresses the degree of reduction in size of the data that need to be transferred to the second node. For this particular case, (96-88+4) data bytes need to be transferred to the second node, instead of 96.
  • An objective of embodiments herein is therefore to obviate at least one of the above disadvantages and to provide improved handling of data traffic in a communications system.
  • the objective is achieved by a method in a first network node for handling traffic flows in a communications system.
  • the first network node obtains a fingerprint of an uncompressed header comprised in an incoming traffic flow.
  • the fingerprint is a cryptographic hash value identifying the uncompressed header;
  • the first network node compresses the uncompressed header by replacing the uncompressed header with the tag.
  • the first network node transmits the traffic flow comprising the compressed header and the payload data to a second network node.
  • the objective is achieved by a method in a second network node for handling data traffic in a communications system.
  • the second network node comprises a codebook comprising information indicating associations between uncompressed headers and fingerprints identifying the uncompressed header and tags associated with the fingerprint.
  • the second network node receives a traffic flow comprising a compressed header and payload data from a first network node.
  • the compressed header comprises the tag.
  • the second network node decompresses the received compressed header by replacing the tag with the uncompressed header from the codebook.
  • the objective is achieved by a first network node for handling traffic flows in a communications system.
  • the first network node being adapted to obtain a fingerprint of an uncompressed header comprised in an incoming traffic flow.
  • the fingerprint is a cryptographic hash value identifying the uncompressed header;
  • the first network node is further adapted to compress the uncompressed header by replacing the uncompressed header with the tag.
  • the first network node is adapted to transmit the traffic flow comprising the compressed header and the payload data to a second network node.
  • the objective is achieved by a second network node for handling data traffic in a communications system.
  • the second network node comprises a codebook comprising information indicating associations between uncompressed headers and fingerprints identifying the uncompressed header and tags associated with the fingerprint.
  • the second network node is adapted to receive a traffic flow comprising a compressed header and payload data from a first network node.
  • the compressed header comprises the tag.
  • the second network node is adapted to decompress the received compressed header by replacing the tag with the uncompressed header from the codebook.
  • An advantage of the embodiments herein is that the amount of memory is drastically reduced compared to solutions which require two symmetric codebooks, i.e. one at each end.
  • Fig. 1 is a schematic block diagram illustrating embodiments of a communications system.
  • Fig. 2 is a graph illustrating compression gain.
  • Fig. 3 is a schematic block diagram illustrating embodiments of a communications system.
  • the embodiments herein allow an efficient implementation of Header Compression extended to Internet and Transport layers (i.e. MLHC) based on the use of learning codebooks as the main encoding algorithm.
  • the embodiments herein comprise three main steps.
  • Mentioned fingerprints may be used for flow identification in compression. Mentioned fingerprints may be calculated according to suitable algorithms, with a requirement of keeping the probability of issuing false positives in identification suitably low.
  • Errors induced by residual "false positive in identification" occurrences may be minimized by means of a feedback mechanism in which the decompressor notifies the compressor about information reconstruction errors, in order to let the compressor immediately drop compression for those flows.
  • the mentioned flows may eventually be blacklisted and excluded from further compression. In the following, such topics will be thoroughly discussed.
  • this step aims at differentiating those fields in the header showing the lowest degree of variance in their value - in some statistic sense - to the extent that it becomes possible to determine a finite set of values that they may assume.
  • the traffic flow header fields in the traffic header are recognized and classified.
  • a huge memory reduction allows an all-internal FPGA memory based solution: the latency in accessing the codebook during the decompression is kept as low as possible, thus crucially contributing to minimizing the end-to-end round trip delay of data packets in transported traffic and facilitating/permitting implementation in environments requiring low round trip time latency budgets.
  • FIG. 3 depicts a communications system 300 in which embodiments herein may be implemented.
  • the communications network 300 may in some embodiments apply to one or more access technologies such as for example Long Term Evolution (LTE), LTE Advanced, Wideband Code Division Multiple Access (WCDMA), Global System for Mobile Communications (GSM), or any other Third Generation Partnership Project (3GPP) radio access technology, or other access technologies or radio access technologies such as e.g. Wireless Local Area Network (WLAN).
  • LTE Long Term Evolution
  • WCDMA Wideband Code Division Multiple Access
  • GSM Global System for Mobile Communications
  • 3GPP Third Generation Partnership Project
  • the embodiments herein may apply to any environment in which endpoints are exchanging source data demanding errorless transmission (for instance packet layered data carrying overhead in headers) that has not been previously compressed.
  • the communications system 300 comprises a first network node 301 which is arranged to communicate with a second network node 305 over a communications link 310 by a point to point hop.
  • the first network node 301 may also be referred to as a compressor and the second network node 305 may be referred to as a decompressor, or the first network node 301 may be referred to as comprising a compressor and the second network node 305 may be referred to as comprising a decompressor.
  • Each of the first and second network nodes 301, 305 may be nodes which exchange Ethernet layered traffic with each other, such as e.g. an evolved NodeB (eNB) or other types of mobile network nodes.
  • eNB evolved NodeB
  • the second network node 305 may be a node located in close proximity to the core network with respect to the first network node 301.
  • a compression mechanism 313 comprised in the first network node 301 receives ingress data traffic to be further transmitted to the second network node 305.
  • the first network node 301 further comprises a header-fields classifier 314, a cryptographic hash calculator 315 and a compressor flow fingerprints table 316.
  • the data traffic (compressed or not) is transmitted over the communications link 310 by a first radio circuitry 318 comprised in the first network node 301 to a second radio circuitry 322 comprised in the second network node 305. At the second network node 305, a decompression mechanism 325 is able to determine whether the traffic is compressed or not.
  • the second network node 305 further comprises decompression look-up tables 328 and a header-fields de-classifier 329.
  • the decompression look-up tables 328 in figure 3 comprise the uncompressed header in transcoded form.
  • the decompression mechanism 325 just forwards the traffic being egress data traffic without any change.
  • the information comprised in the compressed header is sufficient to the decompression mechanism 325 to reconstruct the original data traffic by substituting the compressed header with the original header resulting from processing the contents of a decompressor codebook 330 retrieved using the compressed header as a unique key.
  • the decompressor codebook 330 is a database. To reconstruct the original packet without errors, the decompressor codebook 330 needs to be in-band aligned with the compressor flow fingerprints table 316, for those entries used in communication between the first network node 301 and the second network node 305. As mentioned earlier, the embodiments herein provide feedback for blacklisting flows with errors in the reconstruction.
  • the use of tables, e.g. the compressor flow fingerprints table 316, the decompression look-up tables 328 etc., is only an example.
  • Other types of structures may also be used, such as lists, a tree structure etc.
  • the two data structures 316, 328 may be handled together and managed as linked lists, for the convenience of the activity of searching the entries within the data structure.
  • the communication link 310 in the communications system 300 may be of any suitable kind including either a wired or wireless link.
  • the link 310 may use any suitable protocol depending on type and level of layer (e.g. as indicated by the Open Systems Interconnection (OSI) model) as understood by the person skilled in the art.
  • OSI Open Systems Interconnection
  • the communications system 300 according to the embodiments herein and as seen in figure 3 comprises three additional or modified blocks:
  • The cryptographic hash, or traffic flow fingerprints calculator 315.
  • the header fields de-classifier and decoder 329 (dual operations with respect to point 1).
  • the classifier database 115 in figure 1 has been reduced to a much smaller and more manageable memory table, referred to as a compressor flows fingerprint table 316 in figure 3.
  • the decompressor database 330 maintains its basic structure and purpose, but it experiences a significant reduction in size.
  • Figure 4a comprises steps 401-410 and figure 4b comprises steps 411-416, i.e. figure 4b is a continuation of figure 4a.
  • the method comprises the following steps, which steps may as well be carried out in another suitable order than described below.
  • the first network node 301 may detect an uncompressed header in an incoming traffic flow (314).
  • This step 401 is part of an identification of the traffic flow.
  • the identification also comprises recognizing and classifying of header fields (i.e. a different classification compared to steps 406 and 407 described below).
  • header fields i.e. a different classification compared to steps 406 and 407 described below.
  • the uncompressed header may be an Ethernet header, an IP header (e.g. IPv4, IPv6, IPsec etc), a transport layer header (e.g. TCP or UDP), a MPLS layer header etc.
  • IP header e.g. IPv4, IPv6, IPsec etc
  • transport layer header e.g. TCP or UDP
  • MPLS layer header
  • the first network node 301 may obtain a fingerprint of the uncompressed header detected in step 401.
  • the fingerprint may be obtained by calculating the fingerprint by using a suitable cryptographic hash algorithm. For example, the first network node 301 may obtain the fingerprint 123 for the uncompressed header xxxxxxxxx, the first network node 301 may obtain the fingerprint 456 for the uncompressed header yyyyyyyyy and the first network node 301 may obtain the fingerprint 789 for the uncompressed header zzzzzzzzz.
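  • A minimal sketch of such a fingerprint calculation, assuming (purely as an example) SHA-256 truncated to 8 bytes; neither the hash algorithm nor the truncation is mandated by this text, although an 8-byte fingerprint is exemplified further below:

```python
import hashlib

def fingerprint(uncompressed_header: bytes, size_bytes: int = 8) -> int:
    """Return a truncated cryptographic hash of the header, used as its fingerprint."""
    digest = hashlib.sha256(uncompressed_header).digest()
    return int.from_bytes(digest[:size_bytes], "big")

# Every occurrence of the same header yields the same fingerprint value.
fp = fingerprint(b"\x00\x11\x22\x33\x44\x55" * 8)
```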
  • the first network node 301 may store the obtained fingerprint in the compressor flow fingerprints table 316 and manage the compressor flow fingerprints table 316 for a future fingerprint search.
  • the extraction of the fingerprint is an operation performed for each and every uncompressed header, as soon as it has been received as an input for the compressor of the first network node 301 and optionally after having successfully "classified” it (meaning by classified that the fields of the uncompressed header have been recognized, i.e. Ethernet fields separated from IPv4 or IPv6 fields and separated from UDP fields etc.). From that moment onwards, the uncompressed header may be identified by its fingerprint.
  • the first network node 301 may determine whether or not a fingerprint has previously been obtained for the uncompressed header.
  • the check for a match of fingerprint may be performed on the fingerprint value once it has been extracted from the uncompressed header rather than on the whole header itself. In this sense the fingerprint value "replaces" the uncompressed header for the identification purposes.
  • the codebook at the compressor 101 is no longer present. It is replaced with a table associating fingerprints with "unique tags" (one for each fingerprint) which will represent the compressed information sent over the hop.
  • the uncompressed header is stored within the decompressor codebook and it is associated with the same "tag". Information needed for creating such association is exchanged during setup of compression operations, for any compressible flow, by means of an in-band control protocol.
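  • The asymmetry just described can be sketched as follows (the names, and the in-band exchange modelled here as a direct call, are assumptions): the compressor keeps only fingerprint-to-tag associations, while the whole uncompressed header is stored only at the decompressor, keyed by the same tag:

```python
class Compressor:
    """Keeps only fingerprints and tags, never the full headers."""
    def __init__(self):
        self.fingerprint_to_tag = {}
        self._next_tag = 0

    def setup_flow(self, header: bytes, fp: int, decompressor: "Decompressor") -> int:
        """Allocate a tag for a new flow and align the decompressor in-band."""
        tag = self._next_tag
        self._next_tag += 1
        self.fingerprint_to_tag[fp] = tag
        # In-band control protocol: the decompressor learns (tag, uncompressed header).
        decompressor.learn(tag, header)
        return tag

class Decompressor:
    """Stores the whole uncompressed header, keyed by tag."""
    def __init__(self):
        self.tag_to_header = {}

    def learn(self, tag: int, header: bytes) -> None:
        self.tag_to_header[tag] = header
```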
  • the first network node 301 obtains the fingerprint by retrieving it from a place where it has been stored, e.g. from a memory space. In case the first network node 301 has determined that a fingerprint has not previously been obtained, the first network node 301 may proceed to step 404 unless available storage is exhausted.
  • the first network node 301 assumes that steps 404-411 have previously been performed for that given traffic flow.
  • the flow may then be compressed: the first network node 301 may send the header in the compressed form by replacing, in the traffic flow, the uncompressed header with the tag.
  • the first network node 301 may obtain a tag associated with the fingerprint.
  • the tag may be used to identify or point to the position within the compressor codebook table in which the actual fingerprint is stored.
  • the first network node 301 may store the obtained fingerprint and the associated tag. This may be stored in the form of for example a table as illustrated in figure 5a.
  • the table may also be referred to as a compressor flow fingerprints table 316 as seen in figures 3 and 5a.
  • the uncompressed header may comprise a plurality of header field sets, e.g. a first header field set and a second header field set.
  • the first network node 301 may classify a first header field set in a first class.
  • a first header field set in the first class may have high entropy.
  • the first class may also be referred to as a high entropy class.
  • the first network node 301 may classify a second header field set in a second class.
  • a second header field set in the second class may have low entropy.
  • the second class may also be referred to as a low entropy class.
  • a header field set in the first class may have a higher entropy than a header field set in the second class.
  • the cardinality of a first set to which the first header field belongs may be larger than the cardinality of a second set to which the second header field belongs.
  • the first network node 301 may transcode the second header field set in the second class, i.e. the class associated with low entropy.
  • the first network node 301 may transmit information associated with the transcoding to the second network node 305 (not shown in figure 4) so that the second network node 305 comprises a transcoded database 335.
  • An example of the transcoded database 335 is seen in Figure 5b.
  • the example in figure 5b is on a bit level. For example, a second header field value 1101001001000001 in the second class may be transcoded into 0000 (k bits), a second header field value 0111010100011010 in the second class may be transcoded into 0001, a second header field value 11100110 in the second class may be transcoded into 0010, a second header field value 01100111 in the second class may be transcoded into 1101, a second header field value 110101001010111111010010 in the second class may be transcoded into 1110, a second header field value 100001000000110100001110 in the second class may be transcoded into 1111, etc.
  • a second header field value comprising n bits may be transcoded into k bits.
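  • A sketch of such a predefined transcoding table, reusing the bit patterns of the example above (the dictionary layout and names are assumptions; the values are those exemplified for figure 5b):

```python
# Predefined transcoding table for a "low entropy" (second class) field:
# known field values (of arbitrary bit length) map to short k-bit codes (here k = 4).
TRANSCODE = {
    "1101001001000001":         "0000",
    "0111010100011010":         "0001",
    "11100110":                 "0010",
    "01100111":                 "1101",
    "110101001010111111010010": "1110",
    "100001000000110100001110": "1111",
}
# The inverse table is used at the decompressor to restore the original value.
DETRANSCODE = {code: value for value, code in TRANSCODE.items()}
```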
  • the second network node 305 is aware of the "transcoding" table in use. This table may be fixed and predefined, as is the set of fields in the header classified as "low entropy" (second class). There is a fixed association between the value assumed by the second class field, and a different value, which is the "transcoded" one. Both nodes 301, 305 may have a predefined knowledge of such association. A traffic flow showing values assumed by a "second class" header field which are not comprised in that table may result in that traffic flow not being compressed.
  • Transcoding is a term that may be used to indicate a further encoding level upon already coded values, or a translation between one coded form and another one.
  • a further encoding step is to be made prior to storing values in the codebook. Such latter values therefore come in a "transcoded form”.
  • the transcoding may also be referred to as a further or second encoding. In decompression, while reconstructing the uncompressed information, for those header fields alone a transcoding ("further" decoding) step must be made after having fetched the stored info from the compressed tag sent over the hop.
  • a first level of encoding is the actual encoding of the header field values. Transcoding is applied on top of the first encoding, i.e. as a second encoding of the low entropy header fields.
  • Steps 406-408, related to the classification and transcoding, may be performed independently of steps 402-405. For example, step 401 may be performed first and then steps 402-405 (without performing steps 406-408), and vice versa.
  • the first network node 301 may transmit information indicating an association between the uncompressed header, fingerprint and tag to the second network node 305. This may also be referred to as in-band codebook alignment (learning) as illustrated in figure 3. This step may be performed after step 408 or after step 405.
  • In step 410, the second network node 305 may store the received information into its decompressor codebook 330.
  • An example of the decompressor codebook is seen in figure 5c.
  • the first network node 301 may compress the uncompressed header. In one embodiment, this may be performed by replacing the uncompressed header in the traffic flow with the tag obtained in step 405. In another embodiment, this may be performed by replacing the second header field set in the uncompressed header with the transcoded second header field.
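  • Continuing the earlier sketches, the per-packet operation can be illustrated as a plain substitution (the 4-byte tag width and the framing are assumptions, not figures mandated by this text):

```python
TAG_BYTES = 4  # assumed width of the compressed tag sent over the hop

def compress(packet: bytes, header_len: int, tag: int) -> bytes:
    """Replace the uncompressed header with the tag; the payload is untouched."""
    return tag.to_bytes(TAG_BYTES, "big") + packet[header_len:]

def decompress(packet: bytes, tag_to_header: dict) -> bytes:
    """Replace the tag with the stored uncompressed header from the decompressor codebook."""
    tag = int.from_bytes(packet[:TAG_BYTES], "big")
    return tag_to_header[tag] + packet[TAG_BYTES:]
```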
  • the first network node 301 may send the traffic flow comprising the compressed header to the second network node 305.
  • This tag may be associated with the fingerprint in the compressor database (at the compressor 301) and with the whole uncompressed header in the decompressor codebook.
  • the second network node 305 may decompress the received compressed header.
  • the second network node 305 may detect an error in the decompression. Such error may be that there may be at least one other header with substantially the same fingerprint. The error may be used to indicate that there are at least two uncompressed headers with substantially the same fingerprint.
  • the second network node 305 transmits information indicating the error in the decompression to the first network node 301.
  • This may also be referred to as providing feedback for blacklisting flows with errors in reconstruction as seen in figure 3.
  • the first network node 301 may prevent the particular traffic flow associated with an error from future compression. This may also be referred to as blacklisting of traffic flows. In case an error in decompressing a traffic flow is detected at one endpoint, the compressor at the other endpoint is promptly informed in order to put that traffic flow into a "managed blacklist" with the aim to immediately stop its compression and, temporarily or permanently, prevent compression if the same traffic flow is received again.
  • the Header field classifier block includes the capability of recognizing and properly classifying each and every header field in the traffic flows to come to a proper identification in classes
  • the first class fields in the compressible header may be described as all the fields in the header which are considered as data sources with high entropy.
  • a data source may be considered as showing a high entropy when the cardinality of the set to which the values it may assume belong is very large, and potentially approaching the maximum amount of values that it can take, given the field size.
  • the second class fields in the compressible header may be described as all the fields in the header which are considered as data sources with low entropy.
  • a data source may be considered as showing low entropy when the cardinality of the set to which the values it may assume belong is small or even predictable. In any case, for the second class fields, an integer k may always be determined for which the mentioned cardinality is less than or equal to 2^k.
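  • For illustration, the smallest such k is simply the ceiling of log2 of the cardinality; the helper below (names are assumptions) restates that definition:

```python
import math

def bits_needed(cardinality: int) -> int:
    """Smallest integer k such that cardinality <= 2**k."""
    return max(1, math.ceil(math.log2(cardinality)))

# e.g. a second class field that can take 16 distinct values needs k = 4 bits,
# matching the 4-bit transcoded codes in the example of figure 5b.
assert bits_needed(16) == 4
```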
  • Entropy is a term used in source coding theory and, more in general, in information theory; it is considered as a measure of the degree of uncertainty of the output of a source of information which is expected to produce randomly distributed data. The more predictable the expected values of data produced by said source of information are, the lower the resulting degree of entropy.
  • cardinality of a set mentioned above may be described as being a measure of the "number of elements of the set".
  • the same block may assign and provide transcoded information to the rest of the compression logic mechanism and look-up tables, limiting to those header traffic flow fields classified as second class (for instance in the form of a table associating transcoded information to the header fields as exemplified in Figure 5b).
  • Such information may be stored at the decompressor side, i.e. the second network node 305, after successful establishment and completion of the codebook learning phase, together with the rest of the traffic flow information in an "uncompressed" format.
  • One criterion of the classification of the header fields in the traffic flows may be dependent on the particular configuration of the communications system 300 in which the header compression feature will operate. If the header compression feature is placed in the context of mobile backhauling, a well given and restricted set of values may be applicable to many (or some) fields of the traffic flows.
  • the figures measuring the gain obtained and reported in figure 2 are related to the particular MLHC case. The compression of different sources might lead to even higher compression gain figures.
  • the transcoding step may be performed on the basis of a pre-determined transcoding table, known at both the near and far end terminating the function (i.e. at both the first network node 301 and the second network node 305).
  • a traffic flow may comprise first class header fields, second class header fields and uncompressible header fields.
  • the traffic flow may further comprise payload data, but the payload data is not illustrated in Figure 6.
  • the header size is reduced as schematically illustrated in Figure 6.
  • the compressed header comprises the first class header fields, the transcoded second class header fields and the uncompressible header fields.
  • the compressed header is of a substantially smaller size than the uncompressed header.
  • the compressed data comprises both first class and second class fields. Both kinds of fields belong to the header only.
  • the difference between two classes is that in the Decompressor codebook, the first class header fields are stored "as they are", while the second class ones are stored in the "transcoded" form.
  • the packet classification enables locating header fields within the traffic flow, and within the header itself, identification of each single header field belonging to OSI layers, starting from Layer 2.
  • In figure 3, the position of the cryptographic hash block calculator 315 has been depicted.
  • a cryptographic hash value of its header may be calculated right after successful traffic flow header recognition and classification.
  • the hash value calculation may be performed by means of suitable cryptographic algorithms and for the mere purpose of identifying the header belonging to the traffic flow itself for any successive occurrence of the same header instance (and thus of the same traffic flow). This is compared to the current technology where the presence of the compressor codebook is in fact intended to host the header in the uncompressed format in order to maintain a repository used for the same sole purpose of identifying each incoming flow.
  • the cryptographic hash value used as a fingerprint may be extracted from the incoming header.
  • the cryptographic hash is here used for the purpose of uniquely identifying the header.
  • the header of the flow may be uniquely tagged with such value.
  • Such value may also be referred to as header traffic flow fingerprints. It may be seen that a look-up table structure in which fingerprints for all compressed traffic headers are stored is functionally able to replace the compressor codebook database depicted in figure 1. This is valid provided that the size of the flow fingerprint has been dimensioned according to some criteria, as it will be detailed later on, so that the probability of two different headers leading to the same fingerprint is kept suitably low.
  • the fingerprint calculation may be repeated every time a valid traffic flow header is fed into the first network node 301 .
  • the only way to determine if, and particularly which, valid traffic header has been fed into the Header Compression (MLHC) macro-block is to perform fingerprint value comparisons between newly incoming headers and the fingerprint look-up table content.
  • MLHC Header Compression
  • the fingerprint enables identification of the header in uncompressed form at the second network node 305.
  • the fingerprint may be described as being extracted from the traffic flow header.
  • the first network node 301 performs a check about the presence of the obtained fingerprint in the compressor flow fingerprints table 316.
  • the embodiments herein provide an advantage of dramatically reducing the amount of associative memory needed by the header compression mechanism 313 by avoiding the use of full header information at the compressor side, i.e. the first network node 301.
  • a second action aiming at minimizing compression errors induced by possible false identification may be put in place.
  • a mechanism based on checking integrity of traffic flows reconstructed at the decompressor side (i.e. the second network node 305) (for instance - but not exclusively - a failing Ethernet FCS check might be leveraged) is implemented. This may be done in order to feed back to the classifier at the compressor side that a well given traffic flow should be blacklisted for compression after a stated number of consecutive reconstruction errors.
  • Such action ensures a well determined upper bound on the reconstruction error events induced by false identification.
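  • A minimal sketch of such a managed blacklist (the threshold value, the names and the choice to count only consecutive errors are assumptions made for illustration):

```python
class ManagedBlacklist:
    """Tracks consecutive reconstruction errors reported by the decompressor and
    excludes a flow from compression once a stated threshold is reached."""

    def __init__(self, max_consecutive_errors: int = 3):
        self.max_consecutive_errors = max_consecutive_errors
        self._errors = {}        # flow fingerprint -> consecutive error count
        self._blacklisted = set()

    def report_error(self, fp: int) -> None:
        self._errors[fp] = self._errors.get(fp, 0) + 1
        if self._errors[fp] >= self.max_consecutive_errors:
            self._blacklisted.add(fp)   # stop compressing this flow

    def report_success(self, fp: int) -> None:
        self._errors[fp] = 0            # only consecutive errors are counted

    def may_compress(self, fp: int) -> bool:
        return fp not in self._blacklisted
```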
  • the fingerprint is used to create a fixed association between the header and the compressed value which is ultimately transmitted over the hop.
  • This value may be considered as a random tag, or label, chosen without any constraint between the available ones.
  • the compressed values might be taken from a set of much smaller cardinality than the set of values of the fingerprints.
  • the fingerprint may be extracted from the uncompressed header.
  • the next free tag in the free tags set is chosen, prior to making the association with the fingerprint.
  • the association between the tag and the fingerprint may also be removed, at runtime. In this way it is possible to drop the compression for that given header.
  • removal of the association between the tag and the fingerprint may be performed during normal compression operation when there is the need to stop compressing a header. This may take place when it is detected that compressing that particular header is, for some reason, no more useful (for instance, an "aging" mechanism has detected that a given traffic flow has not been observed for a long time).
  • a fingerprint may also be referred to as a particularity, indication, footmark, reproduction, etc.
  • the target of the fingerprint size dimensioning may be to achieve a desired level of protection against "collision in identification" using a yet-to-determine quantity of (reduced) hash values over all the available set of values to be identified.
  • the fingerprint size is smaller than the uncompressed header size.
  • the fingerprint size may depend on the wanted amount of protection on hash collision (the event in which two different headers - inputs for the fingerprint calculating function - lead to the same fingerprint).
  • the following expression may be used, where H is defined as the total number of values that the hash key (fingerprint) may assume given its dimension:
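  • The expression itself is not reproduced in this text. One standard expression of this kind, given here purely as an illustration and not necessarily the one intended, is the birthday-bound approximation P ≈ 1 - e^(-n(n-1)/(2H)) ≈ n(n-1)/(2H), where n is the number of distinct headers (traffic flows) to be identified and H is as defined above.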
  • the size of the fingerprint may therefore be dimensioned upon the expected number of different traffic flow combination for MLHC (compression of Ethernet, IP, UDP layers headers) that may be experienced taking into consideration a real mobile backhaul network topology. Such number may also be dependent on the particular position in the communications system 300 in which the embodiments herein will be operating. In one embodiment, it may be fair to assume 4K as the possible maximum number of flows at level 2 in a typical mobile backhaul topology. If it is assumed that, on the average, 5 or 6 different flows at L3 (IPv4 and IPv6) and L4 level (UDP only) may be supported on top of a L2 flow, it might be possible to obtain 25K as a fair estimate of the maximum number of flows to be supported by MLHC. Such consideration on maximum number of traffic flows to be supported by MLHC may be important in dimensioning the total number of entries in the codebooks.
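  • Under the birthday-bound assumption above, the figures just given can be checked with a few lines; the 64-bit value matches the 8 bytes per flow fingerprint exemplified further below, and the function name is an assumption:

```python
import math

def collision_probability(num_flows: int, fingerprint_bits: int) -> float:
    """Birthday-bound approximation of the probability that two different
    headers map to the same fingerprint (H = 2**fingerprint_bits)."""
    h = 2.0 ** fingerprint_bits
    return 1.0 - math.exp(-num_flows * (num_flows - 1) / (2.0 * h))

# With the 25K flows estimated above and an 8-byte (64-bit) fingerprint,
# the probability of a collision is on the order of 1.7e-11.
p = collision_probability(25_000, 64)
```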
  • the size of a single compressed source (header) is greater.
  • the implementation of data compression and decompression may be realized by means of an FPGA or IC based solution.
  • the Multi-Layer Header Compression introduces a great increase in the size of the single source (the Header in the uncompressed format). This is illustrated in Figure 7.
  • the MLHC Source Header is maximum 104 bytes. This is four times the maximum EHC Source Header, which is 26 bytes.
  • the size of the codebook will grow linearly with the size of the single entry of the source. Furthermore, since the cardinality of the codebook must also be increased due to the fact that the compression is extended to traffic flows which belong to a larger set of network layers, the total size of the codebooks in the current technology will undergo, with respect to the EHC solution, an increase of more than one order of magnitude, potentially reaching dimensions of some Mbits per supported channel. This is illustrated in Figure 8. That situation impairs the feasibility of the embodiments herein, especially for hardware based (for a low added latency) codebooks solutions and when the embodiments herein are deployed in a multi-channel product scenario (Mini-Link TN, Smart Packet).
  • the number of supported uncompressed traffic flows is exemplified to be 2048, which is a number that may vary due to requirements on different products.
  • the traffic flow maximum size i.e. the total compressible fields is exemplified to be 102 bytes per flow.
  • Ethernet, MPLS, Ethernet PW, IPv4, IPv6, UDP Headers are compressed.
  • the first class fields in the traffic flow are exemplified to be in total 84 bytes in the flow. Note that the number of bytes may take any value and that transcoding is not possible for the first class fields.
  • the second class fields in the incoming traffic flow are exemplified to be in total 18 bytes in the flow. Note that transcoding is possible for the second class fields.
  • the memory space in the second network node 305 for transcoding may be 2 bytes per flow.
  • the fingerprint size is exemplified to be 8 bytes per flow.
  • the additional structures of the compressor 301 (bytes per link) is exemplified to be 4 bytes per flow.
  • the additional structures of the decompressor 305 (bytes per flow) is exemplified to be 4 bytes per flow.
  • the size of the compressor 301 is exemplified to be 24576 bytes and the size of the decompressor 305 is exemplified to be 176128 bytes.
  • the grand total is then 200704 bytes, i.e. 1.605632 Mbits.
  • the bytes per flow ratio is exemplified to be 98 in figure 8.
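  • The totals just listed are simple arithmetic on the per-flow figures; the sketch below reproduces them (the per-flow breakdown in the assertions is one possible reading of the figures above, not something stated explicitly):

```python
flows = 2048                       # supported uncompressed traffic flows
compressor_bytes = 24_576          # figure given for the compressor side
decompressor_bytes = 176_128       # figure given for the decompressor side

total_bytes = compressor_bytes + decompressor_bytes   # 200_704
total_mbit = total_bytes * 8 / 1_000_000              # 1.605632 Mbit
bytes_per_flow = total_bytes // flows                 # 98

# Assumed breakdown: compressor = 8-byte fingerprint + 4 bytes additional structures,
# decompressor = 84 bytes of first class fields + 2 bytes of transcoding memory.
assert compressor_bytes == flows * (8 + 4)
assert decompressor_bytes == flows * (84 + 2)
```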
  • the total amount of necessary associative memory to be reserved for the codebook is greatly reduced, allowing implementation scenarios that were previously even not considerable.
  • a first estimate of the extent of such advantage might be inferred from the estimated bytes per flow occupation of the embodiments herein with respect to the current technology.
  • This is exemplified in Figure 9.
  • the number of supported uncompressed traffic flows is 2048, which may vary, due to requirements on different products.
  • the traffic flow maximum size, i.e. the total compressible fields, is 102 bytes per flow.
  • the Ethernet, MPLS, Ethernet PW, IPv4, IPv6, UDP Headers are compressed.
  • the additional structures of the compressor, i.e. first network node 301 (bytes per link) and the additional structures of the decompressor 305 may each be 4 bytes per flow.
  • the compressor may comprise 217088 bytes and the decompressor 305 may comprise 208896 bytes, such that the grand total number of bytes is 425984. This results in a grand total of 3.407872 Mbits and a ratio of 208 bytes per flow.
  • Figure 10 is a flowchart describing the present method in the first network node 301 , for handling traffic flows in the communications system 300.
  • The method comprises the further steps to be performed by the first network node 301:
  • This step corresponds to step 401 in figure 4a and is seen in figure 10a.
  • the first network node 301 detects the uncompressed header comprised in the incoming traffic flow.
  • the first network node 301 obtains a fingerprint of an uncompressed header comprised in an incoming traffic flow.
  • the fingerprint is a cryptographic hash value identifying the uncompressed header.
  • the fingerprint may be obtained by calculating the cryptographic hash value if a fingerprint has not previously been obtained for the uncompressed header.
  • the fingerprint may be obtained from a table if the fingerprint has previously been obtained for the uncompressed header.
  • This step corresponds to step 403 in figure 4a and is seen in figure 10a.
  • the first network node 301 determines if a fingerprint has previously been obtained for the uncompressed header.
  • This step corresponds to step 404 in figure 4a and is seen in figure 10a.
  • the first network node 301 obtains a tag associated with the fingerprint.
  • This step corresponds to step 405 in figure 4a and is seen in figure 10a.
  • the first network node 301 stores the obtained fingerprint and the associated tag.
  • The uncompressed header may comprise a first header field set and a second header field set.
  • the first network node 301 classifies the first header field set in a first class when it is associated with a first degree of entropy.
  • This step corresponds to step 407 in figure 4a and is seen in figure 10a.
  • the first network node 301 classifies the second header field set in a second class when it is associated with a second degree of entropy. The first degree is higher than the second degree.
  • This step corresponds to step 408 in figure 4a and is seen in figure 10b.
  • the first network node 301 transcodes the second header field set in the second class. The transcoding may be performed based on transcoding information known at both the first network node 301 and the second network node 305.
  • transcoding information may comprise an association between the transcoded second header field set and the second header field set.
  • This step corresponds to step 409 in figure 4a and is seen in figure 10b.
  • the first network node 301 transmits information indicating an association between the tag, the uncompressed header and the fingerprint to the second network node 305.
  • This step corresponds to step 411 in figure 4b and is seen in figure 10b.
  • the first network node 301 compresses the uncompressed header by replacing the uncompressed header with the tag.
  • This step corresponds to step 412 in figure 4b and is seen in figure 10b.
  • the first network node 301 transmits the traffic flow comprising the compressed header and the payload data to a second network node 305.
  • This step corresponds to step 415 in figure 4b and is seen in figure 10b.
  • the first network node 301 receives, from the second network node 305, information indicating an error associated with decompression of the compressed header.
  • the error may indicate that there are at least two uncompressed headers with substantially the same fingerprint.
  • This step corresponds to step 416 in figure 4b and is seen in figure 10b.
  • the first network node 301 prevents the header indicated with an error from further compression.
  • Embodiments of the first network node 301 configured to perform the method actions for handling traffic flows in a communications system 300, as described above in relation to Figures 4a, 4b, 10a and 10b, are depicted in Figure 11.
  • the first network node 301 may be adapted to detect, e.g. by means of a detecting module 1101 , the uncompressed header comprised in the incoming traffic flow.
  • the detecting module 1101 may be a processor 1102 of the first network node 301.
  • the detecting module 1101 may also be referred to as a detecting unit, detecting circuit, detecting means or means for detecting.
  • the first network node 301 is adapted to obtain, e.g. by means of an obtaining module 1103, a fingerprint of an uncompressed header comprised in an incoming traffic flow.
  • the fingerprint is a cryptographic hash value identifying the uncompressed header.
  • the obtaining module 1103 may be the processor 1102 of the first network node 301.
  • the obtaining module 1103 may also be referred to as an obtaining unit, obtaining circuit, obtaining means or means for obtaining.
  • the first network node 301 may be further adapted to determine, e.g. by means of a determining module 1105, whether or not a fingerprint has previously been obtained for the uncompressed header.
  • the determining module 1105 may be the processor 1102 of the first network node 301.
  • the determining module 1105 may also be referred to as a determining unit, determining circuit, determining means or means for determining.
  • the first network node 301 is further adapted to obtain, e.g. by means of the obtaining module 1103, a tag associated with the fingerprint.
  • the fingerprint may be obtained by calculating the cryptographic hash value if a fingerprint has not previously been obtained for the uncompressed header.
  • the fingerprint may be obtained from a table if the fingerprint has previously been obtained for the uncompressed header.
  • the first network node 301 may be further adapted to store, e.g. by means of a memory 1106, the obtained fingerprint and the associated tag.
  • the memory 1106 may comprise one or more memory units.
  • the memory 1106 may be adapted to be used to store data, received data streams, power, fingerprint, tags, headers, traffic flows, errors, information indicating the first and second classes, transcoding information, threshold values, time periods, configurations, schedulings, and applications to perform the methods herein when being executed in the first network node 301.
  • the memory 1106 comprises instructions executable by the processor 1102.
  • the first network node 301 may be further adapted to transmit, e.g. by means of a transmitting module 1108, information indicating an association between the tag, the uncompressed header and the fingerprint to the second network node 305.
  • the uncompressed header may comprise a first header field set and a second header field set.
  • the transmitting module 1108 may also be referred to as a transmitting unit, transmitting circuit, transmitting means, means for transmitting or output unit.
  • the transmitting module 1108 may be a transmitter, a transceiver etc.
  • the transmitting module 1108 may be a wireless transmitter of the first network node 301 of a wireless or fixed communications system.
  • the first network node 301 may be further adapted to classify, e.g. by means of a classifying module 1110, the first header field set in a first class when it is associated with a first degree of entropy and to classify the second header field set in a second class when it is associated with a second degree of entropy.
  • the first degree may be higher than the second degree.
  • the first network node 301 may be adapted to transcode the second header field set in the second class.
  • the classifying module 1110 may be the processor 1102 of the first network node 301.
  • the classifying module 1110 may also be referred to as a classifying unit, classifying circuit, classifying means or means for classifying.
  • the first network node 301 is adapted to compress, e.g. by means of a compressing module 1113, the uncompressed header by replacing the uncompressed header with the tag, and to transmit the traffic flow comprising the compressed header and the payload data to a second network node 305.
  • the compressing module 1113 may be the processor 1102 of the first network node 301.
  • the compressing module 1113 may also be referred to as a compressing unit, compressing circuit, compressing means or means for compressing.
  • the first network node 301 may be further adapted to receive, e.g. by means of a receiving module 1115, from the second network node 305, information indicating an error associated with decompression of the compressed header, and to prevent the header indicated with an error from further compression.
  • the error may indicate that there are at least two uncompressed headers with substantially the same fingerprint.
  • the receiving module 1115 may also be referred to as a receiving unit, receiving circuit, receiving means or means for receiving.
  • the receiving module 1115 may be a receiver, a transceiver etc.
  • the receiving module 1115 may be a wireless receiver of the first network node 301 of a wireless or fixed communications system.
  • the first network node 301 may be adapted to, e.g. by means of a preventing module 1123, prevent the traffic flow associated with an error from further compression.
  • the preventing module 1123 may be the processor 1102 of the first network node 301.
  • the preventing module 1123 may also be referred to as a preventing unit, preventing circuit, preventing means or means for preventing.
  • the first network node 301 may be further adapted to, e.g. by means of a transcoding module 1128, transcode the second header field set in the second class.
  • the transcoding module 1128 may be the processor 1102 of the first network node 301.
  • the transcoding module 1128 may also be referred to as a transcoding unit, transcoding circuit, transcoding means or means for transcoding.
  • the detecting module 1101, the obtaining module 1103, the determining module 1105, the transmitting module 1108, the classifying module 1110, the compressing module 1113, the receiving module 1115, the preventing module 1123 and the transcoding module 1128 described above may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g. stored in a memory, that when executed by the one or more processors such as the processor 1130 perform as described above.
  • One or more processors, as well as the other digital hardware, may be included in a single ASIC, or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a SoC.
  • ASIC application-specific integrated circuit
  • SoC system-on-a-chip
  • a computer program may comprise instructions which, when executed on at least one processor, e.g. the processor 1102, cause the at least one processor to carry out the method steps 401-408, steps 411-412, steps 415-416 in figure 4 and steps 1001-1013 in figures 10a and 10b.
  • a carrier may comprise the computer program, and the carrier is one of an electronic signal, optical signal, radio signal or computer readable storage medium.
  • FIG. 12 is a flowchart describing the present method in the second network node 305, for handling traffic flows in the communications system 300.
  • the second network node 305 comprises a codebook comprising information indicating associations between uncompressed headers and fingerprints identifying the uncompressed header and tags associated with the fingerprint.
  • the second network node 305 may comprise information indicating an association between a transcoded second header field set and the second header field set classified in the second class.
  • the method comprises the further steps to be performed by the second network node 305:
  • the second network node 305 receives the information indicating the association between the tag, the uncompressed header and the fingerprint from the first network node 301.
  • This step corresponds to step 410 in figure 4a.
  • the second network node 305 stores the received information in the codebook.
  • the second network node 305 receives a traffic flow comprising a compressed header and payload data from a first network node 301 .
  • the compressed header comprises the tag.
  • This step corresponds to step 413 in figure 4b.
  • the second network node 305 decompresses the received compressed header by replacing the tag with the uncompressed header from the codebook.
  • This step is a substep of step 1204.
  • the second header field in the compressed header may be represented by the transcoded second header field.
  • the second network node 305 performs the decompression of the received compressed header by replacing the transcoded second header field set with the second header field set from the information indicating the association between the transcoded second header field set and the second header field set classified in the second class.
  • the second network node 305 detects an error associated with the decompression of the compressed header.
  • the error may indicate that there are at least two uncompressed headers with substantially the same fingerprint.
  • the second network node 305 transmits, to the first network node 301, information indicating the detected error.
  • Embodiments of the second network node 305 configured to perform the method actions for handling traffic flows in a communications system 300, as described above in relation to Figures 4a, 4b and 12, are depicted in Figure 13.
  • the second network node 305 may be adapted to receive, e.g. by means of a receiving module 1301, a traffic flow comprising a compressed header and payload data from a first network node 301.
  • the compressed header comprises the tag.
  • the receiving module 1301 may also be referred to as a receiving unit, receiving circuit, receiving means or means for receiving.
  • the receiving module 1301 may be a receiver, a transceiver etc.
  • the receiving module 1301 may be a wireless receiver of the second network node 305 of a wireless or fixed communications system.
  • the second network node 305 may be further adapted to decompress, e.g. by means of a decompressing module 1305, the received compressed header by replacing the tag with the uncompressed header from the codebook.
  • the decompressing module 1305 may be a processor 1308 of the second network node 305.
  • the decompressing module 1305 may also be referred to as a decompressing unit, decompressing circuit, decompressing means or means for decompressing.
  • the second network node 305 may be further adapted to, e.g. by means of a detecting module 1310, detect an error associated with the decompression of the compressed header, and to transmit, to the first network node 301, information indicating the detected error.
  • the error may indicate that there are at least two uncompressed headers with substantially the same fingerprint.
  • the detecting module 1310 may be the processor 1308 of the second network node 305.
  • the detecting module 1310 may also be referred to as a detecting unit, detecting circuit, detecting means or means for detecting.
  • the second network node 305 may be further adapted to receive, e.g. by means of the receiving module 1301, the information indicating the association between the tag, the uncompressed header and the fingerprint from the first network node 301.
  • the second network node 305 may be further adapted to store, e.g. by means of a memory 1313, the received information in the codebook.
  • the memory 1313 may comprise one or more memory units.
  • the memory 1313 may be adapted to be used to store data, received data streams, power, fingerprint, tags, headers, traffic flows, errors, information indicating the first and second classes, transcoding information, threshold values, time periods, configurations, schedulings, and applications to perform the methods herein when being executed in the second network node 305.
  • the memory 1313 may comprise instructions executable by the processor 1308.
  • the second network node 305 may comprise information indicating an association between a transcoded second header field set and the second header field set classified in the second class.
  • the second header field set in the compressed header may be represented by the transcoded second header field set.
  • the second network node 305 may be further adapted to, e.g. by means of the decompressing module 1305, decompress the received compressed header by replacing the transcoded second header field set with the second header field set from the information indicating the association between the transcoded second header field set and the second header field set classified in the second class.
  • the second network node 305 may be further adapted to, e.g. by means of a transmitting module 1315, transmit, to the first network node 301, information indicating the detected error.
  • the transmitting module 1315 may also be referred to as a transmitting unit, transmitting circuit, transmitting means, means for transmitting or output unit.
  • the transmitting module 1315 may be a transmitter, a transceiver etc.
  • the transmitting module 1315 may be a wireless transmitter of the second network node 305 of a wireless or fixed communications system.
  • the receiving module 1301, the decompressing module 1305, the detecting module 1310 and the transmitting module 1315 described above may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g. stored in a memory, that when executed by one or more processors such as the processor 1308 perform as described above.
  • one or more of these processors may be included in a single ASIC, or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a system-on-a-chip.
  • a computer program may comprise instructions which, when executed on at least one processor, e.g. the processor 1308, cause the at least one processor to carry out the method steps 409-410 and 412-415 in figure 4 and steps 1201-1206 in figure 12.
  • a carrier may comprise the computer program, and the carrier is one of an electronic signal, optical signal, radio signal or computer readable storage medium.
  • the present mechanism for handling data traffic in a communications system 300 may be implemented through one or more processors, such as the processor 1102 in the first network node 301 and the processor 1308 in the second network node 305, together with computer program code for performing the functions and actions of the embodiments herein.
  • the processor may be for example a Digital Signal Processor (DSP), ASIC processor, Field-programmable gate array (FPGA) processor or microprocessor.
  • the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into at least one of the first network node 301 and the second network node 305.
  • One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick.
  • the computer program code can furthermore be provided as pure program code on a server and downloaded to at least one of the first network node 301 and the second network node 305.
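
To make the decompression and error handling described in the list above more concrete, the following is a minimal Python sketch. It is illustrative only: the names (Decompressor, learn, decompress, FingerprintCollision), the dictionary-based codebook and the 2-byte tag length are assumptions made for this example and are not prescribed by the embodiments.

class FingerprintCollision(Exception):
    # Error case from the description above: at least two uncompressed headers
    # have substantially the same fingerprint; information indicating this error
    # is transmitted back to the first network node.
    pass


class Decompressor:
    # Illustrative second-network-node side of the codebook scheme (names are assumptions).

    def __init__(self):
        self.codebook = {}               # tag -> uncompressed header
        self.header_by_fingerprint = {}  # fingerprint -> uncompressed header (collision check)

    def learn(self, tag, uncompressed_header, fingerprint):
        # Store the received association between tag, uncompressed header and fingerprint.
        known = self.header_by_fingerprint.get(fingerprint)
        if known is not None and known != uncompressed_header:
            # Two different headers map to the same fingerprint: signal the error.
            raise FingerprintCollision(fingerprint.hex())
        self.codebook[tag] = uncompressed_header
        self.header_by_fingerprint[fingerprint] = uncompressed_header

    def decompress(self, compressed_frame, tag_len=2):
        # Replace the tag at the front of the compressed frame with the uncompressed header.
        tag, payload = compressed_frame[:tag_len], compressed_frame[tag_len:]
        return self.codebook[tag] + payload  # KeyError means the association was not yet received

In this sketch the collision is detected when the association is learned; an implementation could equally detect it during decompression, which would then trigger the transmission of the information indicating the detected error to the first network node 301.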

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments herein relate to a method in a first network node (301) for handling traffic flows in a communications system (300). The first network node (301) obtains a fingerprint of an uncompressed header comprised in an incoming traffic flow. The fingerprint is a cryptographic hash value identifying the uncompressed header. The first network node (301) obtains a tag associated with the fingerprint. The first network node (301) compresses the uncompressed header by replacing the uncompressed header with the tag. The first network node (301) transmits the traffic flow comprising the compressed header and the payload data to a second network node (305).
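
Purely as an illustration of the compression side summarised in this abstract, a short Python sketch follows. The use of SHA-256 as the cryptographic hash, the 2-byte tag and the table layout are assumptions made for the example; the application text does not prescribe them.

import hashlib


def compress_header(uncompressed_header, tag_by_fingerprint):
    # Obtain a fingerprint of the uncompressed header: a cryptographic hash value
    # identifying the header (SHA-256 is an assumption, not mandated by the text).
    fingerprint = hashlib.sha256(uncompressed_header).digest()

    # Obtain a tag associated with the fingerprint; here simply the next free 2-byte index.
    if fingerprint not in tag_by_fingerprint:
        tag_by_fingerprint[fingerprint] = len(tag_by_fingerprint).to_bytes(2, "big")
    tag = tag_by_fingerprint[fingerprint]

    # Compress the header by replacing the uncompressed header with the short tag;
    # the transmitted traffic flow then carries the tag followed by the payload data.
    return tag, fingerprint


# Example: a 14-byte Ethernet-style header is replaced by a 2-byte tag.
table = {}
header = bytes.fromhex("00112233445566778899aabb0800")
tag, fingerprint = compress_header(header, table)
assert len(tag) == 2

Because the fingerprint identifies the uncompressed header, repeated headers of the same flow map to the same short tag, which is what yields the compression gain.
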
PCT/EP2014/057912 2014-04-17 2014-04-17 Procédés de compression de trafic efficace sur des réseaux ip WO2015158389A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2014/057912 WO2015158389A1 (fr) 2014-04-17 2014-04-17 Procédés de compression de trafic efficace sur des réseaux ip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2014/057912 WO2015158389A1 (fr) 2014-04-17 2014-04-17 Procédés de compression de trafic efficace sur des réseaux ip

Publications (1)

Publication Number Publication Date
WO2015158389A1 true WO2015158389A1 (fr) 2015-10-22

Family

ID=50639452

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/057912 WO2015158389A1 (fr) 2014-04-17 2014-04-17 Procédés de compression de trafic efficace sur des réseaux ip

Country Status (1)

Country Link
WO (1) WO2015158389A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6134244A (en) * 1997-08-30 2000-10-17 Van Renesse; Robert Method and system for optimizing layered communication protocols
US20090319547A1 (en) * 2008-06-19 2009-12-24 Microsoft Corporation Compression Using Hashes
US20120257630A1 (en) * 2011-04-11 2012-10-11 Qualcomm Innovation Center, Inc. Interactive header compression in peer-to-peer communications

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11576079B2 (en) 2017-10-16 2023-02-07 Ofinno, Llc Ethernet header compression in a wireless network
US11743767B2 (en) 2017-10-16 2023-08-29 Ofinno, Llc Compression of ethernet packet header
US20220159099A1 (en) * 2019-03-27 2022-05-19 Apple Inc. Ethernet header compression
US11792302B2 (en) * 2019-03-27 2023-10-17 Apple Inc. Ethernet header compression

Similar Documents

Publication Publication Date Title
US9294589B2 (en) Header compression with a code book
JP2022031735A (ja) ハイブリッドデータ圧縮および解凍のための方法、デバイス、およびシステム
JP3900435B2 (ja) データパケットのルーティング方法およびルーティング装置
US10015285B2 (en) System and method for multi-stream compression and decompression
US20200250129A1 (en) Rdma data sending and receiving methods, electronic device, and readable storage medium
EP3163837B1 (fr) Compression d'en-tête pour messages ccn utilisant un dictionnaire statique
CN109075798B (zh) 可变大小符号基于熵的数据压缩
EP3813318B1 (fr) Procédé de transmission de paquet, dispositif de communication, et système
EP3166277A1 (fr) Compression d'en-tête cadrée par bit à l'aide d'un dictionnaire pour messages ccn
WO2015158389A1 (fr) Procédés de compression de trafic efficace sur des réseaux ip
US10523790B2 (en) System and method of header compression for online network codes
EP3163838B1 (fr) Compression d'en-tête pour messages ccn utilisant un dictionnaire apprenant
Vidhyapriya et al. Energy efficient data compression in wireless sensor networks.
CN106878054B (zh) 一种业务处理方法和装置
US8077742B1 (en) Data transmission using address encoding
US10375601B2 (en) Condensed message multicast method and a system employing same
Iatrou et al. Efficient OPC UA binary encoding considerations for embedded devices
US10742783B2 (en) Data transmitting apparatus, data receiving apparatus and method thereof having encoding or decoding functionalities
Sepulcre et al. Can Beacons be Compressed to Reduce the Channel Load in Vehicular Networks?
Kho et al. Joint LZW and Lightweight Dictionary-based compression techniques for congested network
Kho et al. Application of Data Compression Technique in Congested Networks
Grant REAL-TIME COMPRESSION OF SOFTWARE TRACES

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14721261

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14721261

Country of ref document: EP

Kind code of ref document: A1