US6812873B1 - Method for decoding data coded with an entropic code, corresponding decoding device and transmission system - Google Patents


Info

Publication number
US6812873B1
US6812873B1 (application number US10/111,833; application publication US 11183302 A)
Authority
US
United States
Prior art keywords
decoding
channel
lattice
source
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime, expires
Application number
US10/111,833
Other languages
English (en)
Inventor
Pierre Siohan
Lionel Guivarch
Jean-Claude Carlac'h
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telediffusion de France ets Public de Diffusion
Orange SA
Original Assignee
France Telecom SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from FR9914321A external-priority patent/FR2800941A1/fr
Application filed by France Telecom SA filed Critical France Telecom SA
Assigned to TELEDIFFUSION DE FRANCE and FRANCE TELECOM (assignment of assignors' interest; see document for details). Assignors: CARLAC'H, JEAN-CLAUDE; GUIVARCH, LIONEL; SIOHAN, PIERRE
Application granted
Publication of US6812873B1
Legal status: Expired - Lifetime (adjusted expiration)

Classifications

    • H: ELECTRICITY; H03: ELECTRONIC CIRCUITRY; H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/41: Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes, using the Viterbi algorithm or Viterbi processors
    • H03M13/2957: Turbo codes and decoding
    • H03M13/6312: Error control coding in combination with data compression
    • H03M7/40: Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
    • H03M13/4115: List output Viterbi decoding
    • H03M13/4138: Soft-output Viterbi algorithm (SOVA) based decoding, i.e. Viterbi decoding with weighted decisions
    • H03M13/4146: Soft-output Viterbi decoding according to Battail and Hagenauer, in which the soft output is determined using path metric differences along the maximum-likelihood path ("SOVA" decoding)

Definitions

  • the field of the invention is the transmission or broadcasting of digital data signals. More specifically, the invention relates to the decoding of transmitted digital signals, especially source decoding. In particular, it applies to the decoding of data encoded with a source encoding method using entropic codes such as the VLC, or variable length code.
  • the digital communications systems commonly used today rely on encoding systems that implement, on the one hand, source encoding and, on the other, channel encoding. Conventionally, these two encoding systems are optimized separately.
  • the purpose of the source encoding is to achieve maximum reduction in the redundancy of the source signal to be transmitted. Then, to protect this information from the disturbance inherent in all transmission, the channel encoder introduces controlled redundancy.
  • the invention thus relates to the decoding of entropic codes, and especially but not exclusively to the joint source-channel decoding of a system implementing an entropic code.
  • Combined decoding has many fields of application, for example video image transmission, especially according to the MPEG-4 (Moving Picture Experts Group) standard.
  • Variable length codes are well known.
  • appendix A gives a quick review of the Huffman code.
  • the special embodiment of the invention described hereinafter can be applied especially but not exclusively to this type of entropic code.
  • Variable length codes are of vital importance in the limitation of the band occupied by the transmitted signal, but their use makes transmission less error-robust. Furthermore, it is difficult to use the a priori probabilities of the source when decoding, because it is not possible to know the beginning and the end of each word, since the length of these words is, by definition, variable.
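This fragility is easy to demonstrate. The sketch below uses a hypothetical prefix-code table (the patent does not fix one at this point) whose word lengths match those of FIG. 1; a single corrupted bit shifts every subsequent word boundary:

```python
# Hypothetical prefix code with word lengths 3, 3, 2, 2, 2 (cf. FIG. 1).
CODE = {"a": "110", "b": "111", "c": "00", "d": "01", "e": "10"}
DECODE = {bits: word for word, bits in CODE.items()}

def vlc_encode(words):
    return "".join(CODE[w] for w in words)

def vlc_decode(bits):
    """Greedy prefix decoding: word boundaries are only implicit."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in DECODE:
            out.append(DECODE[buf])
            buf = ""
    return out

sent = ["c", "a", "d", "e"]
bits = vlc_encode(sent)                      # "001100110"
assert vlc_decode(bits) == sent              # error-free: boundaries recovered

# flip one bit: the decoder loses synchronization for the rest of the frame
corrupted = bits[:2] + ("0" if bits[2] == "1" else "1") + bits[3:]
assert vlc_decode(corrupted) != sent
```

With the flipped bit, the decoder outputs a different but perfectly plausible word sequence, which is why a priori information on the source is needed to detect and correct such errors.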
  • K. Sayood and N. Demir have proposed [4] a decoding of VLC words.
  • the two major drawbacks of this type of decoding are found here. These are lattice complexity that increases rapidly with the number of different VLC words and decoding that remains at the symbol (or word) level;
  • K. P. Subbalaskshmi and J. Vaisey [6] give a lattice structure that can be used to know the beginning and end of each word and therefore enables the use of the a priori information available on the VLC words sent.
  • This decoder works on the words and sends no extrinsic information on the decoded bits;
  • Yet another goal of the invention is to provide a decoding technique of this kind that improves the performance obtained with known channel codes, especially encoders implementing turbo-codes.
  • this method implements a decoding lattice for which each transition corresponds to a binary value 0 or 1 of one of the bits of a sequence of bits corresponding to one of said words.
  • the invention is based on a novel approach to the decoding of variable length codes: it is the transitions at the bit level that are considered, not, as is conventional, the transitions at the word or symbol level.
  • Said entropic code may take the form of a binary tree comprising a root node, a plurality of intermediate nodes and a plurality of leaf nodes, a sequence of bits corresponding to one of said words being formed in considering the successive transitions of said tree from said root node up to the leaf node associated with said word.
  • the states of each stage of said lattice comprise a single state known as an extremal state, corresponding to said root node and to all of said leaf nodes, and a distinct state, called an intermediate state, for each of said intermediate nodes.
  • a piece of likelihood information is associated with each transition of said lattice.
  • Said likelihood information is then advantageously a metric taking account, firstly of a piece of information representing the transmission channel and secondly a piece of a priori information on said entropic code.
  • said a priori information belongs to the group comprising:
  • Said entropic code may belong especially to the group comprising:
  • reversible variable length codes (RVLC).
  • said a priori information is used for a forward phase in the path through said lattice and a backward phase in the path through said lattice.
  • the invention also relates to a method for the joint source-channel decoding of the digital signals received, based on this approach, where the source encoding implements an entropic code associating a distinct bit sequence with each of the words of an alphabet, the length of the sequence being a function of the probability of occurrences of said word.
  • this joint decoding method implements a source decoding operation using at least one decoding lattice, each transition of which corresponds to a binary value 0 or 1 of one of the bits of the bit sequence corresponding to one of said words, said source decoding operation delivering a piece of information extrinsic to the channel decoding.
  • the channel decoding may advantageously implement a turbo-code type of decoding that can rely on a parallel type implementation or a serial type approach.
  • the joint decoding method of the invention relies on an iterative implementation.
  • each of the iterations may sequentially comprise a channel decoding step and a source decoding step, said channel decoding step delivering a piece of channel information taken into account in said source decoding step, this source decoding step delivering an a priori piece of information taken into account in said channel decoding step.
  • the method may comprise:
  • a second channel decoding step fed by said first channel decoding step and said first source decoding step, through an interleaver identical to the interleaver implemented at the encoding, and feeding said first channel decoding step through a de-interleaver symmetrical with said interleaver;
  • the invention also relates to a method for the joint source-channel decoding of a received digital signal, where the source encoding operation implements an entropic code associating a distinct bit sequence with each of the words of an alphabet, the length of this bit sequence being a function of the probability of occurrence of said word. Said method implements a channel decoding lattice, similar to the channel encoding lattice, in which with each state of each stage there is associated a piece of information representing a sequence, passing through this state, of bits decoded in the past (with respect to the path direction of said lattice), designating the position of the bits considered in the tree representing said entropic code, and/or a piece of information for the verification of the number of decoded words and/or of the value taken by said decoded bits.
  • the joint decoding method advantageously comprises, for each of said states, the following steps:
  • this method implements an iterative procedure which may for example comprise the following steps:
  • a first channel decoding operation implementing a channel decoding lattice for which each state has a piece of information available designating the position of the bits considered in the tree representing said entropic code
  • a second channel decoding operation fed by said first channel decoding operation, through an interleaver identical to the interleaver implemented at the encoding, and feeding said first channel decoding operation through a de-interleaver symmetrical with said interleaver;
  • turbo-code type decoding is implemented.
  • each iteration of said turbo-code type decoding implements a block matrix, having rows and columns, on which a row decoding operation (or column decoding operation respectively) is performed followed by a column decoding operation (or row decoding operation respectively), and this a priori information is used for said row decoding operation (and column decoding operation respectively).
  • each row corresponds to a code word formed by k information bits and n−k redundancy bits, and each piece of a priori information is used on said k information bits.
  • the code used is an RVLC code
  • the decoding method also implements a step to detect the end of the sequence of code words to be decoded.
  • This detection step may rely especially on the implementation of at least one of the techniques belonging to the group comprising:
  • the method furthermore comprises a step of searching for the most probable authorized sequence by means of a reduced symbol lattice having a single initial state (d0) from which there are as many outgoing and arriving words as there are code words.
  • the invention also relates to all the digital data decoding devices implementing one of the decoding methods described here above, as well as digital signal transmission systems implementing an entropic source encoding operation and a channel encoding operation at transmission and a decoding operation, as described further above, at reception.
  • FIG. 1 shows an exemplary Huffman code in the form of a binary tree
  • FIG. 2 illustrates a symbol lattice according to the invention, known as a reduced symbol lattice, corresponding to the tree of FIG. 1;
  • FIG. 3 illustrates a binary lattice according to the invention, known as a reduced binary lattice, corresponding to the tree of FIG. 1;
  • FIG. 4 is a drawing illustrating a joint turbo-code type of decoding structure, implementing a “SISO_huff” module according to the invention
  • FIG. 5 illustrates the progress of the joint source-channel decoding algorithm according to a second mode of implementation
  • FIG. 6 shows an iterative scheme of joint decoding implementing the algorithm illustrated in FIG. 5;
  • FIG. 7 shows a combination of the methods of FIGS. 4 to 6 ;
  • FIG. 8, commented upon in appendix A, illustrates the construction of a Huffman code;
  • FIG. 9 shows the Huffman code block turbo-decoding matrix
  • FIG. 10 illustrates a joint decoding of RVLC codes
  • FIGS. 11 and 12 show two modes of implementation of a joint decoding of RVLC codes with turbo-codes, with the use of a reduced lattice.
  • the goal of the invention therefore is to present a decoding technique that reduces the symbol error rate, by using a priori information on the source, probabilities on the VLC code words or probabilities of transition between the words. It has been observed that this reduction is higher if the decoders use a priori information on the source at the bit level, because information on the channel decoding can be extracted only at this bit level.
  • the technique of the invention can be used to determine not only the optimum word sequence but also the dependability associated with each of the bits of each word.
  • the invention uses a piece of a priori information on the source, namely the a priori probabilities of the words of the VLC code, or that of the Markov transition between the words of the code.
  • This technique may advantageously be used in a turbo-code type iterative scheme and improve transmission performance.
  • a soft decision corresponds to a non-binary real decision.
  • a threshold-setting operation on this decision will give a hard decision.
  • the ratio E_b/N_0 is the ratio of the energy received per useful bit to the one-sided spectral density of the noise;
  • the DC (Direct Component) band of an image is the direct component of the image after transformation by wavelet or DCT.
  • a received bit sequence is referenced X or Y. This sequence consists of words referenced x_i, these words consisting of bits referenced x_i^j;
  • an RVLC (“Reversible VLC”) [8] is a variable length code whose words can be decoded in both directions;
  • a SISO (“Soft-In Soft-Out”) decoder takes soft values as inputs and delivers soft outputs.
  • Appendix B shows how it is possible to extract information from the variable length word sequences sent by a Huffman encoder (see appendix A), by modeling the source either by a random process generating independent symbols or by a Markovian random process.
  • FIG. 1 shows an exemplary Huffman code in the form of a binary tree. It consists of five words a, b, c, d, e with probabilities p(a), p(b), p(c), p(d), and p(e), and respective lengths 3, 3, 2, 2 and 2.
  • Table 1 table of the VLC codes of the tree of FIG. 1 .
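The Huffman construction reviewed in appendix A can be sketched as follows. The probabilities here are illustrative values chosen to be consistent with the stated word lengths (3, 3, 2, 2, 2); the patent's actual table is not reproduced in this extract:

```python
import heapq

def huffman_lengths(probs):
    """Huffman construction (cf. appendix A): repeatedly merge the two
    least probable subtrees; each merge lengthens every word contained
    in the merged subtrees by one bit."""
    heap = [(p, [w]) for w, p in sorted(probs.items())]
    heapq.heapify(heap)
    lengths = {w: 0 for w in probs}
    while len(heap) > 1:
        p1, ws1 = heapq.heappop(heap)
        p2, ws2 = heapq.heappop(heap)
        for w in ws1 + ws2:
            lengths[w] += 1
        heapq.heappush(heap, (p1 + p2, ws1 + ws2))
    return lengths

# illustrative probabilities consistent with the lengths 3, 3, 2, 2, 2
probs = {"a": 0.125, "b": 0.125, "c": 0.25, "d": 0.25, "e": 0.25}
lengths = huffman_lengths(probs)
assert lengths == {"a": 3, "b": 3, "c": 2, "d": 2, "e": 2}
```

The two least probable words (a and b) are merged first and end up deepest in the tree, reproducing the lengths of FIG. 1.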
  • the reduced symbol lattice uses the a priori probabilities of the words of the source, working at the word level.
  • This lattice may also be used as a complementary lattice during the decoding using the Markov probabilities.
  • this reduced lattice may advantageously be implemented as a complement to the decoding of the invention.
  • the method for decoding a VLC source described above uses the a priori probabilities of the VLC words at the word level. This averts the need to send dependability values on each bit of the decoded sequence. It is shown hereinafter how the a priori probabilities of the words may be broken down into a product of a priori probabilities on the bits. Then, the decoding methods that rely on this approach are described.
  • the probability p(a) is therefore the product of three probabilities associated with each of the bits of the word a.
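This decomposition follows from the tree structure: the a priori probability of a bit at a given node is the probability mass of the subtree selected by that bit, divided by the mass of the subtree below the node. A sketch, using a hypothetical codeword assignment consistent with FIG. 1 (tree nodes are identified by their bit prefix):

```python
from math import prod

# Hypothetical codewords consistent with the FIG. 1 word lengths.
CODE = {"a": "110", "b": "111", "c": "00", "d": "01", "e": "10"}
P = {"a": 0.125, "b": 0.125, "c": 0.25, "d": 0.25, "e": 0.25}

def subtree_prob(prefix):
    """Total probability of the leaves below a tree node (= bit prefix)."""
    return sum(p for w, p in P.items() if CODE[w].startswith(prefix))

def bit_prob(prefix, bit):
    """A priori probability of emitting `bit` at the node `prefix`."""
    return subtree_prob(prefix + bit) / subtree_prob(prefix)

def word_prob_from_bits(word):
    """Recover p(word) as the product of its per-bit a priori probabilities."""
    bits = CODE[word]
    return prod(bit_prob(bits[:j], bits[j]) for j in range(len(bits)))

# e.g. p(a) = P(1|root) * P(1|"1") * P(0|"11") = 0.5 * 0.5 * 0.5 = 0.125
assert abs(word_prob_from_bits("a") - P["a"]) < 1e-12
```

These per-bit probabilities are exactly what a bit-level lattice needs on each transition.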
  • state 0: origin (root) of the tree.
  • This lattice gives the position of the bit in the word. If the current state is the state i, it is known, from the tree, that the current position is the position j of the word processed because there is a bijective correspondence between the states i of the lattice and the positions j in the tree.
  • the decoding algorithm used on this lattice for the simulations given below is a BCJR (Bahl, Cocke, Jelinek, Raviv) algorithm [13], modified by Benedetto in [14] to process the parallel branches.
  • the metric computed on each branch takes account of the available knowledge on the channel and the binary a priori probabilities computed earlier. More specifically, the formula of computation on a branch [15] is:
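The equation itself has been lost from this extract. Based on the descriptions of its terms that follow, a BCJR-type branch metric combining channel knowledge and the binary a priori probabilities would typically factor as (a hedged reconstruction, not necessarily the patent's exact expression):

```latex
\gamma_k(d_{k-1}, d_k) \;=\;
\underbrace{p(x_k \mid a_k)\; p\!\left(x_k^{p} \mid d_{k-1}, d_k\right)}_{\text{channel knowledge}}
\;\cdot\;
\underbrace{\Pr(a_k)}_{\text{binary a priori}}
```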
  • the pair (x_k, x_k^p) is the pair of likelihoods of the received bits; d_{k−1} and d_k are respectively the outgoing and arrival states of the branch; a_k is the information bit associated with the branch.
  • the first two terms of the right-hand side of equation (14) refer to the knowledge available on the channel.
  • this a priori information on the source may be of many types:
  • the a priori information consists of probabilities belonging to the set ⁇ 0,1,0.5 ⁇ .
  • This lattice can be used in a soft binary decoding of the BCJR or SOVA (“Soft Output Viterbi Algorithm”) type.
  • the soft decoding calls for a parallel processing operation that indicates the most probable preceding word for each stage. This processing is the one achieved by the reduced symbol lattice. This increase in complexity implies a preference for the SOVA algorithm or for the SUBMAP algorithm [11].
  • the method presented can be extended, without difficulty, to the case where the Huffman table contains more bits.
  • the complexity of the reduced binary lattice increases with the number of words of the Huffman table.
  • the application of the method of the invention to the RVLC codes [8] may of course be envisaged.
  • the lattice of the “SISO_huff” block may be used by decoding in the other direction, but this implies an increase in complexity.
  • One alternative would be to use the a priori probabilities in one direction for the “forward” phase and in the other direction for the “backward” phase.
  • the reduced binary lattice, or “SISO_huff”, is used for a SISO type decoding and can therefore be inserted into an iterative decoding scheme like that of FIG. 4.
  • Each decoding block 41 to 44 of FIG. 4 extracts extrinsic information on the bits to be decoded and sends this information to the next block.
  • the pieces of information that flow between the different blocks are, as in the reference [15], extrinsic probabilities.
  • the extrinsic information 451, 452 used by the next channel decoder is the one conventionally used in the turbo-codes.
  • the information used by the “SISO_huff” blocks 42, 44, shown in bold characters (461, 462), corresponds to this first piece of information from which the information derived from the previous “SISO_huff” block (44, 42) has been removed. This is done in order to follow the rule according to which no iterative decoding block must use a piece of information that it has already produced.
  • the overall structure of the iterative decoder is not discussed in greater detail. It is known per se [1].
  • the modules E (47_1 and 47_2) are interleavers (identical to those implemented in the encoding operation) and the module E* (48) is a de-interleaver symmetrical with the interleaver.
  • Y_1k: likelihood information on the parity bits coming from the encoder 1;
  • Y_2k: likelihood information on the parity bits coming from the encoder 2;
  • Proba: a priori probability.
  • This scheme is an exemplary iterative joint decoding system where the a priori information and the information of the channel are used in turn on their associated lattice. It can also be planned to make joint use of these two pieces of information on the lattice of the channel decoder as can be seen in the second method proposed.
  • the joint decoding proposed in FIG. 4 performs the two decoding functions iteratively and sequentially.
  • a technique is presented in which a single decoding block carries out both operations simultaneously. This technique can be applied also to the decoding of a Markov source.
  • the concept developed can be applied to any transmission chain formed by a (convolutional or block) channel decoder lattice and a source that can be represented by a tree.
  • the lattice used is that of the channel encoder and requires no new construction.
  • the decoding algorithm is a Viterbi type algorithm. It decodes the most probable word sequence. The computation of the metric brings into play not only the information on the channel (this is the usual case) but also the binary a priori probabilities.
  • with each state of each stage there is an associated piece of information dependent on the sequence (passing through this state) of the past decoded bits, with respect to the lattice path direction.
  • This information may designate especially the position of the bit considered in the tree representing said entropic code. It may also be, for example, a verification of the number of decoded words or of the value taken by the decoded coefficients.
  • the algorithm must know which branch of the Huffman tree corresponds to the branch currently being processed on the channel decoder. This information is enough for it to give the appropriate binary a priori probability (computed as in paragraph 4.1 and stored beforehand in the table).
  • This piece of information can easily be accessed by keeping the position of the bit in the tree up to date for each stage and each state of the lattice of the channel decoder.
  • node: next node in the tree.
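The position-tracking just described can be sketched as follows. The codeword table is a hypothetical assignment consistent with FIG. 1; tree nodes are identified by their bit prefix, the root being the empty prefix:

```python
import math

# Hypothetical codewords consistent with the FIG. 1 word lengths;
# an internal tree node is identified by its bit prefix ("" = root).
CODE = {"a": "110", "b": "111", "c": "00", "d": "01", "e": "10"}
P = {"a": 0.125, "b": 0.125, "c": 0.25, "d": 0.25, "e": 0.25}
ROOT = ""

def subtree_prob(prefix):
    return sum(p for w, p in P.items() if CODE[w].startswith(prefix))

def apriori_metric(node, bit):
    """-log2 of the a priori probability of emitting `bit` at `node`;
    this term is added to the channel metric of the lattice branch."""
    return -math.log2(subtree_prob(node + bit) / subtree_prob(node))

def advance(node, bit, nwords):
    """Side information kept with each state of the channel lattice:
    position in the tree, plus the number of completed words (the
    'verification of the number of decoded words' mentioned above)."""
    nxt = node + bit
    if nxt in CODE.values():      # leaf reached: a word is complete
        return ROOT, nwords + 1
    return nxt, nwords

# decoding the three bits of word "a" returns to the root, one word counted
node, nwords = ROOT, 0
for b in "110":
    node, nwords = advance(node, b, nwords)
assert (node, nwords) == (ROOT, 1)
```

Keeping only (node, word count) per state is what makes this cheap: no new lattice is built, the channel lattice is simply annotated.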
  • a scheme for use in an iterative decoding operation is shown in FIG. 6.
  • the first decoder 61 uses the channel information and the a priori information on the same lattice.
  • the second decoder 62 cannot use the a priori information because the interleaver 63, referenced E, breaks the relationships between the VLC code words.
  • this second decoder uses only the probabilities p(0) and p(1), which in most cases are different.
  • Each iteration of the turbo-codes in blocks comprises a soft (weighted-decision) decoding of the rows and then a soft decoding of the columns (or vice versa).
  • the method described below is used in the same way whether the a priori information is used on a row decoding operation or on a column decoding operation. Note similarly that, for convolutional turbo-codes, the a priori information can be used only in one direction (row or column), because the interleaver breaks the order of the bits in the other direction.
  • Each row (or column) of the block matrix corresponds to a code word.
  • for a code with a rate of k/n, a code word contains k information bits and n−k redundancy bits.
  • the a priori information is therefore used on the first k bits of each code word.
  • the node reached in the tree of the Huffman code must be kept in memory so that it can be used for the decoding of the following group of k bits, as illustrated in FIG. 9.
  • This information can be sent in order to be certain of the starting node at each start of a code word. However, that would reduce the compression correspondingly.
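The row-to-row bookkeeping of FIG. 9 can be sketched as follows; the codeword table is the hypothetical assignment used in the other sketches, consistent with the FIG. 1 word lengths:

```python
# Hypothetical codewords consistent with the FIG. 1 word lengths.
CODE = {"a": "110", "b": "111", "c": "00", "d": "01", "e": "10"}
WORDS = set(CODE.values())

def row_boundary_nodes(bits, k):
    """Tree position (bit prefix, '' = root) at the start of each k-bit
    row of the block matrix: the side information that must be carried
    from one group of k information bits to the next (cf. FIG. 9)."""
    nodes, node = [], ""
    for i, b in enumerate(bits):
        if i % k == 0:
            nodes.append(node)      # node in force at this row boundary
        node += b
        if node in WORDS:           # leaf: word complete, back to root
            node = ""
    return nodes

bits = "001100110"                  # c, a, d, e with the table above
# with k = 4, the second row starts mid-word, inside node "11"
assert row_boundary_nodes(bits, 4) == ["", "11", "1"]
```

Because VLC words straddle row boundaries, the decoder of row r+1 can only apply the correct per-bit a priori probabilities if the node left pending by row r is available.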
  • the principles of the turbo-codes are the same, whether block turbo-codes or convolutional turbo-codes are used.
  • the difference lies in the interleaver and in each of the two decoders DEC 1 and DEC 2 .
  • a particular feature of this method is the use of a priori information on the source during the forward and backward phases of the lattice path, by means of the property whereby RVLC codes can be decoded in both directions.
  • RVLC codes [8] have been proposed.
  • the RVLC codes are an extension of the Huffman codes. They verify the condition known as the prefix condition in both directions. Thus, whether an RVLC word sequence is read in one direction or the other, a binary sequence can never represent an encoded element and, at the same time, constitute the start of the code of another element. This dual property causes a loss in compression level and dictates constraints during the construction of the code. It must be noted that the RVLC codes may be symmetrical or asymmetrical.
  • the following example shows the case of a symmetrical four-word RVLC code.
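The example itself is not reproduced in this extract. One well-known symmetric four-word RVLC is {0, 11, 101, 1001} (each word reads the same backwards), and the two-directional prefix condition can be checked mechanically:

```python
def prefix_free(words):
    """No word is a proper prefix of another (the prefix condition)."""
    return not any(u != v and v.startswith(u) for u in words for v in words)

def is_rvlc(words):
    """An RVLC satisfies the prefix condition in both reading directions."""
    return prefix_free(words) and prefix_free([w[::-1] for w in words])

# a symmetric four-word RVLC: decodable forward and backward
assert is_rvlc(["0", "11", "101", "1001"])

# an ordinary prefix code need not qualify: reversed, "01" becomes
# a prefix of reversed "110", so backward decoding is ambiguous
assert not is_rvlc(["00", "01", "10", "110", "111"])
```

The second assertion illustrates the loss-of-compression trade-off mentioned above: the RVLC constraint rejects codes that would be perfectly valid Huffman codes.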
  • R. Bauer and J. Hagenauer [20] propose an iterative soft decoding scheme linking the channel decoder and the source decoder, the latter being a lattice, used at the bit level, representing the RVLC code.
  • the sequences can be classified and memorized in the order of probability of occurrence. This storage enables the use of certain sequences when the most probable sequence does not verify an a priori condition.
  • the first two types of errors correspond to the decoding of a word that does not exist in the Huffman table.
  • the error detection makes it possible to stop the decoding and roughly localize the error.
  • with VLC codes, the rest of the frame is considered to be lost.
  • with RVLC codes, it is possible to start the decoding in the backward direction from the end (assuming that it is known where the sequence ends) until an error is detected.
  • the use of RVLCs thus makes it possible to recover a part of the information.
  • a possible decoding strategy is summarized in [22]. This strategy distinguishes between the cases where the decoding operations in the forward and backward directions detect an error and those where they do not detect it.
  • One approach to overcoming the first two types of errors consists in carrying out a hard decoding operation by making a search, from among the authorized sequences, for that sequence which is best correlated with the received sequence. This search is made by means of the reduced symbol lattice, which has already been commented upon.
  • a sub-optimal method is the serial arrangement of the channel and source lattices described above, possibly separated by an interleaver if the algorithm is iterative [20]. This method quickly becomes complex as the number of words in the VLC or RVLC table grows.
  • Another method of very low complexity envisages the concatenation of the two lattices.
  • the channel decoding is then controlled by the use of the a priori information on the source, as proposed further above.
  • the three methods may be used jointly with the turbo-codes, but only the last two can be envisaged in terms of the trade-off between gain and computational complexity.
  • a first new method according to the invention is based on the last method referred to for decoding VLCs, and extends it to RVLC codes by using a parallel or serial type List-Viterbi algorithm [25].
  • the necessity of the List-Viterbi algorithm for the decoding of the RVLC codes can be explained by the fact that, unlike in the case of the decoding of VLC codes, the sequences are not all authorized at reception. Storage of the most probable authorized sequences enables them to be reused to replace an unauthorized sequence during the decoding phase along the lattice of the channel decoder.
  • this new method can also be extended to the case of the decoding of VLCs or RVLCs applied to real data for which, for example, there is an additional piece of a priori information on the number of words of the sequence or on the interval of values that the coefficients may take.
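The selection step of this list-based idea can be sketched as follows (the codeword table is the hypothetical FIG. 1-compatible assignment; the ranking of candidates by path metric is assumed to come from the List-Viterbi search itself):

```python
# Hypothetical codeword table (cf. the FIG. 1 word lengths).
CODE = {"a": "110", "b": "111", "c": "00", "d": "01", "e": "10"}
DECODE = {bits: w for w, bits in CODE.items()}

def authorized(bits, expected_words=None):
    """True if `bits` parses completely into codewords, optionally with
    a known word count (the extra a priori information suggested above)."""
    count, buf = 0, ""
    for b in bits:
        buf += b
        if buf in DECODE:
            count += 1
            buf = ""
    if buf:                        # dangling bits: not a word sequence
        return False
    return expected_words is None or count == expected_words

def best_authorized(ranked, expected_words=None):
    """List-Viterbi idea: candidate sequences arrive ranked by path
    metric; keep the best-ranked one that is an authorized sequence."""
    return next((c for c in ranked if authorized(c, expected_words)), None)

# "001" leaves a dangling bit, so the second-best candidate is kept
assert best_authorized(["001", "0010"]) == "0010"
```

Only the acceptance test changes when extra a priori constraints (word count, coefficient range) are available; the list machinery is unchanged.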
  • A second new method for the decoding of RVLC codes consists in using the equivalent of the List-Viterbi algorithm for the SUBMAP algorithm [26], in both directions of decoding, as shown in FIG. 10 .
  • The usefulness of working with the SUBMAP algorithm arises out of its mode of operation: a forward decoding phase and a backward decoding phase are followed by a third phase.
  • This a posteriori probability (APP) phase 112 uses the results of the two previous phases to return the APP of the decoded bits.
  • Each decoding direction may use its own a priori information 113 .
  • To make the best possible use of the a priori information in the backward direction, it is useful to know the final state of the lattice. This information is obtained either by transmitting it, by placing padding bits, or by using the "tail-biting" or circular lattice technique [27].
  • The a priori information used may be of two types, as already mentioned.
  • The soft sequence obtained is not necessarily an authorized sequence because, although the forward and backward phases guarantee this condition, the APP phase does not. It is then advantageous to have this channel decoding checked, on the source side, by a reduced symbol lattice 114 giving the most probable authorized sequence.
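The three phases above can be sketched by a minimal forward/backward/APP computation (an illustration, not the SUBMAP algorithm itself): a two-state Markov bit source is observed through an AWGN channel, and the APP of each bit combines the two decoding directions. The transition matrix and noise variance are assumed values.

```python
import math

P_TRANS = [[0.9, 0.1],     # assumed a priori transition probabilities P(s'|s)
           [0.2, 0.8]]     # (state = last emitted bit)

def likelihood(y, bit, sigma=0.8):
    """Channel term p(y | bit) for BPSK (0 -> +1, 1 -> -1) in AWGN."""
    x = 1.0 - 2.0 * bit
    return math.exp(-((y - x) ** 2) / (2 * sigma ** 2))

def forward_backward(ys):
    n = len(ys)
    alpha = [[0.0, 0.0] for _ in range(n)]
    beta = [[0.0, 0.0] for _ in range(n)]
    # forward phase
    for s in (0, 1):
        alpha[0][s] = 0.5 * likelihood(ys[0], s)
    for t in range(1, n):
        for s in (0, 1):
            alpha[t][s] = likelihood(ys[t], s) * sum(
                alpha[t - 1][p] * P_TRANS[p][s] for p in (0, 1))
    # backward phase
    beta[n - 1] = [1.0, 1.0]
    for t in range(n - 2, -1, -1):
        for s in (0, 1):
            beta[t][s] = sum(P_TRANS[s][q] * likelihood(ys[t + 1], q) * beta[t + 1][q]
                             for q in (0, 1))
    # APP phase: combine both directions and normalize
    apps = []
    for t in range(n):
        g = [alpha[t][s] * beta[t][s] for s in (0, 1)]
        apps.append(g[1] / (g[0] + g[1]))     # P(bit_t = 1 | observations)
    return apps
```

With observations [1.1, 0.9, -1.2], the APPs of the first two bits fall below 0.5 and the last rises above it, as expected.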
  • The decoding method referred to here above can be applied advantageously to a turbo-code type scheme.
  • The channel encoder is then replaced by a turbo-code. This makes it possible to keep the gain obtained at low signal-to-noise ratio throughout the range of signal-to-noise ratios.
  • The source lattice 120 is used in series with the two decoders, as can be seen in FIG. 11 .
  • The second method takes up the second principle described further above: the channel lattice uses the a priori information directly, as explained in FIG. 12 . Just as in the previous part, to make the best possible use of the a priori information in the "backward" direction, it is useful to know the final state of the lattice. This information is obtained either by transmitting it, by placing padding bits, or by using the technique of circular encoding. This technique is to the Recursive Systematic Codes (RSC) what the "tail-biting" technique is to the Non-Systematic Codes (NSC) [28].
  • The first method requires the running of the decoding algorithm on the "SISO_huff" lattice, while the second method simply requires the updating of a table during the decoding on the lattice of the channel decoder. Furthermore, unlike the first method, the second method has a computational complexity that does not increase with the number of words in the VLC table.
  • Since the use of Huffman-type variable length codes is very widespread, these decoding techniques may be suited to many contexts. The applications are numerous: Huffman tables are found, for example, in the MPEG-4 standard for the encoding of DC-band and AC-band coefficients of intra-coded images, as well as for motion vector encoding.
  • The invention can also be used in a more full-scale transmission system, taking account of a standard-compliant or near-standard-compliant system of image encoding and of a channel model representing the radio-mobile channel, such as the BRAN ("Broadband Radio Access Network") channel.
  • The Huffman code ([18]) is an entropic code that compresses a source by encoding low-probability words on a binary length greater than average and high-probability words on a short binary length.
  • The Huffman code is much used because no other integer code (a code whose symbols are encoded on a whole number of bits) gives a lower average number of bits per symbol.
  • One of its essential characteristics is that a binary sequence can never both represent an encoded element and constitute the beginning of the code of another element. This characteristic of Huffman encoding enables a representation by means of a binary tree structure, where the set of words is represented by the set of paths from the root of the tree to its leaves.
  • Each terminal node i is assigned the probability p(a_i) and provides the binary code of the word a_i.
  • The algorithm for the construction of the tree consists, at each step, in summing the two lowest probabilities of the previous step and joining them at a node assigned the sum probability.
  • Finally, the node with probability 1 is obtained: this is the root of the tree.
  • The example explained takes the case of four words, as shown in FIG. 6 .
  • Table 1 — Example of Huffman table:

        Word    Probability    Binary Code
        a0      p(a0)          111
        a1      p(a1)          110
        a2      p(a2)          10
        a3      p(a3)          0
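The tree construction described above can be sketched as follows. This is a hedged illustration: the probabilities are assumed values chosen to reproduce the code lengths of Table 1, and since the 0/1 labelling of the branches is a free choice, the exact bit patterns may differ from Table 1 while the lengths match.

```python
import heapq

def huffman(probs):
    """probs: {word: probability}. Returns {word: binary code string}."""
    # each heap entry: (probability, tie-breaker, {word: partial code})
    heap = [(p, i, {w: ""}) for i, (w, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # the two lowest probabilities...
        p1, _, c1 = heapq.heappop(heap)
        merged = {w: "0" + code for w, code in c0.items()}
        merged.update({w: "1" + code for w, code in c1.items()})
        heapq.heappush(heap, (p0 + p1, counter, merged))  # ...joined at one node
        counter += 1
    return heap[0][2]

codes = huffman({"a0": 0.10, "a1": 0.15, "a2": 0.30, "a3": 0.45})
```

With these assumed probabilities the code lengths are 3, 3, 2 and 1 bits for a0, a1, a2 and a3, as in Table 1, and the resulting code is prefix-free.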
  • Two optimal structures are used for the decoding of an encoded sequence. Each is based on a different criterion: the criterion of the most likely sequence, or the criterion of the most likely symbol, commonly called the MAP (maximum a posteriori) criterion.
  • The search for the most likely sequence is made with the Viterbi algorithm [9], which seeks the most probable sequence X emitted, the received sequence Y being known:

        max_X P(X/Y)    (1)
  • The a posteriori information on X can thus be broken down into two terms.
  • The first, log P(Y/X), relates to the information given by the channel; the second, log P(X), relates to the a priori information on the sequence sent.
  • The law of the noise P(Y/X) can be expressed as the product of the probability laws of the noise disturbing each of the words x_i and y_i and each of the transmitted bits x_i^j and y_i^j.
  • log P(Y/X) = log Π_i Π_j p(y_i^j / x_i^j)    (5)
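Numerically, the metric of a candidate sequence then splits into the channel term log P(Y/X) and the a priori term log P(X). A small sketch under assumed values (Gaussian channel with variance SIGMA squared, word probabilities in WORD_PROB; the constant terms of the Gaussian are dropped since they are identical for all candidates):

```python
import math

WORD_PROB = {"111": 0.10, "110": 0.15, "10": 0.30, "0": 0.45}  # assumed priors
SIGMA = 0.8                                                    # assumed noise

def log_channel(y, bits):
    """log P(Y/X) = sum over the bits of log p(y_i^j / x_i^j), for BPSK
    (0 -> +1, 1 -> -1) over an AWGN channel, constants dropped."""
    return sum(-((yi - (1 - 2 * int(b))) ** 2) / (2 * SIGMA ** 2)
               for yi, b in zip(y, bits))

def metric(y, words):
    """log P(Y/X) + log P(X) for a candidate sequence of VLC words."""
    bits = "".join(words)
    return log_channel(y, bits) + sum(math.log(WORD_PROB[w]) for w in words)
```

For instance, with y = [-0.8, 0.9, 1.2] (a noisy observation of "10" followed by "0"), the candidate ("10", "0") obtains a higher metric than ("110",) or ("0", "0", "0"): both its channel term and its a priori term dominate.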
  • The decoding according to the Viterbi algorithm comprises an ACS (Add Compare Select) phase: at each stage and in each state, the metric of the branch associated with each of the two competing sequences is added, the two metrics obtained are compared, and the lower-metric sequence is selected.
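One ACS step can be sketched as follows (a generic illustration with a hypothetical 2-state trellis; the lowest-metric convention of the text is kept):

```python
def acs_step(prev_metrics, branch_metrics, predecessors):
    """prev_metrics[s]: path metric of state s at the previous stage.
    branch_metrics[(p, s)]: metric of the branch p -> s.
    predecessors[s]: the two states competing to reach s.
    Returns (new_metrics, survivors)."""
    new_metrics, survivors = {}, {}
    for s, (p0, p1) in predecessors.items():
        m0 = prev_metrics[p0] + branch_metrics[(p0, s)]   # add
        m1 = prev_metrics[p1] + branch_metrics[(p1, s)]
        if m0 <= m1:                                      # compare
            new_metrics[s], survivors[s] = m0, p0         # select
        else:
            new_metrics[s], survivors[s] = m1, p1
    return new_metrics, survivors
```

For example, with previous metrics {0: 0.0, 1: 2.0} and branch metrics {(0,0): 1.0, (1,0): 0.5, (0,1): 3.0, (1,1): 0.1}, state 0 keeps predecessor 0 (metric 1.0) and state 1 keeps predecessor 1 (metric 2.1).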
  • A less complex algorithm is called the SUBMAP algorithm [11].
  • This weighting is a piece of reliability information on the estimation made by the decoder.
  • This "soft" piece of information, sometimes called extrinsic information, can be used by an external decoder, a source decoder or any other device coming into play in the reception of the information [12].
  • A VLC-encoded source, or a source coming from a channel encoder, may be represented by a lattice from which the extrinsic information can be extracted.
  • A part of the invention proposes to build a lattice of this kind from the VLC code of the source.
US10/111,833 1999-11-09 2000-11-02 Method for decoding data coded with an entropic code, corresponding decoding device and transmission system Expired - Lifetime US6812873B1 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
FR9914321 1999-11-09
FR9914321A FR2800941A1 (fr) 1999-11-09 1999-11-09 Procede de decodage de donnees codees a l'aide d'un code entropique, dispositif de decodage et systeme de transmission correspondants
FR0004787A FR2800942A1 (fr) 1999-11-09 2000-04-13 Procede de decodage de donnees codees a l'aide d'un code entropique, dispositif de decodage et systeme de transmission correspondants
FR0004787 2000-04-13
PCT/FR2000/003061 WO2001035535A1 (fr) 1999-11-09 2000-11-02 Procede de decodage de donnees codees a l'aide d'un code entropique, dispositif de decodage et systeme de transmission correspondants

Publications (1)

Publication Number Publication Date
US6812873B1 true US6812873B1 (en) 2004-11-02

Family

ID=26212344




Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4267595A (en) * 1980-02-04 1981-05-12 International Telephone And Telegraph Corporation AMI Decoder apparatus
US6215422B1 (en) * 1997-08-29 2001-04-10 Canon Kabushiki Kaisha Digital signal huffman coding with division of frequency sub-bands



Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"Improved Joint Source-Channel Decoding for Variable-Length Encoded Data Using Soft Decisions and MMSE Estimation" by M. Park et al., Proceedings of Conference on Data Compression (DCC'99), XP002143145, pp. 554, Mar. 1999.
"Iterative Source/Channel-Decoding Using Reversible Variable Length Codes" by R. Bauer et al., for Data Compression Conference XP002154238, pp. 93-102, Mar. 2000.
"Joint Source-Channel Decoding of Entropy Coded Markov Sources over Binary Symmetric Channels" by K.P. Subbalakshmi et al., International Conference on Communications vol. 1, pp. 446-450, Jun. 1999.
"Joint Source-Channel Decoding of Variable-Length Encoded Sources" by A. Murad et al., XP002143123, Information Theory Workshop ITW98, pp. 94-95, Jun. 1998.
"Joint Source-Channel Soft Decoding of Huffman Codes with Turbo-Codes" by L. Guivarch et al., Information Theory Workshop ITW98, XP-002143090 pp. 83-92, Mar. 2000.
"Utilizing Soft Information in Decoding of Variable Length Codes" by J. Wen et al., Proceedings of Conference on Data Compression (DCC'99), XP002143233, pp. 131-139, Mar. 1999.


Also Published As

Publication number Publication date
WO2001035535A1 (fr) 2001-05-17
EP1230736B1 (de) 2003-05-28
JP4836379B2 (ja) 2011-12-14
JP2003514427A (ja) 2003-04-15
ATE241873T1 (de) 2003-06-15
FR2800942A1 (fr) 2001-05-11
DE60003071T2 (de) 2004-04-01
DE60003071D1 (de) 2003-07-03
EP1230736A1 (de) 2002-08-14

Similar Documents

Publication Publication Date Title
Bauer et al. On variable length codes for iterative source/channel decoding
Bauer et al. Iterative source/channel-decoding using reversible variable length codes
US6812873B1 (en) Method for decoding data coded with an entropic code, corresponding decoding device and transmission system
Bauer et al. Symbol-by-symbol MAP decoding of variable length codes
JP3857320B2 (ja) Parallel concatenated tail-biting convolutional code and decoder therefor
Jeanne et al. Joint source-channel decoding of variable-length codes for convolutional codes and turbo codes
Guivarch et al. Joint source-channel soft decoding of Huffman codes with turbo-codes
Thobaben et al. Robust decoding of variable-length encoded Markov sources using a three-dimensional trellis
JP3741616B2 (ja) Soft-decision output decoder for convolutional codes
US7249311B2 (en) Method and device for source decoding a variable-length soft-input codewords sequence
Reed et al. Turbo-code termination schemes and a novel alternative for short frames
Lamy et al. Reduced complexity maximum a posteriori decoding of variable-length codes
Grangetto et al. Iterative decoding of serially concatenated arithmetic and channel codes with JPEG 2000 applications
Wen et al. Soft-input soft-output decoding of variable length codes
Jeanne et al. Source and joint source-channel decoding of variable length codes
EP1094612B1 (de) SOVA turbo decoder with reduced normalization complexity
Thobaben et al. Design considerations for iteratively-decoded source-channel coding schemes
US7096410B2 (en) Turbo-code decoding using variably set learning interval and sliding window
Lamy et al. Low complexity iterative decoding of variable-length codes
EP1098447B1 (de) Combined channel and entropy decoding
Shrivastava et al. Performance of Turbo Code for UMTS in AWGN channel
Fowdur et al. Joint source channel decoding and iterative symbol combining with turbo trellis-coded modulation
Zribi et al. Low-complexity joint source/channel turbo decoding of arithmetic codes
JP3514213B2 (ja) Directly concatenated convolutional coder and directly concatenated convolutional coding method
Bera et al. SOVA based decoding of double-binary turbo convolutional code

Legal Events

Date Code Title Description
AS Assignment

Owner name: FRANCE TELECOM, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SIOHAN, PIERRE;GUIVARCH, LIONEL;CARLAC'H, JEAN-CLAUDE;REEL/FRAME:013109/0695

Effective date: 20020531

Owner name: TELEDIFFUSION DE FRANCE, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SIOHAN, PIERRE;GUIVARCH, LIONEL;CARLAC'H, JEAN-CLAUDE;REEL/FRAME:013109/0695

Effective date: 20020531

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 12