GB2409618A - Telecommunications decoder device - Google Patents

Telecommunications decoder device Download PDF

Info

Publication number
GB2409618A
GB2409618A GB0329910A
Authority
GB
United Kingdom
Prior art keywords
code block
reverse
decoder
metrics
calculator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0329910A
Other versions
GB0329910D0 (en)
Inventor
Clyde Witchard
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Picochip Designs Ltd
Original Assignee
Picochip Designs Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Picochip Designs Ltd filed Critical Picochip Designs Ltd
Priority to GB0329910A priority Critical patent/GB2409618A/en
Publication of GB0329910D0 publication Critical patent/GB0329910D0/en
Priority to PCT/GB2004/005377 priority patent/WO2005081410A1/en
Publication of GB2409618A publication Critical patent/GB2409618A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3905Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3972Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using sliding window techniques or parallel windows

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Error Detection And Correction (AREA)

Abstract

A method for decoding and a decoder including first and second computation pipelines in parallel. The first computation pipeline includes a forward branch metric calculator and a forward state metric calculator in series. The second computation pipeline includes a reverse branch metric calculator and a reverse state metric calculator in series. A first half of a code block is processed in the first and second computation pipelines and then stored. A second half of the code block is subsequently processed in the first and second computation pipelines and then fed to the opposite pipeline. Forward and reverse log likelihood ratios (LLRs) are then calculated prior to outputting a decoded signal. Advantage is gained through the avoidance of "trace-back delay" periods and high memory requirements. The decoder can be used in a mobile telecommunications base station or in a telecommunications terminal.

Description

Telecommunications Decoder Device

The present invention relates to a digital decoder, and in particular to a Maximum A-Posteriori (MAP) decoder for use in a telecommunication system.
Telecommunication systems generally suffer from a degradation of the signal transmitted over a channel due to noise, attenuation and fading. The utilization of digital signals rather than analogue signals affords advantages such as improved immunity to channel noise and interference, increased channel data capacity and improved security through the use of encryption. A data signal is modulated to enable efficient transmission over a channel.
Errors in the form of missing or wrong digits can cause significant problems in digital data transmission, and various systems of error detection and control are commonly used, such as cyclic redundancy checks (CRC) and forward error correction (FEC). Error correction circuitry generally comprises an encoder at the transmitter and a decoder at the receiver.
One class of encoder, known as a convolutional encoder, converts a sequence of input bits into a code block based on a convolution of the input sequence with a fixed binary pattern or with another signal. After transmission, the code blocks are fed to a convolutional decoder, such as a MAP decoder. The convolutional encoder can be in one of numerous states (the number generally dependent upon the constraint length of the code) at the time of the conversion of the input data to a code block. A MAP decoder calculates and stores various probabilities. In a log MAP decoder (or in a max log MAP decoder) these probabilities are calculated and stored in logarithmic form, and are known as metrics. There are three types of metric, namely forward state metrics (also known as forward path metrics), reverse state metrics (also known as reverse path metrics) and branch metrics. The forward state metrics represent the probability that the encoder is in a particular state and that a particular channel sequence has been received up to this point.
The reverse state metrics represent the probability that given the encoder is in a particular state, the future received sequence will be some particular given sequence. The branch metrics represent the probability that given the encoder is in a particular state, it moves to another particular state and that we receive a particular channel sequence.
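In a log MAP or max log MAP decoder these metrics obey the standard trellis recursions set out below. This is a textbook max log MAP formulation, given here for reference only; the symbols are not drawn from the patent text. Here alpha_k(s), beta_k(s) and gamma_k(s',s) denote the forward state, reverse state and branch metrics at trellis step k, u_k is the data bit on the branch from state s' to state s, L_a is its a-priori LLR, L_c is the channel reliability and y_{k,i} are the received soft values corresponding to the transmitted coded symbols x_{k,i}(s',s):

    \alpha_k(s)     = \max_{s'} \bigl[ \alpha_{k-1}(s') + \gamma_k(s', s) \bigr]
    \beta_{k-1}(s') = \max_{s}  \bigl[ \beta_k(s)       + \gamma_k(s', s) \bigr]
    \gamma_k(s', s) = \tfrac{1}{2}\, u_k\, L_a(u_k) + \tfrac{L_c}{2} \sum_i x_{k,i}(s', s)\, y_{k,i}

The forward recursion runs from the front of the code block and the reverse recursion from the back, which is why a conventional single processor must make two passes over the block.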
The MAP decoder was first formally described by Bahl, Cocke, Jelinek and Raviv (hence the alternative name "the BCJR algorithm") in "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, pp. 284-287, March 1974. Some MAP decoders implement a logarithmic version of the MAP algorithm (in which all the metrics are stored and computed in logarithmic form) and are known as "log MAP" decoders. Commonly, decoders utilise a form of the MAP algorithm known as the "max log MAP" algorithm which reduces the computational requirements of the algorithm through use of an approximation.
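The approximation in question is the Jacobian logarithm: the exact log MAP combination of two metrics a and b uses

    \max{}^{*}(a, b) = \ln\bigl(e^{a} + e^{b}\bigr) = \max(a, b) + \ln\bigl(1 + e^{-|a-b|}\bigr),

and the max log MAP algorithm simply drops the correction term \ln(1 + e^{-|a-b|}), replacing \max^{*} by \max at a small cost in decoding performance. (This standard identity is quoted here for context; it is not reproduced in the patent text.)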
A receiver in a digital communication system generally includes a quantizer which can have two or more so-called "levels". A two-level (or binary) quantizer produces what is known as a 'hard-decision' output. A quantizer making use of more than two levels produces a 'soft-decision' output. For example, in a system utilising a four-level quantizer in the receiver, for each received data bit the quantizer output represents a decision on the most likely transmitted binary value combined with a confidence level for that decision.
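As a concrete illustration (a minimal sketch, not part of the patent; the BPSK mapping 0 -> +1, 1 -> -1 and the 0.5 threshold are assumptions made for the example), a four-level quantizer can be modelled as a hard-decision bit plus a one-bit confidence flag:

    # Minimal sketch of a four-level (2-bit soft-decision) quantizer.
    # Assumes BPSK mapping 0 -> +1, 1 -> -1; the 0.5 threshold is illustrative.
    def four_level_quantize(sample):
        hard = 1 if sample < 0 else 0              # most likely transmitted bit
        confident = 1 if abs(sample) > 0.5 else 0  # 1 = high confidence, 0 = low
        return hard, confident

    print([four_level_quantize(s) for s in (0.9, -0.1, -1.3, 0.2)])
    # [(0, 1), (1, 0), (1, 1), (0, 0)]

A two-level quantizer would return only the hard bit; the extra confidence information is what makes the output a soft decision.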
The MAP algorithm is a "soft-output" (or alternatively, "soft-in-soft-out") algorithm, which means that the algorithm outputs more than a two level representation for each decoded bit. This feature makes MAP decoders particularly suitable for decoding "turbo codes".
Parallel concatenated convolutional codes (PCCCs) are turbo codes which are formed by encoding data bits in a first recursive systematic convolutional encoder and then, after passing through an interleaver, the data bits are further encoded by a second systematic convolutional encoder. Such turbo codes yield coding gains close to theoretical Shannon capacity limits.
The decoding of such turbo codes generally requires two decode operations per iteration (and usually several iterations are involved).
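The structure of such a PCCC encoder can be sketched as follows. This is a minimal, runnable illustration, not taken from the patent; the UMTS-style generator polynomials (feedback 1 + D^2 + D^3, feedforward 1 + D + D^3) and the random interleaver are assumptions made for the example:

    import random

    def rsc_parity(bits):
        # Recursive systematic convolutional (RSC) encoder, constraint length 4.
        # Feedback 1 + D^2 + D^3, feedforward 1 + D + D^3 (illustrative choice).
        s1 = s2 = s3 = 0
        parity = []
        for b in bits:
            fb = b ^ s2 ^ s3                 # recursion (feedback) bit
            parity.append(fb ^ s1 ^ s3)      # parity output bit
            s1, s2, s3 = fb, s1, s2
        return parity

    def pccc_encode(bits, perm):
        # Rate-1/3 turbo code: systematic bit plus one parity bit from each
        # constituent encoder, the second fed with interleaved data.
        interleaved = [bits[i] for i in perm]
        return list(zip(bits, rsc_parity(bits), rsc_parity(interleaved)))

    data = [random.randint(0, 1) for _ in range(8)]
    perm = random.sample(range(8), 8)
    print(pccc_encode(data, perm))

It is the interleaver between the two constituent encoders that leads to the iterative decoding structure, with one MAP decode per constituent code in each iteration.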
Log MAP (or max log MAP) decoders can be implemented with a single processor or several processors in parallel. Single processor implementations are constrained to processing the MAP algorithm sequentially. The processor determines the forward state metrics by scanning forwards through the code block and computing and storing the branch metrics and forward state metrics. The processor then computes the reverse state metrics by scanning backwards through the code block. These reverse state metrics are used in conjunction with the previously stored forward state metrics and branch metrics to calculate final log likelihood ratios (LLRs) for output from the MAP decoder. Parallel processor implementations are able to achieve a higher data throughput than single processor implementations.
The LLR for the mth data bit of a code block is the logarithm of the ratio of the likelihood that the bit is a logical 1 to the likelihood that it is a logical 0.
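Written out, for a received sequence y,

    \mathrm{LLR}(d_m) = \ln \frac{P(d_m = 1 \mid \mathbf{y})}{P(d_m = 0 \mid \mathbf{y})},

so its sign gives the hard decision on bit d_m and its magnitude gives the confidence in that decision.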
In a typical parallel processor implementation of a MAP decoder, known as a "windowed" technique, the decoding operation comprises forward and backward recursive calculations and each code block is operated on as a series of overlapping sub-blocks. (For example, see S.S. Pietrobon, "Efficient implementation of continuous MAP decoders and a synchronization technique for turbo decoders", Proc. Int. Symp. Information Theory Appl., Victoria, B.C., Canada, 1996, pp. 586-589.) A first processor calculates forward state metrics for the sub-blocks and a second, faster processor calculates reverse state metrics for the sub-blocks. (It is also known to utilise two separate processors, each the same speed as the first processor, in place of the second processor.) As soon as the first processor has calculated the forward state metrics for a particular sub-block, the second processor starts to calculate the reverse state metrics for the same sub-block. However, each calculation performed by the second processor commences part of the way through a code block.
Therefore, the decoder cannot determine the state that the encoder was in prior to transmission of that particular code block, so the decoder commences each run of the reverse state metric engine by ignoring, for LLR calculation purposes, the state metrics calculated by the second processor for a predetermined period (generally known as a "trace-back delay"), in order to allow the reverse state metrics to converge close to their correct relative values.
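The control flow of this conventional windowed scheme can be sketched as follows. This is a runnable illustration of the scheduling only; the metric arithmetic is reduced to a placeholder, and the window and trace-back lengths are assumptions rather than values from the patent:

    WINDOW = 8
    TRACEBACK = 4

    def step(metric, symbol):
        return metric + symbol                      # placeholder for the real metric recursion

    def windowed_schedule(code_block):
        llrs = []
        for start in range(0, len(code_block), WINDOW):
            sub = code_block[start:start + WINDOW]
            # First processor: forward metrics for the sub-block, all stored.
            alpha, alphas = 0.0, []
            for sym in sub:
                alpha = step(alpha, sym)
                alphas.append(alpha)
            # Second (faster) processor: start a trace-back delay beyond the
            # sub-block; these warm-up results are discarded for LLR purposes.
            beta = 0.0
            for sym in reversed(code_block[start + WINDOW:start + WINDOW + TRACEBACK]):
                beta = step(beta, sym)
            # Reverse metrics over the sub-block itself, combined with the
            # stored forward metrics to produce LLRs.
            sub_llrs = []
            for sym, a in zip(reversed(sub), reversed(alphas)):
                beta = step(beta, sym)
                sub_llrs.append(a + beta)           # placeholder LLR combination
            llrs.extend(reversed(sub_llrs))
        return llrs

    print(windowed_schedule([0.1 * k for k in range(20)]))

The warm-up loop in this sketch corresponds to the trace-back delay.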
The main disadvantages associated with the operation of this type of parallel processor implementation of a MAP decoder stem from the trace-back delay.
Specifically, a trace-back delay (commonly a period related to 32 or 64 decoded output bits) occurs for each code block and leads to a slow overall processing rate for short code blocks. Further, the trace-back delay results in a requirement for an additional or faster state metric calculator in the reverse direction compared to the forward direction.
Therefore, the present invention seeks to provide a decoder in which problems associated with the trace-back delay are at least alleviated.
According to a first aspect of the present invention there is provided a method for decoding a code block, the method comprising the steps of: calculating and storing a forward state metric of a first half of the code block and a reverse state metric of a second half of the code block, in parallel; calculating a forward state metric of a second half of the code block and a reverse state metric of a first half of the code block, in parallel; feeding the calculated forward state metric of the second half of the code block and the stored reverse state metric of the second half of the code block to a first log likelihood ratio calculator; feeding the calculated reverse state metric of the first half of the code block and the stored forward state metric of the first half of the code block to a second log likelihood ratio calculator; and combining the outputs from the first and second log likelihood ratio calculators to determine a decoded output signal.
According to a second aspect of the present invention there is provided a decoder for decoding a code block, the decoder comprising: means for calculating forward state metrics of a first half and then a second half of the code block, means for storing the calculated forward state metrics of the first half of the code block, means for calculating reverse state metrics of a second half and then a first half of the code block, means for storing the reverse state metrics of the second half of the code block, means for feeding the calculated forward state metrics of the second half of the code block and the stored reverse state metrics of the second half of the code block to a first log likelihood ratio calculator, means for feeding the calculated reverse state metrics of the first half of the code block and the stored forward state metrics of the first half of the code block to a second log likelihood ratio calculator, and means for combining the outputs from the first and second log likelihood ratio calculators to determine a decoded output signal.
According to a third aspect of the present invention there is provided a mobile telecommunications base station including the decoder of the second aspect of the present invention.
According to a fourth aspect of the present invention there is provided a mobile telecommunications terminal including the decoder of the second aspect of the present invention.
Advantageously, the present invention provides a MAP decoder with a high data throughput. The trace-back delay is avoided, thus allowing short code blocks to be decoded at the decoder's full data rate. Further advantage is gained through the low computational and control requirements and the efficient use of memory.
For a better understanding of the present invention, and to show how it may be put into effect, reference will now be made, by way of example, to the accompanying drawings in which: Figure 1 shows a first embodiment of the decoder of the present invention; Figure 2 shows a turbo decoder implementation of the first embodiment of the present invention; and Figure 3 shows a second embodiment of the present invention.
In the log MAP decoder 10 of Figure 1, a code block buffer 12 is coupled to a first computation pipeline 14 (also known as a forward state metrics computation pipeline) and a second computation pipeline 16 (also known as a backward state metrics computation pipeline) in parallel. The first computation pipeline 14 comprises a first branch metric calculator 18, also referred to herein as a forward branch metric calculator since it is in a "forward" path of the device, coupled in series to a forward state metric calculator 20. The output of the forward state metric calculator 20 is input to a forward state metric store 26 and also a forward LLR calculator 28, both in the first computation pipeline 14. Furthermore, the forward LLR calculator 28 receives a second input from the forward branch metric calculator 18.
The second computation pipeline 16 comprises a second branch metric calculator 22, also referred to herein as a reverse branch metric calculator since it is in a "reverse" path of the device, coupled in series to a reverse state metric calculator 24. In a similar manner, the output of the reverse state metric calculator 24 is input to a reverse state metric store 30 and also a reverse LLR calculator 32, both in the second computation pipeline 16. Furthermore, the reverse LLR calculator 32 receives a second input from the reverse branch metric calculator 22.
Importantly, the forward LLR calculator 28 in the first computation pipeline 14 receives a third input from the reverse state metric store 30 in the second computation pipeline 16. Similarly, the reverse LLR calculator 32 in the second computation pipeline 16 receives a third input from the forward state metric store 26 in the first computation pipeline 14. Finally, the reverse LLR calculator 32 and the forward LLR calculator 28 are both coupled to a single LLR buffer 34.
In operation, a received code block (considered as comprising a first half and a second half) is input to, and held in, the code block buffer 12. The first computation pipeline 14 processes data from the front end of the code block to the back end of the code block. Simultaneously, the second computation pipeline 16 processes data from the back end of the code block to the front end of the code block.
Specifically, in a first stage of operation, the first half of the code block is input in sequence to the forward branch metric calculator 18 where branch metrics are determined. Each output from the forward branch metric calculator 18 is input to the forward state metric calculator 20 where it is processed. The second half of the code block is input in sequence to the reverse branch metric calculator 22 where branch metrics are also determined. Each output from the reverse branch metric calculator 22 is input to the reverse state metric calculator 24 where it is processed. Each output from the forward state metric calculator 20 is stored in the forward state metric store 26 and each output from the reverse state metric calculator 24 is stored in the reverse state metric store 30.
In a second stage of operation, the second half of the code block held in the code block buffer 12 is processed in the first computation pipeline 14 and the first half of the code block is processed in the second computation pipeline 16, as described above, up to the point where the signal is processed by the forward and reverse state metric calculators 20, 24. Here, the signal output from the forward state metric calculator 20 is fed to the forward LLR calculator 28. Similarly, the signal output from the reverse state metric calculator 24 is fed to the reverse LLR calculator 32.
The forward LLR calculator 28 then utilizes the stored input signal (stored during the first stage of operation) from the reverse state metric store 30, an input signal from the forward branch metric calculator 18 and the input signal from the forward state metric calculator 20 in order to immediately determine the forward LLR. The reverse LLR calculator 32 utilizes the stored input signal (stored during the first stage of operation) from the forward state metric store 26, an input signal from the reverse branch metric calculator 22 and the input signal from the reverse state metric calculator 24 in order to immediately determine the reverse LLR. Both the forward and the reverse LLRs are held in the LLR buffer 34, prior to forming a decoder output signal.
Thus, the decoder of Figure 1 provides an increased operational speed in comparison to a conventional decoder, because a trace-back delay period is avoided.
It is only necessary to store the metrics calculated for one half of each code block. The metrics calculated for the other half of each code block are not stored but passed immediately to the forward and reverse LLR calculators 28, 32.
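The data flow of the two stages can be illustrated with the following minimal, runnable sketch. The metric arithmetic is replaced by placeholders and all names are illustrative assumptions; only the scheduling and the storage pattern follow the description above:

    def branch_metric(symbol):
        return symbol                                # placeholder branch metric

    def state_metric(prev, gamma):
        return prev + gamma                          # placeholder state metric recursion

    def llr(state_fwd, state_rev, gamma):
        return state_fwd + state_rev + gamma         # placeholder LLR combination

    def decode_block(code_block):
        n = len(code_block)
        half = n // 2
        first, second = code_block[:half], code_block[half:]

        # Stage 1: forward pipeline works through the first half from the front,
        # reverse pipeline works through the second half from the back; both
        # sets of state metrics are stored.
        fwd_store, rev_store = [], []
        alpha = beta = 0.0
        for sym_f, sym_r in zip(first, reversed(second)):
            alpha = state_metric(alpha, branch_metric(sym_f))
            beta = state_metric(beta, branch_metric(sym_r))
            fwd_store.append(alpha)
            rev_store.append(beta)

        # Stage 2: the pipelines swap halves; each freshly computed state metric
        # is combined at once with the stored metric from the opposite pipeline,
        # so no trace-back delay is needed.
        llrs = [0.0] * n
        for i, (sym_f, sym_r) in enumerate(zip(second, reversed(first))):
            g_f, g_r = branch_metric(sym_f), branch_metric(sym_r)
            alpha = state_metric(alpha, g_f)
            beta = state_metric(beta, g_r)
            llrs[half + i] = llr(alpha, rev_store[half - 1 - i], g_f)      # forward LLR calculator
            llrs[half - 1 - i] = llr(fwd_store[half - 1 - i], beta, g_r)   # reverse LLR calculator
        return llrs

    print(decode_block([0.3, -1.2, 0.8, 0.1, -0.4, 0.9]))

An even split of the code block is assumed here for simplicity; a block of odd length would need the halves to be balanced slightly differently.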
In the turbo code decoder of Figure 2, common reference numerals have been employed where common elements have the same function as in the log MAP decoder of Figure 1. Modification is found in the LLR buffer 38 which additionally functions as a block interleaver/deinterleaver. An integrated address generator 36 feeds both the LLR buffer 38 and the code block buffer 12 simultaneously. Also, the LLR buffer 38 has a second output line coupled to the forward branch metric calculator 18 and the forward LLR calculator 28, and a third output line coupled to the reverse branch metric calculator 22 and the reverse LLR calculator 32. Further, the code block buffer 12 is coupled directly to the forward and reverse LLR calculators 28, 32.
In operation, the modified system of Figure 2 functions in a similar way to the system depicted in Figure 1.
The processed signal output from the reverse LLR calculator 32 and forward LLR calculator 28 is written to the LLR buffer 38 in linear order and read out in permuted order during an interleaving stage. The processed signal output from the LLR calculators 28, 32 is written to the LLR buffer 38 in permuted order and read out in linear order during a deinterleaving stage.
When the second half of the code block is processed in the first computation pipeline 14 and the first half of the code block is processed in the second computation pipeline 16, an a-priori LLR value is transferred from the LLR buffer 38 to both the reverse LLR calculator 32 and the forward LLR calculator 28. Consequently, an extrinsic LLR value, output from both the forward and reverse LLR calculators 28, 32, is written immediately back into the same interleaved address. In this way, the LLR buffer 38 acts alternately as a block interleaver and a block deinterleaver, thereby avoiding the necessity for double buffering.
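A minimal sketch of this single-buffer interleave/deinterleave behaviour is given below; the random permutation stands in for the real turbo interleaver pattern and all names are illustrative assumptions:

    import random

    class LLRBuffer:
        # Single buffer acting alternately as block interleaver and deinterleaver:
        # the a-priori LLR is read from a permuted address and the new extrinsic
        # LLR is written straight back to the same address, so no second buffer
        # is needed.
        def __init__(self, size, seed=0):
            self.data = [0.0] * size
            self.perm = list(range(size))
            random.Random(seed).shuffle(self.perm)   # stand-in for the turbo interleaver

        def read_a_priori(self, k):
            return self.data[self.perm[k]]

        def write_extrinsic(self, k, value):
            self.data[self.perm[k]] = value

    buf = LLRBuffer(8)
    for k in range(8):
        a_priori = buf.read_a_priori(k)
        buf.write_extrinsic(k, a_priori + 1.0)       # placeholder for the LLR calculator output
    print(buf.data)

Because each address is read and immediately rewritten within the same half-iteration, the buffer alternates roles between interleaver and deinterleaver without double buffering.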
Data throughput can be increased for the two pipeline decoders of Figure 1 and Figure 2 by extending the architecture to comprise 2n pipelines (where n is a positive integer). Figure 3 illustrates a four pipeline MAP decoder architecture, which essentially comprises two of the systems illustrated in Figure 1, operating in parallel. Again, common reference numerals have been employed where common elements have the same function as in the log MAP decoder of Figure 1. In the log MAP decoder 42 of Figure 3, a code block buffer 48 is coupled in parallel to first, second, third and fourth computation pipelines 14, 16, 44, 46.
The first and second computation pipelines 14, 16 comprise the same elements as described in relation to Figure 1. The third and fourth computation pipelines 44, 46 also comprise the same elements as in the first and second computation pipelines 14, 16. The outputs of each of the first, second, third and fourth computation pipelines 14, 16, 44, 46 are coupled in parallel to an LLR buffer 50.
In operation, the first computation pipeline 14 operates on a first quarter of the code block starting from the front end of the block, and calculates and stores the forward state metrics for the first quarter only. The second computation pipeline 16 operates on a second quarter of the code block starting from the middle of the block, and calculates and stores reverse state metrics for the second quarter only. The third computation pipeline 44 operates on a third quarter of the code block starting from the middle of the block, and calculates and stores forward state metrics for the third quarter only. The fourth computation pipeline 46 operates on a fourth quarter of the code block starting from the back end of the block, and calculates and stores reverse state metrics for the fourth quarter only.
In order to allow the state metrics of the second and third computation pipelines 16, 44 to converge close to their correct relative values, these pipelines start to operate before the first and fourth computation pipelines 14, 46, and run for a predetermined "trace-back delay" period before reaching the centre of the code block.
The four pipeline architecture of Figure 3 allows a two-fold increase in speed of data throughput in comparison to the two pipeline decoder of Figure 1 (ignoring the effects of the trace-back delay).
Further extensions of the architecture to comprise 2n pipelines (where n is a positive integer) will result in an n-fold increase in speed of data throughput.
It will be apparent to the skilled person that the above described system architectures are not exhaustive and variations on these structures may be employed to achieve a similar result whilst employing the same inventive concept. Specifically, the present inventive concept can be implemented for a MAP decoder, a log MAP decoder or a max log MAP decoder. The extended architectures described above can also be implemented with both PCCC and SCCC (Serial concatenated convolutional codes) turbo code decoder pipelines.
Further, the processors referred to above may be implemented in hardware or software. With reference to Figure 1, where the invention is implemented for a max log MAP decoder, the connection between the forward branch metric calculator 18 and the forward LLR calculator 28, and the connection between the reverse branch metric calculator 22 and the reverse LLR calculator 32 are not necessary for the correct functioning of the decoder.
It can therefore be seen that the present invention provides a convolutional decoder which has significant advantages over conventional MAP decoders.

Claims (9)

  1. A method for decoding a code block, the method comprising the steps of: calculating and storing a forward state metric of a first half of the code block and a reverse state metric of a second half of the code block, in parallel; calculating a forward state metric of a second half of the code block and a reverse state metric of a first half of the code block, in parallel; feeding the calculated forward state metric of the second half of the code block and the stored reverse state metric of the second half of the code block to a first log likelihood ratio calculator; feeding the calculated reverse state metric of the first half of the code block and the stored forward state metric of the first half of the code block to a second log likelihood ratio calculator; and combining the outputs from the first and second log likelihood ratio calculators to determine a decoded output signal.
  2. The method as claimed in claim 1, wherein the forward state metrics are calculated from a front end of the code block to a back end of the code block, and the reverse state metrics are calculated from the back end of the code block to the front end of the code block.
  3. A decoder for decoding a code block, the decoder comprising: means for calculating forward state metrics of a first half and then a second half of the code block, means for storing the calculated forward state metrics of the first half of the code block, means for calculating reverse state metrics of a second half and then a first half of the code block, means for storing the reverse state metrics of the second half of the code block, means for feeding the calculated forward state metrics of the second half of the code block and the stored reverse state metrics of the second half of the code block to a first log likelihood ratio calculator, means for feeding the calculated reverse state metrics of the first half of the code block and the stored forward state metrics of the first half of the code block to a second log likelihood ratio calculator, and means for combining the outputs from the first and second log likelihood ratio calculators to determine a decoded output signal.
  4. The decoder as claimed in claim 3, further comprising means for supplying the code block to the means for calculating forward state metrics and the means for calculating reverse state metrics such that the means for calculating forward state metrics of the first half and then the second half of the code block operates from a front end of the code block to the back end of the code block, and the means for calculating reverse state metrics of the second half and then the first half of the code block operates from the back end of the code block to the front end of the code block.
  5. The decoder as claimed in claim 3 wherein the means for calculating forward state metrics comprises a first branch metric calculator and a forward state metric calculator, and the means for calculating reverse state metrics comprises a second branch metric calculator and a reverse state metric calculator.
  6. The decoder as claimed in claim 3 wherein the code blocks include parallel concatenated convolutional codes.
  7. A decoder for decoding a code block comprising a plurality of the decoders as claimed in claims 3 to 6, coupled in parallel.
  8. A mobile telecommunications base station including the decoder of any preceding claim.
  9. A mobile telecommunications terminal including the decoder of any preceding claim.
GB0329910A 2003-12-23 2003-12-23 Telecommunications decoder device Withdrawn GB2409618A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0329910A GB2409618A (en) 2003-12-23 2003-12-23 Telecommunications decoder device
PCT/GB2004/005377 WO2005081410A1 (en) 2003-12-23 2004-12-22 Log-map decoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0329910A GB2409618A (en) 2003-12-23 2003-12-23 Telecommunications decoder device

Publications (2)

Publication Number Publication Date
GB0329910D0 GB0329910D0 (en) 2004-01-28
GB2409618A true GB2409618A (en) 2005-06-29

Family

ID=30776431

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0329910A Withdrawn GB2409618A (en) 2003-12-23 2003-12-23 Telecommunications decoder device

Country Status (2)

Country Link
GB (1) GB2409618A (en)
WO (1) WO2005081410A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2013125784A (en) 2013-06-04 2014-12-10 ЭлЭсАй Корпорейшн DEVICE FOR PROCESSING SIGNALS CARRYING CODES WITH MODULATION OF PARITY BITS

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0512641A1 (en) * 1991-05-08 1992-11-11 ALCATEL BELL Naamloze Vennootschap Decoder device
US5502735A (en) * 1991-07-16 1996-03-26 Nokia Mobile Phones (U.K.) Limited Maximum likelihood sequence detector
US20020048331A1 (en) * 2000-09-12 2002-04-25 Tran Hau Thien Method of normalization of forward metric (alpha) and reverse metric (beta) in a map decoder
WO2003023709A1 (en) * 2001-09-06 2003-03-20 Interdigital Technology Corporation Pipeline architecture for maximum a posteriori (map) decoders
WO2003105001A1 (en) * 2002-06-05 2003-12-18 Arc International Data processor adapted for turbo decoding

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6192501B1 (en) * 1998-08-20 2001-02-20 General Electric Company High data rate maximum a posteriori decoder for segmented trellis code words


Also Published As

Publication number Publication date
WO2005081410A1 (en) 2005-09-01
GB0329910D0 (en) 2004-01-28

Similar Documents

Publication Publication Date Title
US7584409B2 (en) Method and device for alternately decoding data in forward and reverse directions
EP0834222B1 (en) Parallel concatenated tail-biting convolutional code and decoder therefor
US7568147B2 (en) Iterative decoder employing multiple external code error checks to lower the error floor
Bauer et al. Symbol-by-symbol MAP decoding of variable length codes
US6014411A (en) Repetitive turbo coding communication method
Jeanne et al. Joint source-channel decoding of variable-length codes for convolutional codes and turbo codes
US6606724B1 (en) Method and apparatus for decoding of a serially concatenated block and convolutional code
US6028897A (en) Error-floor mitigating turbo code communication method
US6591390B1 (en) CRC-based adaptive halting turbo decoder and method of use
US6812873B1 (en) Method for decoding data coded with an entropic code, corresponding decoding device and transmission system
Chen Iterative soft decoding of Reed-Solomon convolutional concatenated codes
US6487694B1 (en) Method and apparatus for turbo-code decoding a convolution encoded data frame using symbol-by-symbol traceback and HR-SOVA
US6675342B1 (en) Direct comparison adaptive halting decoder and method of use
JP2004343716A (en) Method and decoder for blind detection of transmission format of convolution-encoded signal
Kim et al. Reduction of the number of iterations in turbo decoding using extrinsic information
US7552379B2 (en) Method for iterative decoding employing a look-up table
US7584407B2 (en) Decoder and method for performing decoding operation using map algorithm in mobile communication system
Andersen 'Turbo'coding for deep space applications
EP1094612B1 (en) SOVA Turbo decoder with decreased normalisation complexity
GB2409618A (en) Telecommunications decoder device
US7565594B2 (en) Method and apparatus for detecting a packet error in a wireless communications system with minimum overhead using embedded error detection capability of turbo code
US7096410B2 (en) Turbo-code decoding using variably set learning interval and sliding window
Sklar Turbo code concepts made easy, or how I learned to concatenate and reiterate
Shah et al. Performance analysis of turbo code for CDMA 2000 with convolutional coded IS-95 system in wireless communication system
Shim et al. An efficient iteration decoding stopping criterion for turbo codes

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)