WO2005081410A1 - Log-map decoder - Google Patents

Log-map decoder

Info

Publication number
WO2005081410A1
Authority
WO
WIPO (PCT)
Prior art keywords
code block
reverse
decoder
metrics
calculator
Prior art date
Application number
PCT/GB2004/005377
Other languages
French (fr)
Inventor
Clyde Witchard
Original Assignee
Picochip Designs Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Picochip Designs Limited filed Critical Picochip Designs Limited
Publication of WO2005081410A1 publication Critical patent/WO2005081410A1/en

Links

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3905Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3972Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using sliding window techniques or parallel windows

Definitions

  • the present invention relates to a digital decoder, and in particular to a Maximum A-Posteriori (MAP) decoder for use in a telecommunication system.
  • MAP Maximum A-Posteriori
  • Telecommunication systems generally suffer from a degradation of the signal transmitted over a channel due to noise, attenuation and fading.
  • a data signal is modulated to enable efficient transmission over a channel.
  • Errors in the form of missing or wrong digits can cause significant problems in digital data transmission, and various systems of error detection and control are commonly used, such as cyclic redundancy checks (CRC) and forward error correction (FEC).
  • Error correction circuitry generally comprises an encoder at the transmitter and a decoder at the receiver.
  • One class of encoder converts a sequence of input bits into a code block based on a convolution of the input sequence with a fixed binary pattern or with another signal.
  • the code blocks are fed to a convolutional decoder, such as a MAP decoder.
  • the convolutional encoder can be in one of numerous states (generally dependent upon a constraint length of the code block) at the time of the conversion of the input data to a code block.
  • a MAP decoder calculates and stores various probabilities. In a log MAP decoder (or in a max log MAP decoder) these probabilities are calculated and stored in logarithmic form, and are known as metrics.
  • forward state metrics also known as forward path metrics
  • reverse state metrics also known as reverse path metrics
  • branch metrics branch metrics.
  • forward state metrics represent the probability that the encoder is in a particular state and that a particular channel sequence has been received up to this point.
  • the reverse state metrics represent the probability that, given the encoder is in a particular state, the future received sequence will be some particular given sequence.
  • the branch metrics represent the probability that, given the encoder is in a particular state, it moves to another particular state and that a particular channel sequence is received.
  • the MAP decoder was first formally described by Bahl, Cocke, Jelinek and Raviv (hence the alternative name "the BCJR algorithm").
  • MAP decoders implement a logarithmic version of the MAP algorithm (in which all the metrics are stored and computed in logarithmic form) and are known as "log MAP" decoders.
  • Commonly, decoders utilise a form of the MAP algorithm known as the "max log MAP" algorithm which reduces the computational requirements of the algorithm through use of an approximation.
  • a receiver in a digital communication system generally includes a quantizer device which can have two or more so-called "levels".
  • a two-level (or binary) quantizer produces what is known as a 'hard-decision' output.
  • a quantizer making use of more than two levels produces a 'soft-decision' output. For example, in a system utilising a four-level quantizer in the receiver, for each received data bit, the quantizer output represents a decision on the most likely transmitted binary value combined with a confidence level for that decision.
  • the MAP algorithm is a "soft-output" (or alternatively, "soft-in-soft-out") algorithm, which means that the algorithm outputs more than a two-level representation for each decoded bit. This feature makes MAP decoders particularly suitable for decoding "turbo codes".
  • PCCCs Parallel concatenated convolutional codes
  • PCCCs are turbo codes which are formed by encoding data bits in a first recursive systematic convolutional encoder and then, after passing through an interleaver, the data bits are further encoded by a second systematic convolutional encoder.
  • turbo codes yield coding gains close to theoretical Shannon capacity limits.
  • the decoding of such turbo codes generally requires two decode operations per iteration (and usually several iterations are involved) .
  • Log MAP (or max log MAP) decoders can be implemented with a single processor or several processors in parallel.
  • Single processor implementations are constrained to processing the MAP algorithm sequentially.
  • the processor determines the forward state metrics by scanning forwards through the code block and computing and storing the branch metrics and forward state metrics.
  • the processor then computes the reverse state metrics by scanning backwards through the code block. These reverse state metrics are used in conjunction with the previously stored forward state metrics and branch metrics to calculate final log likelihood ratios (LLRs) for output from the MAP decoder.
  • Parallel processor implementations are able to achieve a higher data throughput than single processor implementations.
  • an LLR of a code block is the log of the ratio of the likelihood that the mth data bit is a logical 1 to the likelihood that it is a logical 0.
  • a typical parallel processor implementation of a MAP decoder known as a "windowed" technique
  • the decoding operation comprises forward and backward recursive calculations and each code block is operated on as a series of overlapping sub-blocks.
  • a first processor calculates forward state metrics for the sub- blocks and a second, faster processor calculates reverse state metrics for the sub-blocks.
  • the second processor starts to calculate the reverse state metrics for the same sub-block.
  • each calculation performed by the second processor commences part of the way through a code block. Therefore, the decoder cannot determine the state that the encoder was in prior to transmission of that particular code block, so the decoder commences each reverse state metric engine run ignoring (for LLR calculation purposes) the state metric calculations produced by the second processor for a predetermined period (generally known as a "trace-back delay"), in order to allow the reverse state metrics to converge close to the correct relative values.
  • a trace-back delay (commonly a period related to 32 or 64 decoded output bits) occurs for each code block and leads to a slow overall processing rate for short code blocks. Further, the trace-back delay results in a requirement for an additional or faster state metric calculator in the reverse direction compared to the forward direction.
  • the present invention seeks to provide a decoder in which problems associated with the trace-back delay are at least alleviated.
  • a method for decoding a code block, comprising the steps of: calculating and storing a forward state metric of a first half of the code block and a reverse state metric of a second half of the code block, in parallel; calculating a forward state metric of the second half of the code block and a reverse state metric of the first half of the code block, in parallel; feeding the calculated forward state metric of the second half of the code block and the stored reverse state metric of the second half of the code block to a first log likelihood ratio calculator; feeding the calculated reverse state metric of the first half of the code block and the stored forward state metric of the first half of the code block to a second log likelihood ratio calculator; and combining the outputs from the first and second log likelihood ratio calculators to determine a decoded output signal.
  • a decoder for decoding a code block, the decoder comprising: means for calculating forward state metrics of a first half and then a second half of the code block, means for storing the calculated forward state metrics of the first half of the code block, means for calculating reverse state metrics of a second half and then a first half of the code block, means for storing the reverse state metrics of the second half of the code block, means for feeding the calculated forward state metrics of the second half of the code block and the stored reverse state metrics of the second half of the code block to a first log likelihood ratio calculator, means for feeding the calculated reverse state metrics of the first half of the code block and the stored forward state metrics of the first half of the code block to a second log likelihood ratio calculator, and means for combining the outputs from the first and second log likelihood ratio calculators to determine a decoded output signal.
  • a mobile telecommunications base station including the decoder of the second aspect of the present invention.
  • a mobile telecommunications terminal including the decoder of the second aspect of the present invention.
  • the present invention provides a MAP decoder with a high data throughput.
  • the trace-back delay is avoided, thus allowing short code blocks to be decoded at the decoder's full data rate. Further advantage is gained through the low computational and control requirements and the efficient use of memory.
  • Figure 1 shows a first embodiment of the decoder of the present invention
  • Figure 2 shows a turbo decoder implementation of the first embodiment of the present invention
  • FIG 3 shows a second embodiment of the present invention.
  • a code block buffer 12 is coupled to a first computation pipeline 14 (also known as a forward state metrics computation pipeline) and a second computation pipeline 16 (also known as a backward state metrics computation pipeline) in parallel.
  • the first computation pipeline 14 comprises a first branch metric calculator 18, also referred to herein as a forward branch metric calculator since it is in a "forward" path of the device, coupled in series to a forward state metric calculator 20.
  • the output of the forward state metric calculator 20 is input to a forward state metric store 26 and also a forward LLR calculator 28, both in the first computation pipeline 14. Furthermore, the forward LLR calculator 28 receives a second input from the forward branch metric calculator 18.
  • the second computation pipeline 16 comprises a second branch metric calculator 22, also referred to herein as a reverse branch metric calculator since it is in a "reverse" path of the device, coupled in series to a reverse state metric calculator 24.
  • the output of the reverse state metric calculator 24 is input to a reverse state metric store 30 and also a reverse LLR calculator 32, both in the second computation pipeline 16.
  • the reverse LLR calculator 32 receives a second input from the reverse branch metric calculator 22.
  • the forward LLR calculator 28 in the first computation pipeline 14 receives a third input from the reverse state metric store 30 in the second computation pipeline 16.
  • the reverse LLR calculator 32 in the second computation pipeline 16 receives a third input from the forward state metric store 26 in the first computation pipeline 14.
  • the reverse LLR calculator 32 and the forward LLR calculator 28 are both coupled to a single LLR buffer 34.
  • a received code block (considered as comprising a first half and a second half) is input to, and held in, the code block buffer 12.
  • the first computation pipeline 14 processes data from the front end of the code block to the back end of the code block.
  • the second computation pipeline 16 processes data from the back end of the code block to the front end of the code block.
  • the first half of the code block is input in sequence to the forward branch metric calculator 18 where branch metrics are determined.
  • Each output from the forward branch metric calculator 18 is input to the forward state metric calculator 20 where it is processed.
  • the second half of the code block is input in sequence to the reverse branch metric calculator 22 where branch metrics are also determined.
  • Each output from the reverse branch metric calculator 22 is input to the reverse state metric calculator 24 where it is processed.
  • Each output from the forward state metric calculator 20 is stored in the forward state metric store 26 and each output from the reverse state metric calculator 24 is stored in the reverse state metric store 30.
  • the second half of the code block held in the code block buffer 12 is processed in the first computation pipeline 14 and the first half of the code block is processed in the second computation pipeline 16, as described above, up to the point where the signal is processed by the forward and reverse state metric calculators 20, 24.
  • the signal output from the forward state metric calculator 20 is fed to the forward LLR calculator 28.
  • the signal output from the reverse state metric calculator 24 is fed to the reverse LLR calculator 32.
  • the forward LLR calculator 28 then utilizes the stored input signal (stored during the first stage of operation) from the reverse state metric store 30, an input signal from the forward branch metric calculator 18 and the input signal from the forward state metric calculator 20 in order to immediately determine the forward LLR.
  • the reverse LLR calculator 32 utilizes the stored input signal (stored during the first stage of operation) from the forward state metric store 26, an input signal from the reverse branch metric calculator 22 and the input signal from the reverse state metric calculator 24 in order to immediately determine the reverse LLR. Both the forward and the reverse LLRs are held in the LLR buffer 34, prior to forming a decoder output signal.
  • the decoder of Figure 1 provides an increased operational speed in comparison to a conventional decoder, because a trace-back delay period is avoided. It is only necessary to store the metrics calculated for one half of each code block. The metrics calculated for the other half of each code block are not stored but passed immediately to the forward and reverse LLR calculators 28, 32.
  • common reference numerals have been employed where common elements have the same function as in the log MAP decoder of Figure 1. Modification is found in the LLR buffer 38 which additionally functions as a block interleaver/deinterleaver.
  • An integrated address generator 36 feeds both the LLR buffer 38 and the code block buffer 12 simultaneously.
  • the LLR buffer 38 has a second output line coupled to the forward branch metric calculator 18 and the forward LLR calculator 28, and a third output line coupled to the reverse branch metric calculator 22 and the reverse LLR calculator 32. Further, the code block buffer 12 is coupled directly to the forward and reverse LLR calculators 28, 32.
  • the modified system of Figure 2 functions in a similar way to the system depicted in Figure 1.
  • the processed signal output from the reverse LLR calculator 32 and forward LLR calculator 28 is written to the LLR buffer 38 in linear order and read out in permuted order during an interleaving stage .
  • the processed signal output from the LLR calculators 28, 32 is written to the LLR buffer 38 in permuted order and read out in linear order during a deinterleaving stage.
  • an a-priori LLR value is transferred from the LLR buffer 38 to both the reverse LLR calculator 32 and the forward LLR calculator 28.
  • the LLR buffer 38 acts alternately as a block interleaver and a block deinterleaver, thereby avoiding the necessity for double buffering.
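The alternating single-buffer behaviour described above can be sketched in a few lines of Python. This is only an illustration of the principle: the permutation below is an arbitrary stand-in, not the real turbo interleaver pattern, and `buf` stands in for the LLR buffer 38.

```python
# Single buffer acting alternately as block interleaver and deinterleaver.
# The permutation is an arbitrary stand-in for the turbo interleaver pattern.
perm = [2, 0, 3, 1]

buf = [None] * len(perm)

def interleave(data):
    """Write in linear order, read out in permuted order."""
    for i, x in enumerate(data):
        buf[i] = x
    return [buf[p] for p in perm]

def deinterleave(data):
    """Write in permuted order, read out in linear order."""
    for i, x in enumerate(data):
        buf[perm[i]] = x
    return list(buf)

llrs = [10, 20, 30, 40]
# One buffer suffices because the two roles alternate; no double buffering.
assert deinterleave(interleave(llrs)) == llrs
```

Because the same buffer is written completely before it is read back in the opposite order, the interleaving and deinterleaving stages can share one memory rather than requiring two.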
  • Data throughput can be increased for the two-pipeline decoders of Figure 1 and Figure 2 by extending the architecture to comprise 2n pipelines (where n is a positive integer).
  • Figure 3 illustrates a four pipeline MAP decoder architecture, which essentially comprises two of the systems illustrated in Figure 1, operating in parallel. Again, common reference numerals have been employed where common elements have the same function as in the log MAP decoder of Figure 1.
  • a code block buffer 48 is coupled in parallel to first, second, third and fourth computation pipelines 14, 16, 44, 46.
  • the first and second computation pipelines 14, 16 comprise the same elements as described in relation to Figure 1.
  • the third and fourth computation pipelines 44, 46 also comprise the same elements as in the first and second computation pipelines 14, 16.
  • the outputs of each of the first, second, third and fourth computation pipelines 14, 16, 44, 46 are coupled in parallel to a LLR buffer 50.
  • the first computation pipeline 14 operates on a first quarter of the code block starting from the front end of the block, and calculates and stores the forward state metrics for the first quarter only.
  • the second computation pipeline 16 operates on a second quarter of the code block starting from the middle of the block, and calculates and stores reverse state metrics for the second quarter only.
  • the third computation pipeline 44 operates on a third quarter of the code block starting from the middle of the block, and calculates and stores forward state metrics for the third quarter only.
  • the fourth computation pipeline 46 operates on a fourth quarter of the code block starting from the back end of the block, and calculates and stores reverse state metrics for the fourth quarter only.
  • the second and third computation pipelines 16, 44 start to operate before the first and fourth computation pipelines 14, 46, and run for a predetermined "trace-back delay" period before reaching the centre of the code block.
  • the four-pipeline architecture of Figure 3 allows a two-fold increase in speed of data throughput in comparison to the two-pipeline decoder of Figure 1 (ignoring the effects of the trace-back delay). Further extensions of the architecture to comprise 2n pipelines (where n is a positive integer) will result in an n-fold increase in speed of data throughput.
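The segment-and-direction assignment used in the four-pipeline case generalises naturally. The sketch below is our own illustration of that generalisation (the function name, indices and printed layout are not from the patent): a block is split into 2n equal segments, with forward and reverse pipelines alternating so that each forward/reverse pair meets as in the two-pipeline scheme.

```python
# Illustrative segment/direction plan for a 2n-pipeline decoder (sketch only).
def pipeline_plan(n, block_len):
    """Split a block into 2n segments and alternate forward/reverse pipelines."""
    seg = block_len // (2 * n)
    plan = []
    for i in range(2 * n):
        direction = "forward" if i % 2 == 0 else "reverse"
        plan.append((i * seg, (i + 1) * seg, direction))
    return plan

# n = 2 reproduces the Figure 3 quarter assignment:
# Q1 forward, Q2 reverse, Q3 forward, Q4 reverse.
for start, end, d in pipeline_plan(2, 256):
    print(f"pipeline over [{start}, {end}): {d} state metrics")
```

For n = 2 this matches the quarter assignment of Figure 3; the interior pipelines that start mid-block still need the trace-back warm-up noted above.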
  • processors referred to above may be implemented in hardware or software.
  • the connection between the forward branch metric calculator 18 and the forward LLR calculator 28, and the connection between the reverse branch metric calculator 22 and the reverse LLR calculator 32 are not necessary for the correct functioning of the decoder.
  • the present invention provides a convolutional decoder which has significant advantages over conventional MAP decoders.

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Error Detection And Correction (AREA)

Abstract

A log-MAP decoder including first and second state metric computation pipelines in parallel. The first computation pipeline includes a forward branch metric calculator and a forward state metric calculator in series. The second computation pipeline includes a reverse branch metric calculator and a reverse state metric calculator in series. Each of the first and second computation pipelines processes one half of a code block and the forward and backward metrics are then stored. The remaining halves of the code block are subsequently processed in the first and second computation pipelines and the log likelihood ratios (LLRs) are then calculated and output as a decoded signal. Advantage is gained through the avoidance of acquisition processing for the forward and backward recursions and of high memory requirements.

Description

LOG-MAP DECODER
The present invention relates to a digital decoder, and in particular to a Maximum A-Posteriori (MAP) decoder for use in a telecommunication system.
Telecommunication systems generally suffer from a degradation of the signal transmitted over a channel due to noise, attenuation and fading. The utilisation of digital signals rather than analogue signals affords advantages such as improved immunity to channel noise and interference, increased channel data capacity and improved security through the use of encryption. A data signal is modulated to enable efficient transmission over a channel.
Errors in the form of missing or wrong digits can cause significant problems in digital data transmission, and various systems of error detection and control are commonly used, such as cyclic redundancy checks (CRC) and forward error correction (FEC). Error correction circuitry generally comprises an encoder at the transmitter and a decoder at the receiver.
One class of encoder, known as a convolutional encoder, converts a sequence of input bits into a code block based on a convolution of the input sequence with a fixed binary pattern or with another signal. After transmission, the code blocks are fed to a convolutional decoder, such as a MAP decoder. The convolutional encoder can be in one of numerous states (generally dependent upon a constraint length of the code block) at the time of the conversion of the input data to a code block. A MAP decoder calculates and stores various probabilities. In a log MAP decoder (or in a max log MAP decoder) these probabilities are calculated and stored in logarithmic form, and are known as metrics. There are three types of metrics, namely forward state metrics (also known as forward path metrics), reverse state metrics (also known as reverse path metrics) and branch metrics. The forward state metrics represent the probability that the encoder is in a particular state and that a particular channel sequence has been received up to this point. The reverse state metrics represent the probability that, given the encoder is in a particular state, the future received sequence will be some particular given sequence. The branch metrics represent the probability that, given the encoder is in a particular state, it moves to another particular state and that a particular channel sequence is received.
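In logarithmic form, the three metric types are linked by the standard forward-backward recursions of the MAP algorithm. The sketch below is a simplified illustration, not the patent's hardware: it uses a toy two-state trellis with made-up branch metric values, and builds the forward and reverse state metrics from the branch metrics using the max* (Jacobian logarithm) operation of log-MAP decoding.

```python
import math

def max_star(a, b):
    """Jacobian logarithm: exact log-domain addition, max(a, b) plus a correction."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Toy 2-state trellis: gamma[k][s][s2] is the branch metric for moving from
# state s to state s2 at trellis step k (values are illustrative only).
gamma = [
    [[0.1, -1.2], [-0.8, 0.3]],
    [[0.4, -0.5], [-1.1, 0.2]],
    [[-0.3, 0.6], [0.5, -0.9]],
]
K, S = len(gamma), 2

# Forward recursion: alpha[k+1][s2] = max* over s of (alpha[k][s] + gamma[k][s][s2]).
alpha = [[0.0, -1e9]] + [[0.0] * S for _ in range(K)]  # encoder starts in state 0
for k in range(K):
    for s2 in range(S):
        terms = [alpha[k][s] + gamma[k][s][s2] for s in range(S)]
        alpha[k + 1][s2] = max_star(terms[0], terms[1])

# Reverse recursion: beta[k][s] = max* over s2 of (beta[k+1][s2] + gamma[k][s][s2]).
beta = [[0.0] * S for _ in range(K)] + [[0.0, 0.0]]  # final state unknown
for k in reversed(range(K)):
    for s in range(S):
        terms = [beta[k + 1][s2] + gamma[k][s][s2] for s2 in range(S)]
        beta[k][s] = max_star(terms[0], terms[1])
```

The forward pass depends only on earlier steps and the reverse pass only on later steps, which is what allows the two recursions to run in parallel from opposite ends of a block.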
The MAP decoder was first formally described by Bahl, Cocke, Jelinek and Raviv (hence the alternative name "the BCJR algorithm") in "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, pp. 284-287, March 1974. Some MAP decoders implement a logarithmic version of the MAP algorithm (in which all the metrics are stored and computed in logarithmic form) and are known as "log MAP" decoders. Commonly, decoders utilise a form of the MAP algorithm known as the "max log MAP" algorithm which reduces the computational requirements of the algorithm through use of an approximation. A receiver in a digital communication system generally includes a quantizer device which can have two or more so-called "levels". A two-level (or binary) quantizer produces what is known as a 'hard-decision' output. A quantizer making use of more than two levels produces a 'soft-decision' output. For example, in a system utilising a four-level quantizer in the receiver, for each received data bit, the quantizer output represents a decision on the most likely transmitted binary value combined with a confidence level for that decision.
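The "max log MAP" approximation mentioned above replaces the exact Jacobian logarithm, max*(a, b) = max(a, b) + ln(1 + e^(-|a-b|)), with a plain maximum. A minimal sketch of the two operations and the error between them (the error is bounded by ln 2 and vanishes as the metrics move apart):

```python
import math

def max_star(a, b):
    """Exact log-domain addition used by a log-MAP decoder."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """max-log-MAP approximation: the correction term is simply dropped."""
    return max(a, b)

# Worst case is a == b, where the dropped correction equals ln(2) ~= 0.693;
# the error shrinks rapidly as |a - b| grows.
for a, b in [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)]:
    print(a, b, max_star(a, b) - max_log(a, b))
```

Dropping the correction trades a small loss in decoding accuracy for a large saving in computation, which is why the approximation is common in hardware.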
The MAP algorithm is a "soft-output" (or alternatively, "soft-in-soft-out") algorithm, which means that the algorithm outputs more than a two-level representation for each decoded bit. This feature makes MAP decoders particularly suitable for decoding "turbo codes".
Parallel concatenated convolutional codes (PCCCs) are turbo codes which are formed by encoding data bits in a first recursive systematic convolutional encoder and then, after passing through an interleaver, the data bits are further encoded by a second systematic convolutional encoder. Such turbo codes yield coding gains close to theoretical Shannon capacity limits. The decoding of such turbo codes generally requires two decode operations per iteration (and usually several iterations are involved).
Log MAP (or max log MAP) decoders can be implemented with a single processor or several processors in parallel. Single processor implementations are constrained to processing the MAP algorithm sequentially. The processor determines the forward state metrics by scanning forwards through the code block and computing and storing the branch metrics and forward state metrics. The processor then computes the reverse state metrics by scanning backwards through the code block. These reverse state metrics are used in conjunction with the previously stored forward state metrics and branch metrics to calculate final log likelihood ratios (LLRs) for output from the MAP decoder. Parallel processor implementations are able to achieve a higher data throughput than single processor implementations.
An LLR of a code block is the log of the ratio of the likelihood that the mth data bit is a logical 1 to the likelihood that it is a logical 0.
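In symbols (our notation, not the patent's), with u_m the mth data bit and y the received sequence, the definition above reads:

```latex
L(u_m) = \ln \frac{P(u_m = 1 \mid \mathbf{y})}{P(u_m = 0 \mid \mathbf{y})}
```

A positive L(u_m) favours a decoded 1, a negative value favours a 0, and the magnitude expresses the decoder's confidence in that decision.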
In a typical parallel processor implementation of a MAP decoder, known as a "windowed" technique, the decoding operation comprises forward and backward recursive calculations and each code block is operated on as a series of overlapping sub-blocks. (For example, see S.S. Pietrobon, "Efficient implementation of continuous MAP decoders and a synchronization technique for turbo decoders", Proc. Int. Symp. Information Theory Appl., Victoria, B.C., Canada, 1996, pp. 586-589.) A first processor calculates forward state metrics for the sub-blocks and a second, faster processor calculates reverse state metrics for the sub-blocks. (It is also known to utilise two separate processors, each the same speed as the first processor, in place of the second processor.) As soon as the first processor has calculated the forward state metrics for a particular sub-block, the second processor starts to calculate the reverse state metrics for the same sub-block. However, each calculation performed by the second processor commences part of the way through a code block. Therefore, the decoder cannot determine the state that the encoder was in prior to transmission of that particular code block, so the decoder commences each reverse state metric engine run ignoring (for LLR calculation purposes) the state metric calculations produced by the second processor for a predetermined period (generally known as a "trace-back delay"), in order to allow the reverse state metrics to converge close to the correct relative values.
The main disadvantages associated with the operation of this type of parallel processor implementation of a MAP decoder stem from the trace-back delay.
Specifically, a trace-back delay ( commonly a period related to 32 or 64 decoded output bits) occurs for each code block and leads to a slow overall processing rate for short code blocks. Further, the trace-back delay results in a requirement for an additional or faster state metric calculator in the reverse direction compared to the forward direction.
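The overhead of the trace-back delay is easy to quantify with a back-of-the-envelope sketch. The block, window and delay sizes below are illustrative choices, not figures from the patent; they simply show how warm-up work that is computed and then discarded inflates the reverse-direction workload.

```python
# Conventional windowed schedule (illustrative sizes, not from the patent).
BLOCK = 256          # code block length in trellis steps
WINDOW = 64          # sub-block (window) size
TRACE_BACK = 32      # warm-up steps discarded from each reverse recursion run

useful_steps = BLOCK                           # one reverse step per output bit
wasted_steps = (BLOCK // WINDOW) * TRACE_BACK  # warm-up work, thrown away

print(f"reverse-direction work: {useful_steps + wasted_steps} steps "
      f"for {useful_steps} outputs "
      f"({100 * wasted_steps / useful_steps:.0f}% overhead)")
```

With these sizes the reverse engine does 50% extra work, which is exactly the overhead that motivates the faster (or additional) reverse state metric calculator mentioned above, and which grows relatively worse for short code blocks.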
Therefore, the present invention seeks to provide a decoder in which problems associated with the trace-back delay are at least alleviated.
According to a first aspect of the present invention there is provided a method for decoding a code block, the method comprising the steps of: calculating and storing a forward state metric of a first half of the code block and a reverse state metric of a second half of the code block, in parallel; calculating a forward state metric of the second half of the code block and a reverse state metric of the first half of the code block, in parallel; feeding the calculated forward state metric of the second half of the code block and the stored reverse state metric of the second half of the code block to a first log likelihood ratio calculator; feeding the calculated reverse state metric of the first half of the code block and the stored forward state metric of the first half of the code block to a second log likelihood ratio calculator; and combining the outputs from the first and second log likelihood ratio calculators to determine a decoded output signal.
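The two-phase schedule of this first aspect can be sketched abstractly. In the sketch below, `step_forward` and `step_reverse` are placeholder recursions standing in for the real state metric calculators (they just count steps), so the code shows only the scheduling and storage pattern, not actual metric arithmetic.

```python
# Two-phase half-block schedule (sketch; placeholder recursions only).
def step_forward(alpha, k):
    return alpha + 1      # stands in for the forward state metric update

def step_reverse(beta, k):
    return beta + 1       # stands in for the reverse state metric update

N = 8                     # code block length (illustrative)
half = N // 2
alpha_store, beta_store = {}, {}
llrs = [None] * N

# Phase 1: forward over the first half and reverse over the second half,
# in parallel; both sets of results are stored.
alpha, beta = 0, 0
for k in range(half):
    alpha = step_forward(alpha, k)
    alpha_store[k] = alpha
    beta = step_reverse(beta, N - 1 - k)
    beta_store[N - 1 - k] = beta

# Phase 2: the continuing forward recursion meets the stored reverse metrics
# (first LLR calculator) while the continuing reverse recursion meets the
# stored forward metrics (second LLR calculator). No trace-back delay is
# needed: both recursions started from a known block boundary.
for k in range(half, N):
    alpha = step_forward(alpha, k)
    llrs[k] = (alpha, beta_store[k])          # first LLR calculator
    j = N - 1 - k
    beta = step_reverse(beta, j)
    llrs[j] = (alpha_store[j], beta)          # second LLR calculator

assert all(v is not None for v in llrs)
```

Note that only half of each block's metrics are ever stored, matching the memory saving claimed later in the description.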
According to a second aspect of the present invention there is provided a decoder for decoding a code block, the decoder comprising: means for calculating forward state metrics of a first half and then a second half of the code block, means for storing the calculated forward state metrics of the first half of the code block, means for calculating reverse state metrics of a second half and then a first half of the code block, means for storing the reverse state metrics of the second half of the code block, means for feeding the calculated forward state metrics of the second half of the code block and the stored reverse state metrics of the second half of the code block to a first log likelihood ratio calculator, means for feeding the calculated reverse state metrics of the first half of the code block and the stored forward state metrics of the first half of the code block to a second log likelihood ratio calculator, and means for combining the outputs from the first and second log likelihood ratio calculators to determine a decoded output signal.
According to a third aspect of the present invention there is provided a mobile telecommunications base station including the decoder of the second aspect of the present invention.
According to a fourth aspect of the present invention there is provided a mobile telecommunications terminal including the decoder of the second aspect of the present invention.
Advantageously, the present invention provides a MAP decoder with a high data throughput. The trace-back delay is avoided, thus allowing short code blocks to be decoded at the decoder's full data rate. Further advantage is gained through the low computational and control requirements and the efficient use of memory.
For a better understanding of the present invention, and to show how it may be put into effect, reference will now be made, by way of example, to the accompanying drawings in which:
Figure 1 shows a first embodiment of the decoder of the present invention;
Figure 2 shows a turbo decoder implementation of the first embodiment of the present invention; and
Figure 3 shows a second embodiment of the present invention.

In the log MAP decoder 10 of Figure 1, a code block buffer 12 is coupled to a first computation pipeline 14 (also known as a forward state metrics computation pipeline) and a second computation pipeline 16 (also known as a backward state metrics computation pipeline) in parallel. The first computation pipeline 14 comprises a first branch metric calculator 18, also referred to herein as a forward branch metric calculator since it is in a "forward" path of the device, coupled in series to a forward state metric calculator 20. The output of the forward state metric calculator 20 is input to a forward state metric store 26 and also a forward LLR calculator 28, both in the first computation pipeline 14. Furthermore, the forward LLR calculator 28 receives a second input from the forward branch metric calculator 18.
The second computation pipeline 16 comprises a second branch metric calculator 22, also referred to herein as a reverse branch metric calculator since it is in a "reverse" path of the device, coupled in series to a reverse state metric calculator 24. In a similar manner, the output of the reverse state metric calculator 24 is input to a reverse state metric store 30 and also a reverse LLR calculator 32, both in the second computation pipeline 16. Furthermore, the reverse LLR calculator 32 receives a second input from the reverse branch metric calculator 22.
Importantly, the forward LLR calculator 28 in the first computation pipeline 14 receives a third input from the reverse state metric store 30 in the second computation pipeline 16. Similarly, the reverse LLR calculator 32 in the second computation pipeline 16 receives a third input from the forward state metric store 26 in the first computation pipeline 14. Finally, the reverse LLR calculator 32 and the forward LLR calculator 28 are both coupled to a single LLR buffer 34.
In operation, a received code block (considered as comprising a first half and a second half) is input to, and held in, the code block buffer 12. The first computation pipeline 14 processes data from the front end of the code block to the back end of the code block. Simultaneously, the second computation pipeline 16 processes data from the back end of the code block to the front end of the code block.
Specifically, in a first stage of operation, the first half of the code block is input in sequence to the forward branch metric calculator 18 where branch metrics are determined. Each output from the forward branch metric calculator 18 is input to the forward state metric calculator 20 where it is processed. The second half of the code block is input in sequence to the reverse branch metric calculator 22 where branch metrics are also determined. Each output from the reverse branch metric calculator 22 is input to the reverse state metric calculator 24 where it is processed. Each output from the forward state metric calculator 20 is stored in the forward state metric store 26 and each output from the reverse state metric calculator 24 is stored in the reverse state metric store 30.
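In log MAP decoders generally, the forward state metric recursion performed by a calculator such as element 20 is commonly realised in the log domain using the Jacobian logarithm (the "max*" operation). The sketch below is illustrative only and assumes a generic trellis with `gamma[s'][s]` branch metrics; it is not taken from the patent itself.

```python
import math

def max_star(a, b):
    """Jacobian logarithm: ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def forward_step(alpha_prev, gamma):
    """One forward state-metric update in the log domain.

    alpha_prev[s'] : log-domain state metrics of the previous trellis stage.
    gamma[s'][s]   : log-domain branch metric for the transition s' -> s.
    Returns alpha[s] for the current stage."""
    n_states = len(alpha_prev)
    alpha = []
    for s in range(n_states):
        acc = alpha_prev[0] + gamma[0][s]
        for sp in range(1, n_states):
            acc = max_star(acc, alpha_prev[sp] + gamma[sp][s])
        alpha.append(acc)
    return alpha
```

The reverse state metric calculator 24 applies the same update with the transition direction reversed; replacing `max_star` with a plain `max` yields the max log MAP variant mentioned later in the description.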
In a second stage of operation, the second half of the code block held in the code block buffer 12 is processed in the first computation pipeline 14 and the first half of the code block is processed in the second computation pipeline 16, as described above, up to the point where the signal is processed by the forward and reverse state metric calculators 20, 24. Here, the signal output from the forward state metric calculator 20 is fed to the forward LLR calculator 28. Similarly, the signal output from the reverse state metric calculator 24 is fed to the reverse LLR calculator 32.
The forward LLR calculator 28 then utilizes the stored input signal (stored during the first stage of operation) from the reverse state metric store 30, an input signal from the forward branch metric calculator 18 and the input signal from the forward state metric calculator 20 in order to immediately determine the forward LLR. The reverse LLR calculator 32 utilizes the stored input signal (stored during the first stage of operation) from the forward state metric store 26, an input signal from the reverse branch metric calculator 22 and the input signal from the reverse state metric calculator 24 in order to immediately determine the reverse LLR. Both the forward and the reverse LLRs are held in the LLR buffer 34, prior to forming a decoder output signal.
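The combination performed by an LLR calculator can be illustrated in its max-log form, where the LLR for a trellis step is the difference between the best metric sum over transitions labelled with bit 1 and the best over transitions labelled with bit 0. This sketch is a generic illustration under assumed data structures (`transitions` as `(s_prev, s_next, bit)` tuples), not the specific circuit of elements 28 and 32.

```python
def llr_max_log(alpha, beta, gamma, transitions):
    """Max-log approximation of the a-posteriori LLR for one trellis step.

    alpha[s_prev]          : forward state metric entering the step.
    beta[s_next]           : reverse state metric leaving the step.
    gamma[(s_prev, s_next)]: branch metric of the transition.
    transitions            : iterable of (s_prev, s_next, bit) tuples."""
    best = {0: float("-inf"), 1: float("-inf")}
    for sp, sn, bit in transitions:
        metric = alpha[sp] + gamma[(sp, sn)] + beta[sn]
        if metric > best[bit]:
            best[bit] = metric
    return best[1] - best[0]
```

Because one of `alpha` or `beta` comes from a store and the other directly from a state metric calculator, the LLR is available as soon as the second sweep reaches the position, which is the source of the decoder's avoided trace-back delay.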
Thus, the decoder of Figure 1 provides an increased operational speed in comparison to a conventional decoder, because a trace-back delay period is avoided. It is only necessary to store the metrics calculated for one half of each code block. The metrics calculated for the other half of each code block are not stored but passed immediately to the forward and reverse LLR calculators 28, 32.

In the turbo code decoder of Figure 2, common reference numerals have been employed where common elements have the same function as in the log MAP decoder of Figure 1. Modification is found in the LLR buffer 38 which additionally functions as a block interleaver/deinterleaver. An integrated address generator 36 feeds both the LLR buffer 38 and the code block buffer 12 simultaneously. Also, the LLR buffer 38 has a second output line coupled to the forward branch metric calculator 18 and the forward LLR calculator 28, and a third output line coupled to the reverse branch metric calculator 22 and the reverse LLR calculator 32. Further, the code block buffer 12 is coupled directly to the forward and reverse LLR calculators 28, 32.
In operation, the modified system of Figure 2 functions in a similar way to the system depicted in Figure 1. The processed signal output from the reverse LLR calculator 32 and forward LLR calculator 28 is written to the LLR buffer 38 in linear order and read out in permuted order during an interleaving stage. The processed signal output from the LLR calculators 28, 32 is written to the LLR buffer 38 in permuted order and read out in linear order during a deinterleaving stage. When the second half of the code block is processed in the first computation pipeline 14 and the first half of the code block is processed in the second computation pipeline 16, an a-priori LLR value is transferred from the LLR buffer 38 to both the reverse LLR calculator 32 and the forward LLR calculator 28. Consequently, an extrinsic LLR value, output from both the forward and reverse LLR calculators 28, 32 is written immediately back into the same interleaver address. In this way, the LLR buffer 38 acts alternately as a block interleaver and a block deinterleaver, thereby avoiding the necessity for double buffering.
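The alternating use of a single buffer as interleaver and then deinterleaver can be modelled as follows. This is a simplified sketch: `perm` is a hypothetical permutation standing in for the turbo code interleaver pattern, and the address toggling stands in for the integrated address generator 36.

```python
class AlternatingInterleaver:
    """Single buffer used alternately as block interleaver and
    block deinterleaver (illustrative model only)."""

    def __init__(self, perm):
        self.perm = perm                  # assumed permutation, length = block size
        self.buf = [0.0] * len(perm)
        self.interleaving = True          # toggles each half-iteration

    def write(self, i, value):
        # Interleaving stage: write in linear order (read out permuted).
        # Deinterleaving stage: write in permuted order (read out linear).
        addr = i if self.interleaving else self.perm[i]
        self.buf[addr] = value

    def read(self, i):
        addr = self.perm[i] if self.interleaving else i
        return self.buf[addr]

    def toggle(self):
        self.interleaving = not self.interleaving
```

Writing a block, reading it out permuted, then writing that permuted stream back in the deinterleaving stage recovers the original order, so no second buffer is needed.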
Data throughput can be increased for the two pipeline decoders of Figure 1 and Figure 2 by extending the architecture to comprise 2n pipelines (where n is a positive integer). Figure 3 illustrates a four pipeline MAP decoder architecture, which essentially comprises two of the systems illustrated in Figure 1, operating in parallel. Again, common reference numerals have been employed where common elements have the same function as in the log MAP decoder of Figure 1.
In the log MAP decoder 42 of Figure 3, a code block buffer 48 is coupled in parallel to first, second, third and fourth computation pipelines 14, 16, 44, 46. The first and second computation pipelines 14, 16 comprise the same elements as described in relation to Figure 1. The third and fourth computation pipelines 44, 46 also comprise the same elements as in the first and second computation pipelines 14, 16. The outputs of each of the first, second, third and fourth computation pipelines 14, 16, 44, 46 are coupled in parallel to a LLR buffer 50.
In operation, the first computation pipeline 14 operates on a first quarter of the code block starting from the front end of the block, and calculates and stores the forward state metrics for the first quarter only. The second computation pipeline 16 operates on a second quarter of the code block starting from the middle of the block, and calculates and stores reverse state metrics for the second quarter only. The third computation pipeline 44 operates on a third quarter of the code block starting from the middle of the block, and calculates and stores forward state metrics for the third quarter only. The fourth computation pipeline 46 operates on a fourth quarter of the code block starting from the back end of the block, and calculates and stores reverse state metrics for the fourth quarter only.
In order to allow the state metrics of the second and third computation pipelines 16, 44 to converge close to their correct relative values, these pipelines start to operate before the first and fourth computation pipelines 14, 46, and run for a predetermined "trace-back delay" period before reaching the centre of the code block.
The four pipeline architecture of Figure 3 allows a two-fold increase in speed of data throughput in comparison to the two pipeline decoder of Figure 1 (ignoring the effects of the trace-back delay). Further extensions of the architecture to comprise 2n pipelines (where n is a positive integer) will result in an n-fold increase in speed of data throughput.
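The quarter-wise assignment of Figure 3, generalised to 2n pipelines, can be sketched as follows. This is illustrative only: it assumes the block length divides evenly among the pipelines and ignores the trace-back warm-up period of the inner pipelines.

```python
def pipeline_segments(block_len, n_pipelines):
    """Assign each of 2n pipelines a code block segment and a sweep direction.

    Even-indexed pipelines sweep forward from the front of their segment;
    odd-indexed pipelines sweep in reverse from the back of theirs,
    mirroring the quarter assignment described for Figure 3."""
    assert n_pipelines % 2 == 0 and block_len % n_pipelines == 0
    seg = block_len // n_pipelines
    plan = []
    for p in range(n_pipelines):
        start, end = p * seg, (p + 1) * seg
        direction = "forward" if p % 2 == 0 else "reverse"
        plan.append((start, end, direction))
    return plan
```

For a block of length 8 and four pipelines this reproduces the Figure 3 arrangement: forward over the first quarter, reverse over the second, forward over the third, reverse over the fourth.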
It will be apparent to the skilled person that the above described system architectures are not exhaustive and variations on these structures may be employed to achieve a similar result whilst employing the same inventive concept. Specifically, the present inventive concept can be implemented for a MAP decoder, a log MAP decoder or a max log MAP decoder. The extended architectures described above can also be implemented with both PCCC and SCCC (Serial concatenated convolutional codes) turbo code decoder pipelines.
Further, the processors referred to above may be implemented in hardware or software. With reference to Figure 1, where the invention is implemented for a max log MAP decoder, the connection between the forward branch metric calculator 18 and the forward LLR calculator 28, and the connection between the reverse branch metric calculator 22 and the reverse LLR calculator 32 are not necessary for the correct functioning of the decoder.
It can therefore be seen that the present invention provides a convolutional decoder which has significant advantages over conventional MAP decoders.

Claims

1. A method for decoding a code block, the method comprising the steps of: calculating and storing a forward state metric of a first half of the code block and a reverse state metric of a second half of the code block, in parallel; calculating a forward state metric of a second half of the code block and a reverse state metric of a first half of the code block, in parallel; feeding the calculated forward state metric of the second half of the code block and the stored reverse state metric of the second half of the code block to a first log likelihood ratio calculator; feeding the calculated reverse state metric of the first half of the code block and the stored forward state metric of the first half of the code block to a second log likelihood ratio calculator; and combining the outputs from the first and second log likelihood ratio calculators to determine a decoded output signal.
2. The method as claimed in claim 1, wherein the forward state metrics are calculated from a front end of the code block to a back end of the code block, and the reverse state metrics are calculated from the back end of the code block to the front end of the code block.
3. A decoder for decoding a code block, the decoder comprising: means for calculating forward state metrics of a first half and then a second half of the code block, means for storing the calculated forward state metrics of the first half of the code block, means for calculating reverse state metrics of a second half and then a first half of the code block, means for storing the reverse state metrics of the second half of the code block, means for feeding the calculated forward state metrics of the second half of the code block and the stored reverse state metrics of the second half of the code block to a first log likelihood ratio calculator, means for feeding the calculated reverse state metrics of the first half of the code block and the stored forward state metrics of the first half of the code block to a second log likelihood ratio calculator, and means for combining the outputs from the first and second log likelihood ratio calculators to determine a decoded output signal.
4. The decoder as claimed in claim 3, further comprising means for supplying the code block to the means for calculating forward state metrics and the means for calculating reverse state metrics such that the means for calculating forward state metrics of the first half and then the second half of the code block operates from a front end of the code block to the back end of the code block, and the means for calculating reverse state metrics of the second half and then the first half of the code block operates from the back end of the code block to the front end of the code block.
5. The decoder as claimed in claim 3 wherein the means for calculating forward state metrics comprises a first branch metric calculator and a forward state metric calculator, and the means for calculating reverse state metrics comprises a second branch metric calculator and a reverse state metric calculator.

6. The decoder as claimed in claim 3 wherein the code blocks include parallel concatenated convolutional codes.
7. A decoder for decoding a code block comprising a plurality of the decoders as claimed in claims 3 to 6, coupled in parallel.
8. A mobile telecommunications base station including the decoder of any preceding claim.
9. A mobile telecommunications terminal including the decoder of any preceding claim.
PCT/GB2004/005377 2003-12-23 2004-12-22 Log-map decoder WO2005081410A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0329910A GB2409618A (en) 2003-12-23 2003-12-23 Telecommunications decoder device
GB0329910.4 2003-12-23

Publications (1)

Publication Number Publication Date
WO2005081410A1 true WO2005081410A1 (en) 2005-09-01

Family

ID=30776431

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2004/005377 WO2005081410A1 (en) 2003-12-23 2004-12-22 Log-map decoder

Country Status (2)

Country Link
GB (1) GB2409618A (en)
WO (1) WO2005081410A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9337866B2 (en) 2013-06-04 2016-05-10 Avago Technologies General Ip (Singapore) Pte. Ltd. Apparatus for processing signals carrying modulation-encoded parity bits

Citations (1)

Publication number Priority date Publication date Assignee Title
US6192501B1 (en) * 1998-08-20 2001-02-20 General Electric Company High data rate maximum a posteriori decoder for segmented trellis code words

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
BE1004814A3 (en) * 1991-05-08 1993-02-02 Bell Telephone Mfg Decoder.
US5502735A (en) * 1991-07-16 1996-03-26 Nokia Mobile Phones (U.K.) Limited Maximum likelihood sequence detector
US7242726B2 (en) * 2000-09-12 2007-07-10 Broadcom Corporation Parallel concatenated code with soft-in soft-out interactive turbo decoder
US6961921B2 (en) * 2001-09-06 2005-11-01 Interdigital Technology Corporation Pipeline architecture for maximum a posteriori (MAP) decoders
US6718504B1 (en) * 2002-06-05 2004-04-06 Arc International Method and apparatus for implementing a data processor adapted for turbo decoding

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
US6192501B1 (en) * 1998-08-20 2001-02-20 General Electric Company High data rate maximum a posteriori decoder for segmented trellis code words

Non-Patent Citations (1)

Title
WORM A ET AL: "VLSI architectures for high-speed MAP decoders", PROCEEDINGS OF 14TH INTERNATIONAL CONFERENCE ON VLSI DESIGN, IEEE, 3 January 2001 (2001-01-03), pages 446 - 453, XP010531492 *

Cited By (1)

Publication number Priority date Publication date Assignee Title
US9337866B2 (en) 2013-06-04 2016-05-10 Avago Technologies General Ip (Singapore) Pte. Ltd. Apparatus for processing signals carrying modulation-encoded parity bits

Also Published As

Publication number Publication date
GB2409618A (en) 2005-06-29
GB0329910D0 (en) 2004-01-28

Similar Documents

Publication Publication Date Title
US7584409B2 (en) Method and device for alternately decoding data in forward and reverse directions
EP0834222B1 (en) Parallel concatenated tail-biting convolutional code and decoder therefor
Bauer et al. Symbol-by-symbol MAP decoding of variable length codes
US6810502B2 (en) Iteractive decoder employing multiple external code error checks to lower the error floor
US6725409B1 (en) DSP instruction for turbo decoding
US20020174401A1 (en) Area efficient parallel turbo decoding
US6591390B1 (en) CRC-based adaptive halting turbo decoder and method of use
US6812873B1 (en) Method for decoding data coded with an entropic code, corresponding decoding device and transmission system
Johansson et al. A simple one-sweep algorithm for optimal APP symbol decoding of linear block codes
US6487694B1 (en) Method and apparatus for turbo-code decoding a convolution encoded data frame using symbol-by-symbol traceback and HR-SOVA
Chen Iterative soft decoding of Reed-Solomon convolutional concatenated codes
US6675342B1 (en) Direct comparison adaptive halting decoder and method of use
Thobaben et al. Robust decoding of variable-length encoded Markov sources using a three-dimensional trellis
JP2004343716A (en) Method and decoder for blind detection of transmission format of convolution-encoded signal
US8983008B2 (en) Methods and apparatus for tail termination of turbo decoding
US7552379B2 (en) Method for iterative decoding employing a look-up table
US7565594B2 (en) Method and apparatus for detecting a packet error in a wireless communications system with minimum overhead using embedded error detection capability of turbo code
WO2005081410A1 (en) Log-map decoder
US7096410B2 (en) Turbo-code decoding using variably set learning interval and sliding window
US7327796B1 (en) SOVA turbo decoder with decreased normalisation complexity
Sklar Turbo code concepts made easy, or how I learned to concatenate and reiterate
US10116337B2 (en) Decoding method for convolutionally coded signal
US20070157063A1 (en) Method for iterative decoding in a digital system and apparatus implementing the method
Shim et al. An efficient iteration decoding stopping criterion for turbo codes
Bera et al. SOVA based decoding of double-binary turbo convolutional code

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase