LOG-MAP DECODER
The present invention relates to a digital decoder, and in particular to a Maximum A-Posteriori (MAP) decoder for use in a telecommunication system.
Telecommunication systems generally suffer from a degradation of the signal transmitted over a channel due to noise, attenuation and fading. The utilisation of digital signals rather than analogue signals affords advantages such as improved immunity to channel noise and interference, increased channel data capacity and improved security through the use of encryption. A data signal is modulated to enable efficient transmission over a channel.
Errors in the form of missing or wrong digits can cause significant problems in digital data transmission, and various systems of error detection and control are commonly used, such as cyclic redundancy checks (CRC) and forward error correction (FEC). Error correction circuitry generally comprises an encoder at the transmitter and a decoder at the receiver.
One class of encoder, known as a convolutional encoder, converts a sequence of input bits into a code block based on a convolution of the input sequence with a fixed binary pattern or with another signal. After transmission, the code blocks are fed to a convolutional decoder, such as a MAP decoder. The convolutional encoder can be in one of numerous states (generally dependent upon a constraint length of the code block) at the time of the conversion of the input data to a code block. A MAP decoder calculates and
stores various probabilities. In a log MAP decoder (or in a max log MAP decoder) these probabilities are calculated and stored in logarithmic form, and are known as metrics. There are three types of metrics, namely forward state metrics (also known as forward path metrics), reverse state metrics (also known as reverse path metrics) and branch metrics. The forward state metrics represent the probability that the encoder is in a particular state and that a particular channel sequence has been received up to this point. The reverse state metrics represent the probability that, given the encoder is in a particular state, the future received sequence will be some particular, given sequence. The branch metrics represent the probability that, given the encoder is in a particular state, it moves to another particular state and that a particular channel sequence is received.
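In the conventional notation of the literature (the symbols below are standard usage rather than reference numerals from this description), the three metrics and their log-domain recursions may be summarised as follows, where s' and s denote encoder states and y_k the channel symbol received at trellis step k:

```latex
% Standard log-domain BCJR metrics (conventional notation, for illustration only).
\begin{aligned}
\gamma_k(s',s)  &= \ln P\bigl(s,\, y_k \mid s'\bigr)
  && \text{(branch metric)}\\
\alpha_k(s)     &= \ln \sum_{s'} \exp\bigl(\alpha_{k-1}(s') + \gamma_k(s',s)\bigr)
  && \text{(forward state metric)}\\
\beta_{k-1}(s') &= \ln \sum_{s} \exp\bigl(\beta_k(s) + \gamma_k(s',s)\bigr)
  && \text{(reverse state metric)}
\end{aligned}
```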
The MAP decoder was first formally described by Bahl, Cocke, Jelinek and Raviv (hence the alternative name
"the BCJR algorithm") in "Optimal Decoding of Linear
Codes for Minimizing Symbol Error Rate," IEEE
Transactions on Information Theory, pp. 284-287, March
1974. Some MAP decoders implement a logarithmic version of the MAP algorithm (in which all the metrics are stored and computed in logarithmic form) and are known as "log MAP" decoders. Commonly, decoders utilise a form of the MAP algorithm known as the "max log MAP" algorithm which reduces the computational requirements of the algorithm through use of an approximation.
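The approximation in question is the so-called Jacobian logarithm (or "max-star") identity; the max log MAP algorithm simply drops the correction term:

```latex
% Jacobian logarithm and its max-log approximation.
\ln\bigl(e^{a} + e^{b}\bigr) = \max(a,b) + \ln\bigl(1 + e^{-|a-b|}\bigr) \approx \max(a,b)
```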
A receiver in a digital communication system generally includes a quantizer device which can have two or more so-called "levels". A two-level (or binary) quantizer produces what is known as a "hard-decision" output. A quantizer making use of more than two levels produces a "soft-decision" output. For example, in a system utilising a four-level quantizer in the receiver, for each received data bit, the quantizer output represents a decision on the most likely transmitted binary value combined with a confidence level for that decision.
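By way of illustration only (the threshold value and return format below are assumptions, not taken from the description), such a four-level quantizer might map each received sample to a hard decision plus a one-bit confidence value:

```python
def four_level_quantize(sample, threshold=0.5):
    """Illustrative four-level quantizer (sketch).

    The sign of the sample gives the hard decision; its magnitude gives a
    one-bit confidence level. The threshold is an assumed value."""
    bit = 1 if sample >= 0.0 else 0
    confidence = "high" if abs(sample) >= threshold else "low"
    return bit, confidence
```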
The MAP algorithm is a "soft-output" (or alternatively, "soft-in-soft-out") algorithm, which means that the algorithm outputs more than a two-level representation for each decoded bit. This feature makes MAP decoders particularly suitable for decoding "turbo codes".
Parallel concatenated convolutional codes (PCCCs) are turbo codes which are formed by encoding data bits in a first recursive systematic convolutional encoder and then, after passing through an interleaver, the data bits are further encoded by a second systematic convolutional encoder. Such turbo codes yield coding gains close to theoretical Shannon capacity limits. The decoding of such turbo codes generally requires two decode operations per iteration (and usually several iterations are involved).
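A minimal sketch of this parallel concatenation follows, assuming helper functions rsc_encode (a recursive systematic convolutional encoder returning its parity stream) and interleave (the interleaver permutation), neither of which is defined in the text:

```python
def pccc_encode(data_bits, rsc_encode, interleave):
    """Parallel concatenated convolutional encoding (illustrative sketch).

    The systematic bits are transmitted alongside two parity streams: one
    derived from the data in natural order, one from the interleaved data."""
    systematic = list(data_bits)
    parity_1 = rsc_encode(data_bits)              # first constituent encoder
    parity_2 = rsc_encode(interleave(data_bits))  # second encoder, after interleaving
    return systematic, parity_1, parity_2
```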
Log MAP (or max log MAP) decoders can be implemented with a single processor or several processors in parallel. Single processor implementations are constrained to processing the MAP algorithm sequentially. The processor determines the forward
state metrics by scanning forwards through the code block and computing and storing the branch metrics and forward state metrics. The processor then computes the reverse state metrics by scanning backwards through the code block. These reverse state metrics are used in conjunction with the previously stored forward state metrics and branch metrics to calculate final log likelihood ratios (LLRs) for output from the MAP decoder. Parallel processor implementations are able to achieve a higher data throughput than single processor implementations.
An LLR of a code block is the log of the ratio of the likelihood that the mth data bit is a logical 1 to the likelihood that it is a logical 0.
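Expressed in the usual notation (with u_m the mth data bit and y the received sequence), this is:

```latex
% Log likelihood ratio of the m-th data bit, given the received sequence y.
L(u_m) = \ln \frac{P(u_m = 1 \mid \mathbf{y})}{P(u_m = 0 \mid \mathbf{y})}
```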
In a typical parallel processor implementation of a MAP decoder, known as a "windowed" technique, the decoding operation comprises forward and backward recursive calculations and each code block is operated on as a series of overlapping sub-blocks. (For example, see S.S. Pietrobon, "Efficient implementation of continuous MAP decoders and a synchronization technique for turbo decoders", Proc. Int. Symp. Information Theory Appl., Victoria, B.C., Canada, 1996, pp. 586-589.) A first processor calculates forward state metrics for the sub-blocks and a second, faster processor calculates reverse state metrics for the sub-blocks. (It is also known to utilise two separate processors, each the same speed as the first processor, in place of the second processor.) As soon as the first processor has calculated the forward state metrics for a particular sub-block, the second processor starts to calculate the reverse state metrics for the same sub-block. However,
each calculation performed by the second processor commences part of the way through a code block. Therefore, the decoder cannot determine the state that the encoder was in prior to transmission of that particular code block, so the decoder commences each reverse state metric engine run by ignoring (for LLR calculation purposes) the state metrics calculated by the second processor for a predetermined period (generally known as a "trace-back delay"), in order to allow the reverse state metrics to converge close to the correct relative values.
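A minimal max-log sketch of such a reverse state metric run is given below. The function name, arguments and data layout (a table of log branch metrics gamma[k][s_prev][s]) are assumptions made purely for illustration:

```python
import numpy as np

def reverse_run_with_traceback(gamma, num_states, traceback_delay):
    """Reverse (beta) recursion starting part-way through a code block (sketch).

    Because the encoder state at the start of the run is unknown, all states
    begin with equal metrics; only values produced after `traceback_delay`
    steps are treated as converged and kept for LLR purposes."""
    num_steps = len(gamma)
    beta = np.zeros(num_states)           # unknown starting state: uniform metrics
    converged = []
    for k in range(num_steps - 1, -1, -1):
        new_beta = np.full(num_states, -np.inf)
        for s_prev in range(num_states):
            for s in range(num_states):
                # max-log approximation of the sum over successor states
                new_beta[s_prev] = max(new_beta[s_prev],
                                       beta[s] + gamma[k][s_prev][s])
        beta = new_beta - new_beta.max()  # normalise to keep metrics bounded
        if (num_steps - 1 - k) >= traceback_delay:
            converged.append((k, beta.copy()))
    return converged
```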
The main disadvantages associated with the operation of this type of parallel processor implementation of a MAP decoder stem from the trace-back delay.
Specifically, a trace-back delay (commonly a period related to 32 or 64 decoded output bits) occurs for each code block and leads to a slow overall processing rate for short code blocks. Further, the trace-back delay results in a requirement for an additional or faster state metric calculator in the reverse direction compared to the forward direction.
Therefore, the present invention seeks to provide a decoder in which problems associated with the trace-back delay are at least alleviated.
According to a first aspect of the present invention there is provided a method for decoding a code block, the method comprising the steps of: calculating and storing a forward state metric of a first half of the code block and a reverse state metric of a second half of the code block, in parallel; calculating a forward state metric of a second half of the code block and a reverse state metric of a first half of the code block, in parallel; feeding the calculated forward state metric of the second half of the code block and the stored reverse state metric of the second half of the code block to a first log likelihood ratio calculator; feeding the calculated reverse state metric of the first half of the code block and the stored forward state metric of the first half of the code block to a second log likelihood ratio calculator; and combining the outputs from the first and second log likelihood ratio calculators to determine a decoded output signal.
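A software sketch of this two-stage schedule follows (in hardware, the two halves of each stage run in parallel). The helper names alpha_step, beta_step and compute_llr, the initial metric values, and the exact index alignment between the combined metrics are assumptions made for illustration, not part of the claimed method:

```python
def decode_block_two_halves(block, alpha_init, beta_init,
                            alpha_step, beta_step, compute_llr):
    """Two-stage bidirectional MAP decoding of one code block (sketch)."""
    half = len(block) // 2
    first_half, second_half = block[:half], block[half:]

    # Stage 1: forward metrics over the first half and reverse metrics over
    # the second half are calculated (in parallel in hardware) and stored.
    alpha_store, alpha = [], alpha_init
    for symbol in first_half:
        alpha = alpha_step(alpha, symbol)
        alpha_store.append(alpha)

    beta_store, beta = [], beta_init
    for symbol in reversed(second_half):
        beta = beta_step(beta, symbol)
        beta_store.append(beta)
    beta_store.reverse()                  # indexed by position within the second half

    # Stage 2: freshly calculated metrics are combined at once with the
    # stored metrics from stage 1, so no trace-back delay is needed.
    llrs = [None] * len(block)
    for i, symbol in enumerate(second_half):
        alpha = alpha_step(alpha, symbol)
        llrs[half + i] = compute_llr(alpha, beta_store[i], symbol)
    for i, symbol in enumerate(reversed(first_half)):
        beta = beta_step(beta, symbol)
        position = half - 1 - i
        llrs[position] = compute_llr(alpha_store[position], beta, symbol)
    return llrs
```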
According to a second aspect of the present invention there is provided a decoder for decoding a code block, the decoder comprising: means for calculating forward state metrics of a first half and then a second half of the code block, means for storing the calculated forward state metrics of the first half of the code block, means for calculating reverse state metrics of a second half and then a first half of the code block, means for storing the reverse state metrics of the second half of the code block, means for feeding the calculated forward state metrics of the second half of the code block and the stored reverse state metrics of the second half of the code block to a first log likelihood ratio calculator, means for feeding the calculated reverse state metrics of the first half of the code block and the stored forward state metrics of the first half of the code block to a second log likelihood ratio calculator, and means for combining
the outputs from the first and second log likelihood ratio calculators to determine a decoded output signal.
According to a third aspect of the present invention there is provided a mobile telecommunications base station including the decoder of the second aspect of the present invention.
According to a fourth aspect of the present invention there is provided a mobile telecommunications terminal including the decoder of the second aspect of the present invention.
Advantageously, the present invention provides a MAP decoder with a high data throughput. The trace-back delay is avoided, thus allowing short code blocks to be decoded at the decoder's full data rate. Further advantage is gained through the low computational and control requirements and the efficient use of memory.
For a better understanding of the present invention, and to show how it may be put into effect, reference will now be made, by way of example, to the accompanying drawings in which:
Figure 1 shows a first embodiment of the decoder of the present invention;
Figure 2 shows a turbo decoder implementation of the first embodiment of the present invention; and
Figure 3 shows a second embodiment of the present invention.
In the log MAP decoder 10 of Figure 1, a code block buffer 12 is coupled to a first computation pipeline 14 (also known as a forward state metrics computation pipeline) and a second computation pipeline 16 (also known as a backward state metrics computation pipeline) in parallel. The first computation pipeline 14 comprises a first branch metric calculator 18, also referred to herein as a forward branch metric calculator since it is in a "forward" path of the device, coupled in series to a forward state metric calculator 20. The output of the forward state metric calculator 20 is input to a forward state metric store 26 and also a forward LLR calculator 28, both in the first computation pipeline 14. Furthermore, the forward LLR calculator 28 receives a second input from the forward branch metric calculator 18.
The second computation pipeline 16 comprises a second branch metric calculator 22, also referred to herein as a reverse branch metric calculator since it is in a "reverse" path of the device, coupled in series to a reverse state metric calculator 24. In a similar manner, the output of the reverse state metric calculator 24 is input to a reverse state metric store 30 and also a reverse LLR calculator 32, both in the second computation pipeline 16. Furthermore, the reverse LLR calculator 32 receives a second input from the reverse branch metric calculator 22.
Importantly, the forward LLR calculator 28 in the first computation pipeline 14 receives a third input from the reverse state metric store 30 in the second computation pipeline 16. Similarly, the reverse LLR calculator 32 in the second computation pipeline 16 receives a third
input from the forward state metric store 26 in the first computation pipeline 14. Finally, the reverse LLR calculator 32 and the forward LLR calculator 28 are both coupled to a single LLR buffer 34.
In operation, a received code block (considered as comprising a first half and a second half) is input to, and held in, the code block buffer 12. The first computation pipeline 14 processes data from the front end of the code block to the back end of the code block. Simultaneously, the second computation pipeline 16 processes data from the back end of the code block to the front end of the code block.
Specifically, in a first stage of operation, the first half of the code block is input in sequence to the forward branch metric calculator 18 where branch metrics are determined. Each output from the forward branch metric calculator 18 is input to the forward state metric calculator 20 where it is processed. The second half of the code block is input in sequence to the reverse branch metric calculator 22 where branch metrics are also determined. Each output from the reverse branch metric calculator 22 is input to the reverse state metric calculator 24 where it is processed. Each output from the forward state metric calculator 20 is stored in the forward state metric store 26 and each output from the reverse state metric calculator 24 is stored in the reverse state metric store 30.
In a second stage of operation, the second half of the code block held in the code block buffer 12 is
processed in the first computation pipeline 14 and the first half of the code block is processed in the second computation pipeline 16, as described above, up to the point where the signal is processed by the forward and reverse state metric calculators 20, 24. Here, the signal output from the forward state metric calculator 20 is fed to the forward LLR calculator 28. Similarly, the signal output from the reverse state metric calculator 24 is fed to the reverse LLR calculator 32.
The forward LLR calculator 28 then utilizes the stored input signal (stored during the first stage of operation) from the reverse state metric store 30, an input signal from the forward branch metric calculator 18 and the input signal from the forward state metric calculator 20 in order to immediately determine the forward LLR. The reverse LLR calculator 32 utilizes the stored input signal (stored during the first stage of operation) from the forward state metric store 26, an input signal from the reverse branch metric calculator 22 and the input signal from the reverse state metric calculator 24 in order to immediately determine the reverse LLR. Both the forward and the reverse LLRs are held in the LLR buffer 34, prior to forming a decoder output signal.
Thus, the decoder of Figure 1 provides an increased operational speed in comparison to a conventional decoder, because a trace-back delay period is avoided. It is only necessary to store the metrics calculated for one half of each code block. The metrics calculated for the other half of each code block are not stored but passed immediately to the forward and reverse LLR calculators 28, 32.
In the turbo code decoder of Figure 2, common reference numerals have been employed where common elements have the same function as in the log MAP decoder of Figure 1. Modification is found in the LLR buffer 38 which additionally functions as a block interleaver/deinterleaver. An integrated address generator 36 feeds both the LLR buffer 38 and the code block buffer 12 simultaneously. Also, the LLR buffer 38 has a second output line coupled to the forward branch metric calculator 18 and the forward LLR calculator 28, and a third output line coupled to the reverse branch metric calculator 22 and the reverse LLR calculator 32. Further, the code block buffer 12 is coupled directly to the forward and reverse LLR calculators 28, 32.
In operation, the modified system of Figure 2 functions in a similar way to the system depicted in Figure 1. The processed signal output from the reverse LLR calculator 32 and forward LLR calculator 28 is written to the LLR buffer 38 in linear order and read out in permuted order during an interleaving stage. The processed signal output from the LLR calculators 28, 32 is written to the LLR buffer 38 in permuted order and read out in linear order during a deinterleaving stage. When the second half of the code block is processed in the first computation pipeline 14 and the first half of the code block is processed in the second computation pipeline 16, an a-priori LLR value is transferred from the LLR buffer 38 to both the reverse LLR calculator 32 and the forward LLR calculator 28. Consequently, an extrinsic LLR value, output from both the forward and reverse LLR calculators 28, 32, is written immediately
back into the same interleaver address. In this way, the LLR buffer 38 acts alternately as a block interleaver and a block deinterleaver, thereby avoiding the necessity for double buffering.
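A minimal sketch of this idea is given below, assuming a fixed interleaver permutation perm; the class and method names, and the exact addressing scheme, are illustrative assumptions rather than the disclosed implementation:

```python
class AlternatingLLRBuffer:
    """LLR buffer that serves alternately as interleaver and deinterleaver (sketch).

    Each access reads the a-priori value and writes the new extrinsic value
    back to the same physical address, so a single buffer suffices and no
    double buffering is required."""
    def __init__(self, perm):
        self.perm = list(perm)           # assumed interleaver permutation
        self.values = [0.0] * len(perm)
        self.permuted_pass = False       # toggled after each half-iteration

    def exchange(self, index, extrinsic):
        # Read a-priori and write the extrinsic back to the same address.
        address = self.perm[index] if self.permuted_pass else index
        a_priori = self.values[address]
        self.values[address] = extrinsic
        return a_priori

    def next_half_iteration(self):
        # Alternate between linear and permuted addressing, so values written
        # in one order are read out in the other order on the next pass.
        self.permuted_pass = not self.permuted_pass
```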
Data throughput can be increased for the two pipeline decoders of Figure 1 and Figure 2 by extending the architecture to comprise 2n pipelines (where n is a positive integer). Figure 3 illustrates a four pipeline MAP decoder architecture, which essentially comprises two of the systems illustrated in Figure 1, operating in parallel. Again, common reference numerals have been employed where common elements have the same function as in the log MAP decoder of Figure 1.
In the log MAP decoder 42 of Figure 3, a code block buffer 48 is coupled in parallel to first, second, third and fourth computation pipelines 14, 16, 44, 46. The first and second computation pipelines 14, 16 comprise the same elements as described in relation to Figure 1. The third and fourth computation pipelines 44, 46 also comprise the same elements as in the first and second computation pipelines 14, 16. The outputs of each of the first, second, third and fourth computation pipelines 14, 16, 44, 46 are coupled in parallel to a LLR buffer 50.
In operation, the first computation pipeline 14 operates on a first quarter of the code block starting from the front end of the block, and calculates and stores the forward state metrics for the first quarter only. The second computation pipeline 16 operates on a second quarter of the code block starting from the
middle of the block, and calculates and stores reverse state metrics for the second quarter only. The third computation pipeline 44 operates on a third quarter of the code block starting from the middle of the block, and calculates and stores forward state metrics for the third quarter only. The fourth computation pipeline 46 operates on a fourth quarter of the code block starting from the back end of the block, and calculates and stores reverse state metrics for the fourth quarter only.
In order to allow the state metrics of the second and third computation pipelines 16, 44 to converge close to their correct relative values, these pipelines start to operate before the first and fourth computation pipelines 14, 46, and run for a predetermined "trace-back delay" period before reaching the centre of the code block.
The four pipeline architecture of Figure 3 allows a two-fold increase in speed of data throughput in comparison to the two pipeline decoder of Figure 1 (ignoring the effects of the trace-back delay). Further extensions of the architecture to comprise 2n pipelines (where n is a positive integer) will result in an n-fold increase in speed of data throughput.
It will be apparent to the skilled person that the above described system architectures are not exhaustive and variations on these structures may be employed to achieve a similar result whilst employing the same inventive concept. Specifically, the present inventive concept can be implemented for a MAP decoder, a log MAP decoder or a max log MAP decoder. The extended
architectures described above can also be implemented with both PCCC and SCCC (serially concatenated convolutional code) turbo code decoder pipelines.
Further, the processors referred to above may be implemented in hardware or software. With reference to Figure 1, where the invention is implemented for a max log MAP decoder, the connection between the forward branch metric calculator 18 and the forward LLR calculator 28, and the connection between the reverse branch metric calculator 22 and the reverse LLR calculator 32 are not necessary for the correct functioning of the decoder.
It can therefore be seen that the present invention provides a convolutional decoder which has significant advantages over conventional MAP decoders.