GB2365289A - Optimal soft output decoder for convolutional codes

Info

Publication number: GB2365289A (application GB0102720A)
Authority: GB (United Kingdom)
Prior art keywords: window, recursion, trellis, backward, state
Other versions: GB0102720D0, GB2365289B
Inventors: Vipul A. Desai, Brian Keith Classon
Original assignee: Motorola Inc; current assignee: Motorola Solutions Inc
Legal status: Granted; Expired - Fee Related

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3905Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/23Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using convolutional codes, e.g. unit memory codes
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2957Turbo codes and decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3972Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using sliding window techniques or parallel windows
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/41Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/41Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors
    • H03M13/4161Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors implementing path management
    • H03M13/4169Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors implementing path management using traceback
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65Purpose and implementation aspects
    • H03M13/6502Reduction of hardware complexity or efficient processing
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65Purpose and implementation aspects
    • H03M13/6502Reduction of hardware complexity or efficient processing
    • H03M13/6505Memory efficient implementations


Abstract

Optimal decoding of signals represented by a trellis of block length N divided into windows of length L includes a step of decoding a backward recursion from a point N at the end of the block back to the end of a window, and storing the determined state metrics at the end of each window. A next step includes decoding the window using backward recursion from the known state at the end of the window back to the beginning of the window to define a set of known backward recursion state metrics, which are stored. A next step includes decoding using forward recursion starting from a known state at the beginning of the window and moving forward. A next step includes calculating a soft output at each stage of the forward recursion using the stored backward recursion state metrics and the branch metrics at each state, and outputting the soft output for that stage.

Description

SOFT OUTPUT DECODER FOR CONVOLUTIONAL CODES

CROSS REFERENCE TO RELATED APPLICATIONS

This application is related to U.S. patent applications serial no. 09/502,132 by inventors Classon, Schaffner, Desai, Baker and Friend; serial no. 09/501,922 by inventors Classon and Schaffner; and serial no. 09/501,883 by inventors Classon, Schaffner and Desai. The related applications are filed on even date herewith, are assigned to the assignee of the present application, and are hereby incorporated herein in their entirety by this reference thereto.

FIELD OF THE INVENTION

This invention relates generally to communication systems, and more particularly to a soft output decoder for use in a receiver of a convolutional code communication system.
BACKGROUND OF THE INVENTION

Convolutional codes are often used in digital communication systems to protect transmitted information from error. At the transmitter, an outgoing code vector may be described using a trellis diagram whose complexity is determined by the constraint length of the encoder. Although computational complexity increases with increasing constraint length, the robustness of the coding also increases with constraint length.
At the receiver, a practical soft-decision decoder, such as a Viterbi decoder as is known in the art, uses a trellis structure to perform an optimum search for the maximum likelihood transmitted code vector. The Viterbi algorithm, however, is computationally complex, and its complexity increases exponentially with increasing constraint length. This essentially means that a
Viterbi decoder requires a significant amount of memory and processing power for convolutional codes with large constraint lengths.
Coders for various communications systems, such as Direct Sequence Code Division Multiple Access (DS-CDMA) standard IS-95 and Global System for Mobile Communications (GSM), have such large constraint lengths. For example, the GSM half-rate constraint length K = 7 and the IS-95 constraint length K = 9.
Another disadvantage of Viterbi decoders is that a fixed number of computations must be performed for each code vector, irrespective of the actual number of errors that occurred during transmission. Thus, a Viterbi decoder processes a received signal having few transmission errors or no errors at all using the same number of computations as a received signal having many errors.
More recently, turbo codes have been developed that outperform conventional coding techniques. Turbo codes are generally composed of two or more convolutional codes and turbo interleavers. Turbo decoding is iterative and uses a soft output decoder to decode the individual convolutional codes. The soft output decoder provides information on each bit position which helps the soft output decoder decode the other convolutional codes. The soft output decoder is usually a MAP (maximum a posteriori) decoder which requires backward and forward decoding to determine the soft output. However, because of memory, processing, and numerical tradeoffs, MAP decoding is usually limited to a sub- optimal approximation. All of these variants require both forward and backward decoding over the block.
For future standards, such as the 3GPP (third generation partnership project for wireless systems) standard, an 8-state turbo code with a block length of N = 5120 needs 40960 words of intermediate storage, which may be unacceptable. Future systems (with larger frames and a greater number of states) will require even more memory. By comparison, a Viterbi decoder that does not produce soft outputs for an N = 5120, 8-state trellis requires less than 100 words of intermediate storage.
There is a need for a soft output decoder that reduces overall memory and processing requirements for decoding convolutional codes without the degree of limitations imposed by prior art turbo and MAP decoders.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a trellis diagram for a first prior art soft output decoder technique;
FIG. 2 shows a trellis diagram for a second prior art soft output decoder technique;
FIG. 3 shows a trellis diagram for a third prior art soft output decoder technique;
FIG. 4 shows an expanded graphical representation of the diagram of FIG. 3;
FIG. 5 shows an alternate expanded graphical representation of the diagram of FIG. 3;
FIG. 6 shows a trellis diagram for a soft output decoder technique in accordance with the present invention;
FIG. 7 shows a block diagram of a soft output decoder, in accordance with the present invention;
FIG. 8 shows an expanded graphical representation of the diagram of FIG. 6; and
FIG. 9 shows a flow chart of a soft output decoder method, in accordance with the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

The present invention greatly reduces the memory requirement compared to prior art turbo decoders, with only a small increase in computation over a Viterbi decoder. In all, this provides for a more efficient decoder. Moreover, the present invention minimizes the limitations of prior art turbo and MAP decoders.
Typically, block codes, convolutional codes, turbo codes, and others are graphically represented as a trellis as shown in FIG. 1. Maximum a posteriori type decoders (log-MAP, MAP, max-log-MAP, constant-log-MAP, etc.) utilize forward and backward generalized Viterbi recursions on the trellis in order to provide soft outputs, as is known in the art. The MAP decoder minimizes the decoded bit error probability for each information bit based on all received bits. Typical prior art MAP decoders require a memory for use in decoding.
Because of the Markov nature of the encoded sequence (wherein previous states cannot affect future states or future output branches), the MAP bit probability can be broken into the past (beginning of trellis to the present state), the present state (branch metric for the current value), and the future (end of trellis to current value). More specifically, the MAP decoder performs forward and backward recursions up to a present state wherein the past and future probabilities are used along with the present branch metric to generate an output decision. The principles of providing hard and soft output decisions are known in the art, and several variations of the above described decoding methods exist.
Most of the soft input-soft output (SISO) decoders considered for turbo codes are based on the prior art MAP algorithm in a paper by L.R. Bahl, J. Cocke, F. Jelinek, and J. Raviv entitled "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate", IEEE Transactions on Information Theory, Vol. IT-20, March 1974, pp. 284-7 (the BCJR algorithm). FIG. 1 shows a trellis diagram for this algorithm for an 8-state convolutional code, which can be used in a turbo code. As should be recognized, turbo coders are constructed with interleavers and constituent codes, which are usually systematic convolutional codes but can be block codes also. MAP algorithms not only minimize the probability of error for an information bit given the received sequence, they also provide the probability that the information bit is either a 1 or 0 given the received sequence. The BCJR algorithm provides a soft output decision for each bit position (trellis section) wherein the influence of the soft inputs within the block is broken into contributions from the past (earlier soft inputs), the present soft input, and the future (later soft inputs). This decoder algorithm requires a forward and a
backward generalized Viterbi recursion on the trellis to arrive at an optimal soft output for each trellis section (stage). These a posteriori probabilities, or more commonly the log-likelihood ratio (LLR) of the probabilities, are passed between SISO decoding steps in iterative turbo decoding. The LLR for information bit $u_t$ is

    \Lambda_t = \ln \frac{\sum_{(n,m) \in B^1} \alpha_{t-1}(n)\,\gamma_t(n,m)\,\beta_t(m)}{\sum_{(n,m) \in B^0} \alpha_{t-1}(n)\,\gamma_t(n,m)\,\beta_t(m)}    (1)

for all bits in the decoded sequence ($t = 1$ to $N$). In equation (1), the probability that the decoded bit is equal to 1 (or 0) in the trellis given the received sequence is composed of a product of terms due to the Markov property of the code. The Markov property states that the past and the future are independent given the present. The present, $\gamma_t(n,m)$, is the probability of being in state $m$ at time $t$ and generating the symbol $y_t$ when the previous state at time $t-1$ was $n$. The present plays the function of a branch metric. The past, $\alpha_t(m)$, is the probability of being in state $m$ at time $t$ with the received sequence $\{y_1, \ldots, y_t\}$, and the future, $\beta_t(m)$, is the probability of generating the received sequence $\{y_{t+1}, \ldots, y_N\}$ from state $m$ at time $t$. The probability $\alpha_t(m)$ can be expressed as a function of $\alpha_{t-1}(n)$ and $\gamma_t(n,m)$ and is called the forward recursion

    \alpha_t(m) = \sum_{n=0}^{M-1} \alpha_{t-1}(n)\,\gamma_t(n,m),    (2)

where $M$ is the number of states. The reverse or backward recursion for computing the probability $\beta_t(n)$ from $\beta_{t+1}(m)$ and $\gamma_{t+1}(n,m)$ is

    \beta_t(n) = \sum_{m=0}^{M-1} \beta_{t+1}(m)\,\gamma_{t+1}(n,m).    (3)

The overall a posteriori probabilities in equation (1) are computed by summing over the branches in the trellis $B^1$ ($B^0$) that correspond to $u_t = 1$ (or 0).
The LLR in equation (1) requires both the forward and reverse recursions to be available at time $t$. The BCJR method for meeting this requirement is to compute and store the entire reverse recursion, and recursively compute $\alpha_t(m)$ and $\Lambda_t$ from $t = 1$ to $t = N$ using $\alpha_{t-1}$ and $\beta_t$.
The disadvantage of this decoder is that the entire block of N stages must first be stored in memory before processing. Not only does this require a large memory (N sections x M states x number of bits per state), it also causes a signal delay of length N before any information can possibly be output. In a W-CDMA system (N ≈ 5000, M = 8, 13 bits per state metric) the memory required is about 0.5 Mbits. In a cdma2000 system, N is approximately 20000, which requires a memory of about 2 Mbits. For small sequence lengths, memory utilization is generally not an issue. However, for the large N where turbo codes perform the best, memory utilization is significant.
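The memory figures just quoted follow directly from sections x states x bits per state metric; a quick arithmetic check (the 13-bit metric width is the figure given in the text):

```python
# Storage for a full stored backward recursion: N sections x M states x bits/state.
BITS = 13                        # state-metric width quoted in the text

wcdma = 5000 * 8 * BITS          # W-CDMA:    ~5,000 sections, 8 states
cdma2000 = 20000 * 8 * BITS      # cdma2000: ~20,000 sections, 8 states

print(wcdma, cdma2000)           # 520000 and 2080000 bits: ~0.5 Mbit and ~2 Mbit
```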
In terms of complexity, the BCJR method requires NM state updates for the reverse recursion (M state updates per trellis section, N trellis sections in the code) and provides optimal performance. In practice, a backward recursion is performed by a processor across the entire block (as shown in FIG. 1) and stored in memory. Then a forward recursion is performed by the processor and the result is used with the present state and stored future state to arrive at a soft output decision for each stage. In this case the processor operates on each state twice, once to store the backward recursion states, and once during forward recursion processing (throughput of 1/2).
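The full-block procedure just described (store the entire backward recursion, then run the forward recursion and emit a soft output per section) can be sketched on a hypothetical 2-state toy trellis. The branch structure, metrics, and names below are illustrative assumptions for demonstration, not taken from the patent:

```python
import math

# Toy 2-state trellis: from state n, input bit u moves to state u, so the
# branch (n -> m) carries information bit m. gamma[t][n][m] plays the role
# of the branch metric (the "present" term of equation (1)).
M, N = 2, 4

def bcjr(gamma):
    # Forward recursion (eq. 2): alpha[t][m] = sum_n alpha[t-1][n] * gamma[t-1][n][m]
    alpha = [[1.0, 0.0]]                      # known start state 0
    for t in range(N):
        alpha.append([sum(alpha[t][n] * gamma[t][n][m] for n in range(M))
                      for m in range(M)])
    # Backward recursion (eq. 3), computed and stored over the whole block first
    beta = [None] * (N + 1)
    beta[N] = [1.0, 1.0]                      # toy trellis left unterminated
    for t in range(N - 1, -1, -1):
        beta[t] = [sum(beta[t + 1][m] * gamma[t][n][m] for m in range(M))
                   for n in range(M)]
    # Soft outputs (eq. 1): LLR over branches carrying u=1 versus u=0
    return [math.log(sum(alpha[t][n] * gamma[t][n][1] * beta[t + 1][1] for n in range(M)) /
                     sum(alpha[t][n] * gamma[t][n][0] * beta[t + 1][0] for n in range(M)))
            for t in range(N)]

# Channel observations favouring bits 1,0,1,0 with likelihood 0.9 versus 0.1
bits = [1, 0, 1, 0]
gamma = [[[0.9 if m == b else 0.1 for m in range(M)] for n in range(M)] for b in bits]
llrs = bcjr(gamma)
print([round(x, 3) for x in llrs])   # positive LLR for bit 1, negative for bit 0
```

Note that the entire `beta` array must exist before the first soft output can be produced, which is exactly the N-section storage and delay discussed above.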
To address the memory utilization problem, a sliding window method and similar variations were developed. The sliding window technique is described in a paper by S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, entitled "Algorithm for continuous decoding of turbo codes," Electronics Letters, Vol. 32, Feb. 15, 1996, pp. 314-5, and is represented in FIG. 2 (in the figures that follow, a solid arrow represents an output provided with recursion but no storage, a dotted arrow represents a learning period with no output and no storage, and a hollow arrow represents a stored recursion with no output, with the direction of the arrows indicating forward or backward recursions). An assumption that all states at time t+P are equally probable (or unknown) is used for the reverse recursion. To use this assumption, the learning period P must be several constraint lengths of the constituent code in order to provide near-optimal performance. Making the
learning period too small can introduce noticeable performance degradation, similar to the effects of 'finite' traceback in the conventional Viterbi algorithm.
The sliding window technique requires essentially no memory, but is computationally complex. Specifically, instead of an entire backward recursion being performed and stored, only a partial backward recursion is performed (and not stored) to determine each state. For each present state, the algorithm initializes the future recursion at a learning period of P away from the present state, with the initial state unknown. The future probabilities are calculated backward from the unknown future point, not from the known end of the trellis. The length P (learning period) is set such that by the time the partial backward recursion reaches the present state, the future probabilities are most likely correct. P depends on the rate and constraint length of the code and the expected channel conditions. For example, given an 8-state decoder with a 1/2-rate convolutional code, P is typically between 16 and 32, wherein P is some multiple of the constraint length. The disadvantage of this decoder is that the partial backward recursion is started with equally likely (unknown) states and is allowed to iterate until it reaches the present window. This is a sub-optimal algorithm, as the sliding window causes degradation from true MAP performance, similar to the effects of finite traceback in a conventional Viterbi algorithm, increasing the probability of decoded bit error. Also, the processor operates on each state P times (throughput of 1/P) and has an output delay of P. Moreover, this algorithm requires P times the complexity, which can only be reduced by adding more processing.
The sliding window method can be summarized as: for t = 1 to N, compute the reverse recursion starting at time t+P down to time t, and compute $\alpha_t(m)$ and $\Lambda_t$ from $\alpha_{t-1}(m)$ and $\beta_t$. The sliding window method reduces the memory requirement from NM, as needed in the BCJR method, down to an insignificant amount of memory needed for a recursion. Assuming double buffering, the amount of memory is only 2M, and can be safely ignored in the analysis.
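The per-output learning recursion described above can be sketched as follows. All names are hypothetical, and `gamma[s][n][m]` is assumed to be the branch metric from state n at time s to state m at time s+1:

```python
# Sketch of the sliding-window backward recursion: to produce the output at
# time t, beta is re-learned over up to P trellis sections, starting from an
# unknown (equally likely) state at time t+P.
def window_beta(gamma, t, P, N, M):
    top = min(t + P, N)
    beta = [1.0 / M] * M                      # all states assumed equally likely
    for s in range(top - 1, t, -1):           # recurse back down to time t+1
        beta = [sum(beta[m] * gamma[s][n][m] for m in range(M))
                for n in range(M)]
    return beta                               # approximates beta at time t+1

# Hypothetical 2-state branch metrics with some state dependence
M, N = 2, 30
base = [0.1, 0.9]
gamma = [[[base[m] * (1.2 if n == m else 1.0) for m in range(M)] for n in range(M)]
         for _ in range(N)]
b_short = window_beta(gamma, 5, 4, N, M)
b_mid = window_beta(gamma, 5, 12, N, M)
b_long = window_beta(gamma, 5, 20, N, M)
```

Because the initialization is only an assumption, the returned metrics only converge toward the true beta direction as P grows, and each output costs P state updates, which is the complexity penalty (throughput 1/P) discussed above.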
However, to achieve this memory saving, the computational complexity for the backward recursion increases by a factor of P. The sliding window method is also sub-optimal due to the 'finite' window size.
Another prior art decoder, described in U.S. Patent 5,933,462 to Viterbi et al. (and similarly in a paper by S. Pietrobon and S. Barbulescu, "A Simplification of the Modified Bahl et al. Decoding Algorithm for Systematic Convolutional Codes," Int. Symp. on Inform. Theory and its Applications, Sydney, Australia, pp. 1073-7, Nov. 1994, revised Jan. 4, 1996, and S. Pietrobon, "Efficient Implementation of Continuous MAP Decoders and a Synchronisation Technique for Turbo Decoders," Int. Symp. on Inform. Theory and its Applications, Victoria, B.C., Canada, pp. 586-9, September 1996), uses another sliding window technique, as represented in FIG. 3.
The Viterbi sliding window method reduces the large increase in computational complexity of the prior art sliding window method by performing processing in blocks. The reverse recursion is started at time t+2L, and the reverse recursion values are stored from time t+L to time t. The forward recursion and output likelihood computation are then performed over the block of time t to time t+L. Memory is reduced from NM down to LM, while only doubling the computational complexity. The key observation of starting the recursion in an unknown state is the same as for the sliding window technique.
This technique requires some memory and is still computationally complex. The decoder differs from the previously described sliding window technique by providing a window that slides forward in blocks rather than a symbol at a time. Specifically, a sliding window is defined having a length L which is equal to the previously described learning period P. Also, N is some multiple of the window length L, and the window slides from the beginning to the end of the trellis in steps of length L. In this way, the memory required in prior art decoders, where the entire trellis was stored, has been reduced from N to L (typically 3 kbits for cdma2000 and W-CDMA systems where L = 32).
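The quoted window-memory figure checks out as L x M x bits per state metric (using the 13-bit metric width given earlier in the text):

```python
# Memory for one stored window of backward recursion metrics: L x M x bits/state.
L, M, BITS = 32, 8, 13
window_bits = L * M * BITS
print(window_bits)        # 3328 bits, i.e. about 3 kbits
```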
This decoder also uses a learning period starting from an unknown future state and is therefore sub-optimal as described previously. Specifically, a
forward recursion is performed by the processor starting from a known state at the beginning of a first window L and over the length (L) of the first window. These forward recursion states are stored. The processor then performs a backward recursion from an unknown state starting at a point that is 2L away from where the forward recursion started so as to define a known state at the end of the first window. Then the processor performs a second backward recursion starting from the known state at the end of the first window to the present state wherein information from the backward recursion and the stored forward recursion are used to generate the soft output. Once all the outputs of the first window are determined the window slides forward an amount L and the process is repeated starting from the state that was determined at the end of the first window.
The disadvantage of this decoder is that the first backward recursion over the learning period L is started with equally likely (unknown) states and is allowed to iterate over the length L, which is sub-optimal as previously described. Also, the processor operates on each state three times, although a forward and a backward processor can be run concurrently such that a throughput of 1/2 is obtained. The decoder produces an output delay of 2L. Moreover, the backward recursion requires twice the complexity, which can only be reduced (or the throughput increased) by adding more processing. Further, this decoder produces soft outputs in reverse order, which would need to be buffered in a supplementary memory before being output.
FIG. 4 shows an expanded diagram of the graph of FIG. 3 with a time component added. In operation, at time 0 a forward processor performs a forward recursion over a first window from position 0 to L and stores the information, while over the same time period a backward processor performs a backward recursion from position 2L to L to define a known state at the end of the first window at position L at time L. Thereafter, a second backward recursion operates from time L to 2L over position L to 0 to define the soft outputs over the first window. At this time, the soft decisions can now be reversed and output in order (which clearly occurs after a delay of 2L), the memory is cleared, the
window position slides forward a length of L, and the process repeats. Alternatively, with an additional backward recursion processor and memory, throughput can be increased.
FIG. 5 shows an alternative result for the graph of FIG. 3 using an additional backward processor. In operation, at time 0 the forward processor performs a forward recursion over a first window from position 0 to L and stores the information, while over the same time period the backward processor performs a backward recursion from position 2L to L to define a known state at the end of the first window at position L at time L. Thereafter, a second backward recursion operates from time L to 2L over position L to 0 to define the soft outputs over the first window. At the same time, the forward and additional backward processors start a second cycle by beginning to process information in the second window (from position L to 2L). At time 2L, the soft decisions for the first window are output while the forward recursion and backward learning period for the second window have already been completed. Then a second backward recursion for the second window is performed to obtain the soft outputs for the second window. As can be seen, this technique doubles the throughput. Twice the memory is needed, as the information of the forward recursion for the first window is being used while the forward recursion for the second window is being stored.
The above decoders (specifically of FIGs. 4 and 5) suffer from several problems. First, and most important, the soft outputs are produced out of order and must be reversed before being output. This requires additional buffer memory and produces an additional delay to reverse the outputs. Second, the above decoders are sub-optimal.
The present invention solves these problems in a novel way. FIG. 6 shows a trellis diagram utilizing convolutional decoding in accordance with the present invention. The trellis code is obtained from a convolutionally coded sequence of signals represented by a trellis of length N in a communication system, as simplified in FIG. 7. In a radiotelephone 100, a signal travels through an antenna 102 to a receiver 104 and demodulator 106, as is known in the art.
The signal is loaded into a frame buffer 108. A forward recursion processor 110 and backward recursion processor 112 operate on the block.
The present invention differs from the previously described sliding window technique of FIGs. 3-5 in that it is an optimal technique wherein all recursions are initialized from known states for each window, rather than from the unknown values used by the prior art sliding window methods where learning periods were involved. The present invention divides the block of length N into X sections of length L; although it is not necessary that all the sections be of equal length, they are assumed equal for purposes of explanation. As in the prior art, the states at points 0 and N are known. The present invention provides a first backward recursion over the length of the block from N to L to determine and store the backward recursion values at specific times defining the state metric at the end of each window. Without loss of generality, N/L of the backward recursion values are uniformly stored, starting at $\beta_N$. As each group of indices with a stored $\beta$ endpoint is defined as a window, the trellis is now split into N/L sections with known $\beta$ endpoints. Thereafter, a backward recursion is performed for the L-1 unknown state metrics from the known $\beta$-value states at the end of each window and stored, followed by a forward recursion such that the soft outputs are generated in order and can be output immediately without waiting for the entire window to be traversed. In particular, a sliding window is defined having a length L such that some multiple of L equals the total trellis length N, and the window slides from the beginning to the end of the trellis in steps of length L. The present invention is optimal because all recursions are started from known states, unlike the prior art sliding window methods.
Specifically, a backward recursion (using a generalized Viterbi algorithm) is performed by the backward processor 112 starting from the known end state of the block up to the end of the first window (the beginning of the first window already being known, as it is the start of the block). Only the state metrics at the end of each window are stored. Since these metrics are derived from the known end state of the block, they are optimal values. Then a backward recursion is
performed by the backward processor 112, starting at the known state at the end of the first window back to the beginning of the window, and the results are stored in a memory 114. The forward recursion processor 110 then performs a forward recursion from the known state at the beginning of the window throughout the length of the window to compute $\alpha_t(m)$ and $\Lambda_t$ using $\beta$ and $\gamma$. At the same time, the decoder 116 outputs the soft output decisions as they are generated, using the known backward recursion metrics in the memory 114, the forward recursion state metrics from the forward recursion processor 110, and the present branch metrics.
At the end of the first window, the window slides forward an amount L and the process is repeated for that window and each subsequent window until the entire block is decoded. In a preferred embodiment, while the forward recursion processor 110 operates within the first window, the backward recursion processor 112 decodes the portion of the trellis within the next window using backward recursion from the known state at the end of the next window back to the beginning of the next window, defining a set of known backward recursion state metrics within the next window. These are stored in the memory 114 as the memory is cleared by the forward recursion processor 110, leaving the forward recursion processor 110 available to begin decoding the next window immediately after the present window is processed. It should be recognized that separate buffer memories could be used instead of the circular buffer described above. Preferably, the forward and backward processors operate concurrently until all of the windows within the block are decoded.
The advantage of the present invention is that the outputs are provided in order and can be output as they are generated, without the need for a supplementary buffer memory. An additional advantage is that the soft output decoder is optimal, in contrast to the prior art. The soft output decoder of the present invention provides a throughput of one-half with an output delay of N.
FIG. 8 shows an expanded diagram of the graph of FIG. 6 with a time component added. In operation, at time 0 a backward processor performs a first backward recursion from position N to L to define optimal known states at the
<Desc/Clms Page number 13>
end of each window at positions L, 2L, 3L, etc. This causes a first delay of length N-L. From time N-L to N (a difference of L), a second backward recursion is performed over the first window from position L to 0, and the information is stored. From time N to N+L, a forward recursion operates from the initial known state at position 0 to L to generate and output the soft outputs over the first window, utilizing the forward recursion values, the stored backward recursion values, and the current branch metrics. During this same time (from N to N+L), a separate backward recursion is being performed on positions 2L to L for use in the subsequent window, repeating the above. The present invention computes each soft output in accordance with known turbo coding techniques, such as those represented in the Bahl et al. paper (BCJR algorithm) cited above.
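The timing just described can be tabulated programmatically. The sketch below (function and event names are illustrative, not from the patent) emits the interval each recursion occupies and returns the overall decode time, which works out to 2N, matching the one-half throughput noted above.

```python
def decoder_schedule(N, L):
    """Interval timeline implied by FIG. 8: a checkpoint pass of duration N-L,
    a backward recursion over the first window, then, for each window, a
    forward/output recursion running concurrently with the backward recursion
    over the next window.  Returns (events, total_time)."""
    events = [(0, N - L, "backward checkpoint pass N->L")]
    t = N - L
    events.append((t, t + L, "backward recursion, window 1"))
    t += L
    num_windows = N // L
    for w in range(1, num_windows + 1):
        events.append((t, t + L, f"forward recursion + soft output, window {w}"))
        if w < num_windows:
            # concurrent backward pass for the next window, same interval
            events.append((t, t + L, f"backward recursion, window {w + 1}"))
        t += L
    return events, t
```

For example, `decoder_schedule(8, 4)` yields a total time of 16 = 2N, with the first soft outputs appearing at time N = 8.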
The advantage of the present invention is that it is optimal and the soft outputs are output as they are generated, freeing up memory as they are output. Also, as memory clears, new information from the backward recursion for the next window can be circulated into the memory. Therefore, the present invention not only eliminates the buffer memory for reversing outputs that is needed in the prior art, but also cuts the memory requirement in half. It should be recognized that this is not necessary where sufficient memory is present. Further, the present invention saves time by not having to reorder any soft outputs, as would be necessary in the prior art. It should also be noted that the entire process described above can be reversed (mirror-image) for those cases where a last-in first-out format for the output is required.
FIG. 9 shows a flow chart representing a method 200 of decoding a received convolutionally coded sequence of signals represented by a trellis of block length N, in accordance with the present invention (also see FIG. 6). Trellis diagrams are well known in the art. A first step 202 is dividing the trellis into windows of length L. A next step 204 is decoding a portion of the trellis using backward recursion from a known metric at point N, at the end of the block, back to the end of the first window, storing the determined state metrics at the end of each window. A generalized Viterbi algorithm is used for this recursion. The length L is independent of the constraint length of the
<Desc/Clms Page number 14>
convolutional code, but can be set to a multiple of the constraint length for convenience, or it can be variable. A next step 206 is selecting a window of the trellis. A next step 208 is decoding the portion of the trellis within the window using backward recursion from the known state at the end of the window, defined in the above decoding step, back to the beginning of the window to define a set of known backward recursion state metrics within the window, and storing the set of known backward recursion state metrics in a memory. A next step 210 is decoding the portion of the trellis within the window using forward recursion starting from a known state at the beginning of the window and moving forward. A next step 212 is calculating a soft output at each stage of the forward recursion using the forward recursion state metrics, the branch metrics, and the stored backward recursion state metrics, and outputting the soft output for that stage. Preferably, the recursion updates and soft outputs are calculated using a MAP algorithm or one of the MAP derivatives (e.g., log-MAP, max-log-MAP, constant-log-MAP).
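Steps 202 through 212 can be sketched end to end. The sketch below uses the max-log-MAP variant mentioned above; it further assumes a trellis that starts and ends in state 0, branch metrics given as `gamma[k][s, s']` for the transition from stage k to k+1, and, purely for illustration, that odd-numbered successor states correspond to an input bit of 1 and even-numbered states to a 0 (that bit mapping, and all names, are this sketch's inventions, not the patent's).

```python
import numpy as np

def sliding_window_maxlog_map(gamma, N, L, M):
    """Steps 202-212 with the max-log-MAP approximation."""
    NEG = -np.inf
    # Step 204: first backward pass, storing metrics at each window boundary.
    beta = np.full(M, NEG)
    beta[0] = 0.0                                  # known end state
    boundary = {N: beta.copy()}
    for k in range(N - 1, 0, -1):
        beta = np.max(gamma[k] + beta, axis=1)
        if k % L == 0:
            boundary[k] = beta.copy()
    soft = np.empty(N)
    alpha = np.full(M, NEG)
    alpha[0] = 0.0                                 # known start state
    for w0 in range(0, N, L):                      # step 206: select a window
        # Step 208: backward recursion within the window, started from the
        # known boundary metrics; all L+1 stages are stored.
        betas = np.empty((L + 1, M))
        betas[L] = boundary[w0 + L]
        for i in range(L - 1, -1, -1):
            betas[i] = np.max(gamma[w0 + i] + betas[i + 1], axis=1)
        # Steps 210-212: forward recursion, emitting a soft output per stage.
        for i in range(L):
            branch = alpha[:, None] + gamma[w0 + i] + betas[i + 1]
            # LLR: best branch into an odd (bit 1) state minus best into even.
            soft[w0 + i] = np.max(branch[:, 1::2]) - np.max(branch[:, 0::2])
            alpha = np.max(alpha[:, None] + gamma[w0 + i], axis=0)
    return soft
```

With all-zero branch metrics and a two-state trellis, the sketch returns zero LLRs except at the final stage, where the known end state forces the decision toward bit 0 (an LLR of minus infinity under max-log arithmetic).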
Once a window is completely decoded, the window can be "slid" forward a distance L, where the beginning of the new window starts at the end of the previous window so as to start at a previously determined known state. The above steps can then be repeated for the new window. This process continues until all of the windows in the block are processed. The present invention has the advantage that no special processing is needed for the last window in the block, unlike the prior art.
In a preferred embodiment, a further step is included wherein the backward recursion for the next window is performed at the same time that the forward recursion and output are occurring in the present window. In other words, the processing for the next window begins while the present window is being processed. In particular, the further step includes repeating the above steps, wherein the repeated selecting step includes selecting a next window starting at the end of the presently selected window, and wherein the repeated decoding and calculating steps for the next window occur one step out of sequence and concurrently with the processing of the present window. This additional step
<Desc/Clms Page number 15>
saves processing time, and no additional memory is required. More preferably, while the forward recursion for the first window is being performed, the stored memory is being cleared, and the backward recursion results for the next window can be stored, or circulated, into the cleared portions of the memory.
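The memory-circulation idea — the backward recursion for the next window writing into slots just freed by the forward recursion of the present window — can be sketched with a single shared buffer. Because the writer steps through stages in the opposite order to the reader, this sketch alternates the physical addressing direction on every window; that alternation, and all class and method names, are illustrative choices, not details from the patent.

```python
class AlternatingBuffer:
    """One L-slot buffer shared by consecutive windows: the backward pass for
    window w+1 writes its stages (in decreasing order) into exactly the slots
    that the forward pass of window w frees (in increasing order)."""
    def __init__(self, L):
        self.slots = [None] * L
        self.L = L

    def _slot(self, window, stage):
        # Even windows stored forward, odd windows reversed, so a writer going
        # stage L-1..0 lands on the slots a reader going 0..L-1 just freed.
        return stage if window % 2 == 0 else self.L - 1 - stage

    def fill(self, window, stage, value):       # backward pass stores a stage
        i = self._slot(window, stage)
        assert self.slots[i] is None, "slot still holds the present window"
        self.slots[i] = value

    def consume(self, window, stage):           # forward pass reads and frees
        i = self._slot(window, stage)
        value, self.slots[i] = self.slots[i], None
        return value
```

Only L stage-metric sets are ever held, rather than the 2L that two separate window buffers would require — the halving of memory noted above.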
Table 1 summarizes the memory, throughput, and computational requirements of the three prior art methods along with the preferred two-pass method of the present invention. However, it should be noted that the main benefit of the present invention is that an optimal solution is provided, unlike the prior art sliding window methods, and that much less memory is needed than the other optimal solution, the prior art BCJR method.
Table 1. Comparison of the four methods for backward recursion.

  Method                  Memory Needed      Throughput  Computational Complexity
  BCJR                    NM words           1/2         NM state updates
  Sliding Window          0 words            1/L         LNM state updates
  Viterbi Sliding Window  ML words           1/2         2NM state updates
  Present Invention       2*sqrt(N)*M words  1/2         2NM state updates

To illustrate the differences among the methods, Table 2 presents the results when typical values of sequence length (N = 5000), number of states (M = 8), and window size (L = 32) are used.
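The Table 2 entries follow directly from the Table 1 formulas, as a quick check confirms. Note that the present-invention memory figure, 2·sqrt(N)·M ≈ 1,132 words, corresponds to the memory-minimizing checkpoint spacing L = sqrt(N) rather than the L = 32 used in the other rows; that reading is an inference of this sketch, not stated in the tables themselves.

```python
import math

def table2(N=5000, M=8, L=32):
    """Evaluate the Table 1 formulas at N = 5000, M = 8, L = 32.
    Each entry is (memory in words, throughput, state updates)."""
    return {
        "BCJR":                   (N * M, "1/2", N * M),
        "Sliding Window":         (0, f"1/{L}", L * N * M),
        "Viterbi Sliding Window": (M * L, "1/2", 2 * N * M),
        # 2*sqrt(N)*M words: checkpoint spacing chosen as L = sqrt(N)
        "Present Invention":      (math.ceil(2 * math.sqrt(N) * M), "1/2", 2 * N * M),
    }
```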
Table 2. Comparison among the methods for typical values of N, M and L.

  Method                  Memory Needed  Throughput  Computational Complexity
  BCJR                    40,000 words   1/2         40,000 state updates
  Sliding Window          0 words        1/32        1,280,000 state updates
  Viterbi Sliding Window  256 words      1/2         80,000 state updates
  Present Invention       1,132 words    1/2         80,000 state updates
<Desc/Clms Page number 16>
As Table 2 shows, the memory requirements of the present invention are well within a reasonable range and more than an order of magnitude less than the BCJR method, while only requiring twice as many state updates (essentially two entire backward recursions are performed). In contrast, though the sliding window method may require less memory, it is not optimal. The window length L is independent of the constraint length and can be chosen to minimize memory utilization while maintaining optimal performance. In some cases, the memory required by the present invention is less than that required by the Viterbi Sliding Window method.
The present invention increases throughput and greatly reduces the memory required for a turbo decoder with only a small increase in complexity. For the turbo code within the 3GPP standard, the 40960 words of intermediate storage can be easily reduced to less than about 1500 words. In contrast, the prior art sliding window technique not only degrades performance but can require 10 to 15 times more computational complexity than the present invention.
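The 3GPP figures above can be reproduced with the same bookkeeping. The sketch assumes N = 5120 trellis stages and M = 8 states, since that product matches the 40960-word figure quoted; these parameter values are assumptions of the sketch, not stated in the text.

```python
import math

def two_pass_storage(N, M, L):
    """Beta storage for the two-pass method: N/L window-boundary checkpoints
    of M metrics each, plus one window of L stages of M metrics."""
    return math.ceil(N / L) * M + L * M

N, M = 5120, 8                    # assumed: N * M matches the quoted 40960 words
bcjr_words = N * M                # full-block intermediate storage (BCJR)
best_L = round(math.sqrt(N))      # spacing that minimizes (N/L)M + LM
two_pass_words = two_pass_storage(N, M, best_L)
```

With these assumptions the two-pass storage comes to 1152 words, comfortably under the "less than about 1500 words" claimed.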
While specific components and functions of the soft output decoder for convolutional codes are described above, fewer or additional functions could be employed by one skilled in the art within the broad scope of the present invention. The invention should be limited only by the appended claims.
<Desc/Clms Page number 17>

Claims (9)

CLAIMS

What is claimed is:

1. An optimal method of decoding a received convolutionally coded sequence of signals represented by a trellis of block length N, comprising the steps of:
   a) dividing the trellis into windows;
   b) selecting a window of length L of the trellis;
   c) decoding a portion of the trellis using backward recursion from a point N that is at the end of the block back to the end of the window, and storing the determined state metrics at the end of each window;
   d) decoding the portion of the trellis within the window using backward recursion from the known state at the end of the window defined in step c) back to the beginning of the window to define a set of known backward recursion state metrics within the window, and storing the set of known backward recursion state metrics in a memory;
   e) decoding the portion of the trellis within the window using forward recursion starting from a known state at the beginning of the window and moving forward; and
   f) calculating a soft output at each stage of the forward recursion using the forward recursion state metrics, the branch metrics, and the stored backward recursion state metrics, and outputting the soft output at each stage.
    <Desc/Clms Page number 18>
2. The method of claim 1, wherein the dividing step includes the length L being a multiple of a constraint length of the convolutional code.
3. The method of claim 1, wherein the dividing step includes the length L being independent of a constraint length of the convolutional code.
4. The method of claim 1, further comprising a step g) of repeating steps b) through f) until the entire block length N is decoded, wherein the repeated selecting step includes selecting a next window starting at the end of the presently selected window, and wherein the repeated steps b) through e) occur concurrently with present steps c) through f), respectively.
    <Desc/Clms Page number 19>
5. A radiotelephone with a receiver and demodulator for processing a convolutionally coded sequence of signals represented by a trellis of block length N divided into windows of length L in a frame buffer by a soft-decision output decoder, the decoder comprising:
   a memory;
   a backward recursion processor that decodes a portion of the trellis using backward recursion from a point N that is at the end of the block back to the end of the window to define state metrics at the end of each window, which are stored in the memory, the backward recursion processor subsequently decoding the portion of the trellis within the window using backward recursion from the known state at the end of the window back to the beginning of the window to define a set of known backward recursion state metrics within the window, which is stored in the memory;
   a forward recursion processor that decodes the portion of the trellis within the window using forward recursion starting from a known state at the beginning of the window and moving forward; and
   a decoder, coupled to the memory and the forward recursion processor, that calculates a soft output at each stage of the forward recursion using the forward recursion state metrics, the stored backward recursion state metrics in the memory, and the branch metrics at each stage, and outputs the soft output for that stage.
    <Desc/Clms Page number 20>
6. The radiotelephone of claim 5, wherein the length L is a multiple of a constraint length of the convolutional code.
7. The radiotelephone of claim 5, wherein the length L is independent of a constraint length of the convolutional code.
8. The radiotelephone of claim 5, wherein, when the forward recursion processor operates within the window, the backward recursion processor decodes the portion of the trellis within the next window using backward recursion from the known state at the end of the next window back to the beginning of the next window to define a set of known backward recursion state metrics within the next window, which is stored in the memory as the memory is cleared by the forward recursion processor, leaving the forward recursion processor available to begin decoding the next window immediately after the present window is processed.
9. The radiotelephone of claim 8, wherein the forward and backward recursion processors operate concurrently until all of the windows within the block are decoded.
GB0102720A 2000-02-10 2001-02-05 Soft output decoder for convolutional codes Expired - Fee Related GB2365289B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US50081900A 2000-02-10 2000-02-10

Publications (3)

Publication Number Publication Date
GB0102720D0 GB0102720D0 (en) 2001-03-21
GB2365289A true GB2365289A (en) 2002-02-13
GB2365289B GB2365289B (en) 2002-11-13

Family

ID=23991075

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0102720A Expired - Fee Related GB2365289B (en) 2000-02-10 2001-02-05 Soft output decoder for convolutional codes

Country Status (3)

Country Link
KR (1) KR100369422B1 (en)
CN (1) CN1136661C (en)
GB (1) GB2365289B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100703307B1 (en) * 2002-08-06 2007-04-03 삼성전자주식회사 Turbo decoding apparatus and method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1030457A2 (en) * 1999-02-18 2000-08-23 Interuniversitair Microelektronica Centrum Vzw Methods and system architectures for turbo decoding
EP1128560A1 (en) * 2000-02-21 2001-08-29 Motorola, Inc. Apparatus and method for performing SISO decoding


Also Published As

Publication number Publication date
GB0102720D0 (en) 2001-03-21
CN1308415A (en) 2001-08-15
KR20010082093A (en) 2001-08-29
GB2365289B (en) 2002-11-13
KR100369422B1 (en) 2003-01-30
CN1136661C (en) 2004-01-28

Similar Documents

Publication Publication Date Title
US6901117B1 (en) Soft output decoder for convolutional codes
US6829313B1 (en) Sliding window turbo decoder
US6452979B1 (en) Soft output decoder for convolutional codes
US6192501B1 (en) High data rate maximum a posteriori decoder for segmented trellis code words
US6510536B1 (en) Reduced-complexity max-log-APP decoders and related turbo decoders
EP1314254B1 (en) Iteration terminating for turbo decoder
US6393076B1 (en) Decoding of turbo codes using data scaling
Guivarch et al. Joint source-channel soft decoding of Huffman codes with turbo-codes
US6856657B1 (en) Soft output decoder for convolutional codes
US6812873B1 (en) Method for decoding data coded with an entropic code, corresponding decoding device and transmission system
US6868132B1 (en) Soft output decoder for convolutional codes
EP1471677A1 (en) Method of blindly detecting a transport format of an incident convolutional encoded signal, and corresponding convolutional code decoder
Thobaben et al. Robust decoding of variable-length encoded Markov sources using a three-dimensional trellis
US7249311B2 (en) Method end device for source decoding a variable-length soft-input codewords sequence
Lamy et al. Reduced complexity maximum a posteriori decoding of variable-length codes
KR19990081470A (en) Method of terminating iterative decoding of turbo decoder and its decoder
Ould-Cheikh-Mouhamedou et al. Enhanced Max-Log-APP and enhanced Log-APP decoding for DVB-RCS
US6857101B1 (en) Apparatus and method of storing reference vector of state metric
US7031406B1 (en) Information processing using a soft output Viterbi algorithm
GB2365289A (en) Optimal soft output decoder for convolutional codes
EP1587218B1 (en) Data receiving method and apparatus
US20030101407A1 (en) Selectable complexity turbo coding system
WO2002021784A1 (en) Soft-output error-trellis decoder for convolutional codes
US20030101403A1 (en) Turbo decoder and its calculation methods having a state metric
KR100317377B1 (en) Encoding and decoding apparatus for modulation and demodulation system

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20080205