SOFT-OUTPUT ERROR-TRELLIS DECODER FOR CONVOLUTIONAL CODES
Technical Field
The present invention relates generally to communication systems, and more particularly to a soft-output decoder for use in a receiver of a convolutional code communication system.
Background Art
Convolutional codes are often used in digital communication systems (e.g., the direct-sequence code division multiple access (DS-CDMA) IS-95, IS-136, and Global System for Mobile Communications (GSM) time division multiple access (TDMA) standards) to protect transmitted information. At the transmitter, an outgoing code vector may be described using a trellis diagram whose complexity is determined by the constraint length of the encoder. Although computational complexity increases with increasing constraint length, the robustness of the coding also increases with constraint length.
At the receiver, a soft-decision decoder, such as a Viterbi decoder, uses a trellis structure to perform an optimal search for the maximum likelihood transmitted code vector. More recently, turbo codes have been developed that outperform conventional coding techniques. Turbo codes are generally composed of two or more convolutional codes and turbo interleavers. Turbo decoding is iterative using a soft output decoder to decode the constituent convolutional codes.
The soft output decoder provides a reliability measure on each information bit, which helps the soft output decoder decode the other convolutional codes. The soft output decoder computes an approximated probability for each information bit in a received signal. The soft output decoder is usually a MAP (maximum a posteriori) decoder, which uses both backward and forward decoding to determine the soft output. However, because of memory, processing, and numerical tradeoffs, MAP decoding is usually limited to a sub-optimal approximation. All of these variants require both forward and backward decoding over the block.
For future standards, such as the 3GPP (third generation partnership project for wireless systems), an 8-state turbo code with a block length of N=5120 will need 40960 words of intermediate storage, which may be unacceptable for practical implementation. Future systems employing larger frames and a greater number of states will require even more memory. By comparison, a Viterbi decoder that does not produce soft outputs for an N=5120, 8-state trellis requires less than 100 words of intermediate storage.
There is a need for a soft output decoder that reduces overall memory and processing requirements for decoding convolutional codes without the limitations imposed by prior art turbo and MAP decoders.
Brief Description of Drawings
FIG. 1 shows a block diagram of a communication system with a receiver having a soft-output decoder.
FIG. 2 shows a block diagram of the trellis composer and optimum state detector shown in FIG. 1.
FIG. 3 shows a more detailed flow chart of the operation of the soft output decoder including the syndrome calculator, syndrome modifier, and optimum state detector shown in FIG. 2.
Disclosure of the Invention
A soft output error trellis decoder 170 (FIG. 1) computes an approximated maximum a posteriori probability on each information bit in a demodulated received signal. A trellis composer and optimum state detector composes a window error trellis from a window syndrome and identifies an optimum state from the window error trellis. A soft output decoder performs decoding using optimum states identified by the optimum state detector. The soft output decoder generates a soft output sequence to be used in iterative decoding of concatenated codes.
Efficient soft-output decoding of convolutional codes uses a soft-output decoder based on a data-dependent error-trellis. The soft-output decoder includes two error-trellis decoders for decoding the sequence of signals received over the channel. Forward and backward recursions are conducted through a window of length (w-m) trellis stages, where w is fixed and m depends on the position of an optimum state in the window. Such a state lies, with high probability, along the optimum path through the error trellis. A syndrome based error trellis is constructed and used to identify an optimum state. The identified optimum state serves as the starting point for backward recursion and as the terminal state for forward recursion for a turbo decoder decoding windows of the syndrome based error trellis. The obtained soft-output metric may then be employed for iterative decoding of concatenated codes, also referred to as turbo codes.
The syndrome based error trellis is thus employed to identify optimum states without the need for either first processing the entire error trellis block (as required for a conventional trellis that generates soft outputs only after performing a backward recursion over the full length of a trellis block) or use of a trellis decoder training period prior to decoding each window in the trellis (as required for sliding window techniques that decode windows in a trellis block only after
decoding in a learning period beyond the window). The performance of the improved decoder is substantially the same as that of a decoder using full trellis block decoding, without the memory requirements of a full trellis block decoder or the computational complexity of prior art sliding window trellis decoders. This improved soft output MAP decoder provides an accurate approximation of the a posteriori probability for each information bit in a received signal.
FIG. 1 shows a block diagram of a communication system 100 with a receiver 130 having a soft output error trellis decoder 170. Receiver 130 is shown as part of a cellular radiotelephone subscriber unit 101; however, the receiver may alternately be part of a digital telephone, a facsimile machine, a modulator-demodulator (MODEM), a two-way radio, a base station, or any other communication device that receives convolutionally encoded signals. In the subscriber unit 101, a microphone 105 picks up audio signals that are modulated by a transmitter 110 and broadcast by an antenna 120 after passing through a duplexer 125. The antenna 120 also receives radio frequency (RF) signals from a complementary transmitter in a transceiver 199, which may for example be a cellular base station. A radio frequency (RF) front end 140 steps down the RF signal received through duplexer 125 to an analog baseband signal. An analog-to-digital (A/D) converter 146 converts the analog baseband signal to a digital signal. Those skilled in the art will recognize that the A/D converter 146 may not be required where the baseband signal is digital. Digital demodulator 150 processes the digital signal to produce a demodulated received signal vector r. The demodulated received signal vector r is connected to the soft output error trellis decoder 170. The soft output error trellis decoder generates an error trellis, identifies an optimum state within a relatively small number of sections of the full trellis block, and generates soft output data from the error trellis using the optimum state detected. The soft output decoder could be incorporated in the framework of iterative decoding of concatenated codes. At the decoder 170, a digital-to-analog (D/A) converter 180 converts the decoded signal to the analog domain.
Those skilled in the art will recognize that the decoder can be an approximated a posteriori decoder, and that the D/A converter 180 can be omitted where the baseband signal, and thus the output of the receiver 130, is digital. In the illustrated embodiment, an audio amplifier 185 uses operational amplifiers to increase the gain of the recovered signal for reproduction through audio speaker 190. The decoder 170 may be implemented using any suitable circuitry, such as one or more digital signal processors (DSPs), microprocessors, microcontrollers, or the like, and associated circuitry. It is envisioned that the decoder may include a trellis composer, an optimum state detector and a soft output decoder embodied in a single DSP. The soft output decoder may decode the trellis constructed by the trellis composer to take advantage of the simplified error trellis produced, or the soft output decoder may decode received data independently of the error trellis, using the optimum state from the optimum state detector, in which case the error trellis composed will be used only to identify the optimum state.
FIG. 2 shows a block diagram of the soft output iterative decoder 170 of FIG. 1 in greater detail. A demodulated received signal vector r from demodulator 150 enters the symbol-by-symbol detector 210, which produces a hard-decision vector v. The symbol-by-symbol detector merely examines the incoming signal and converts it to the closest valid (e.g., binary) symbol without regard for the values of the surrounding symbols. The output hard-decision vector v from symbol-by-symbol detector 210 is not necessarily a valid code vector. A syndrome calculator 214 multiplies the hard-decision vector v by the scalar parity check matrix H to produce a syndrome vector S. In the situation where syndrome vector S = 0, the hard-decision vector v is the maximum likelihood estimated transmitted code vector ĉ, it is guaranteed that the hard-decision vector v is identical to the output of a soft-decision decoder such as a Viterbi decoder, and no additional processing is required.
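The syndrome computation performed by syndrome calculator 214 is an ordinary matrix-vector product over GF(2). The sketch below illustrates it in Python; the (7,4) Hamming parity check matrix is a purely hypothetical stand-in, since a convolutional code would use a banded scalar check matrix H instead:

```python
def syndrome(H, v):
    """S = H * v^T over GF(2): each syndrome bit is the parity of the
    received bits selected by one row of H."""
    return [sum(h * b for h, b in zip(row, v)) % 2 for row in H]

# Hypothetical (7,4) Hamming check matrix, used only to illustrate the
# zero/nonzero syndrome distinction described in the text.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

print(syndrome(H, [0, 0, 0, 0, 0, 0, 0]))  # [0, 0, 0]: valid codeword
print(syndrome(H, [0, 0, 1, 0, 0, 0, 0]))  # [1, 1, 0]: error detected
```

A zero syndrome identifies v as a codeword, so no further decoding is needed; any nonzero syndrome requires further processing.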
If syndrome vector S ≠ 0, which is the typical case, the syndrome modifier 216 performs a search through the computed syndrome vector to detect predetermined syndrome patterns stored in syndrome pattern memory 218, which correspond to specific error patterns e_p, such as single-error syndrome patterns or double-error syndrome patterns, in the received hard-decision vector v. If a syndrome pattern p is discovered in the syndrome vector, syndrome modifier 216 simplifies the syndrome vector by subtracting the syndrome pattern p and records the found error pattern e_p in an estimated error vector e according to the search results. The modified syndrome vector S' is simplified in the sense that it has more zeros than the original syndrome vector S. If S' = 0, then the most likely error vector is the estimated error vector e, and the maximum likelihood estimated transmitted code vector is ĉ = v - e. If the modified syndrome vector S' = H(v - e) ≠ 0 after subtracting a found syndrome pattern p, the syndrome modifier 216 continues to search for syndrome patterns until all of the syndrome patterns in the syndrome pattern memory 218 have been compared with the modified syndrome vector S' or until the modified syndrome vector S' = 0.
After all of the syndrome patterns in the syndrome pattern memory 218 have been compared to the modified syndrome vector S', the modified syndrome vector S' is sent to a syndrome-based decoder 220 for use in constructing a simplified error trellis. The search for the most likely error vector e that produces the syndrome vector S can be described as a search through an error trellis diagram for the path with the minimum accumulated weight. Meir Ariel and Jakov Snyders provide an in-depth discussion and detailed examples of how to construct and decode an error trellis in an article entitled "Soft Syndrome Decoding of Binary Convolutional Codes," published in Vol. 43 of the IEEE Transactions on Communications, pages 288-297 (1995), the disclosure of which is incorporated herein by
reference thereto. An error trellis that describes the same coset of the convolutional code to which the symbol-by-symbol vector v belongs is used to search for the most likely transmitted data sequence. The shape of the error trellis depends only on the value of the computed syndrome vector, and a modified syndrome vector produces a simplified error trellis. Once the error trellis is constructed, an accelerated search through a simplified error trellis is possible due to its irregular structure. Under low bit-error rate conditions, the error trellis can be substantially simplified without affecting the optimum output of the decoding procedure. In other words, the syndrome modifier 216 simplifies the syndrome vector to promote long strings of zeros, which allows the construction and processing of an error trellis by the syndrome-based decoder 220 to be more efficient.
Once the error trellis is constructed, an optimum state is detected by optimum state detector 222 through a search of the syndrome based error trellis. An optimum state is one that lies on the least-weight error path through the error trellis. A backward search is performed to detect the optimum state. Once the optimum state is identified, the identified optimum state is provided at output 207 for use by soft output decoder 204. Soft output decoder 204 uses the optimum state as the starting state for backward recursion.
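The convergence test behind optimum state detection can be sketched as follows, assuming a Viterbi-style survivor memory in which each state at each stage records its surviving predecessor; the data layout and function name are illustrative assumptions, not taken from the patent:

```python
def find_optimum_state(pred, k, s):
    """Return the single state at stage k - s to which every surviving
    path alive at stage k traces back, or None if the survivors have
    not yet merged.  pred[t][state] gives the surviving predecessor of
    `state` at stage t (hypothetical survivor-memory layout)."""
    ancestors = set()
    for state in pred[k]:
        cur = state
        for t in range(k, k - s, -1):  # walk back s stages
            cur = pred[t][cur]
        ancestors.add(cur)
    return ancestors.pop() if len(ancestors) == 1 else None

# Tiny 2-state example: by stage 3 both survivors trace back to state 0.
pred = [{}, {0: 0, 1: 0}, {0: 0, 1: 1}, {0: 1, 1: 0}]
print(find_optimum_state(pred, 3, 3))  # 0 -> optimum state found
print(find_optimum_state(pred, 3, 1))  # None -> survivors not yet merged
```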
A flow chart describing operation of the soft output error trellis decoder 170 is depicted in FIG. 3. As described briefly above, the soft-output decoder employs a variable-length window for backward and forward recursion. State metrics are saved during the backward recursion, and soft bit metrics are output during the forward recursion. A key component of the decoder is a searcher for an optimum state. Such a state is defined as having a high probability of lying along the optimum path through the error trellis. The optimum path is the most likely path through the error trellis. The construction and properties of error trellises can be found in "Error Trellises for Convolutional Codes - Part I: Construction", by M. Ariel and J. Snyders, IEEE Transactions on Communications, Vol. COM-46, No. 12, pp. 1592-1612, Dec. 1998, and an in-depth discussion of optimum states and their identification can be found in "Error Trellises for Convolutional Codes - Part II: Decoding Methods", IEEE Transactions on Communications, Vol. COM-47, No. 7, pp. 1015-1024, July 1999, the disclosures of both of which articles are incorporated herein by reference thereto. The backward recursion begins once the rightmost optimum state in the window is identified, and no learning period is required to increase the reliability of the state metrics.
More particularly, a window W, which is a sequence of w symbols, wherein w is a positive integer preferably less than the full trellis block length, is processed to produce soft output metrics {Λ(d_k)} at the output of the decoder 170. For example, it is envisioned that w can be less than 100 sections, such as 36 sections, whereas the full trellis block will be thousands of sections long.
The soft output metrics are computed sequentially starting with Λ(d_0). Initially, m is set to 0 in step 300. The variable m represents an index of the number of trellis locations (or stages) considered, and will be used to locate the optimum state that is the final stage of the variable length window. The w soft input (a priori LLR) symbols are received in step 301, and the symbol detector 210 outputs a symbol-by-symbol hard-decision vector v_w corresponding thereto in step 302. A syndrome S_w related to the received vector v_w is calculated by a window syndrome calculator in step 303. The syndrome is calculated as S_w = H_w · v_w^T, where H_w is the scalar parity check matrix associated with the window W. Based on the syndrome S_w, preprocessing is performed in steps 304-307, the purpose of which is to construct a degenerated error trellis. In such a trellis, identifying optimum states is immediate. The preprocessing is composed of the following steps:
1) the syndrome S_w is searched backwards for a match with a syndrome pattern S_p in step 304 (this operation is a pattern-matching process);
2) if the patterns match, and the accumulated metric λ_p of the corresponding error pattern is below some predetermined threshold T, as determined in step 306, then the syndrome S_w is modified by S_w ← S_w + S_p in step 305; and
3) the syndrome search is repeated until all of the syndrome patterns have been considered, as determined in step 307.
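Under the assumption that syndrome patterns are stored as short bit strings together with their accumulated metrics, the three preprocessing steps above can be sketched as:

```python
def modify_syndrome(S, patterns, threshold):
    """Sketch of steps 304-307: scan the window syndrome backwards for
    each stored syndrome pattern; where a pattern matches and its
    accumulated error-pattern metric is below the threshold T, add the
    pattern into S (addition and subtraction coincide over GF(2)),
    clearing syndrome bits and promoting long strings of zeros.

    S        : window syndrome, a list of 0/1 bits
    patterns : list of (bit-string, metric) pairs -- a hypothetical
               stand-in for the syndrome pattern memory 218
    """
    S = list(S)
    for pattern, metric in patterns:
        if metric >= threshold:                 # step 306: not probable enough
            continue
        n = len(pattern)
        for pos in range(len(S) - n, -1, -1):   # backward search, step 304
            if S[pos:pos + n] == pattern:
                for i in range(n):              # step 305: S <- S + S_p
                    S[pos + i] ^= pattern[i]
    return S

print(modify_syndrome([0, 1, 1, 0, 1], [([1, 1], 0.5)], 1.0))  # [0, 0, 0, 0, 1]
```

Matching a pattern by exact bit equality is a simplification; a real implementation would align candidate patterns with the module structure of the error trellis.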
The threshold T is a predetermined value representing a probability level. The metric of an error pattern is proportional to the probability of that pattern. The threshold T is set to ensure that only highly probable events are considered, and will depend on the application for the trellis decoder. The accumulated metric is the sum of the metrics of the bits composing the error pattern. Additionally, a detailed description of syndrome based decoding can be found in United States Patent No. 5,936,972, entitled "Syndrome Based Channel Quality or Message Structure Determiner", issued to Meidan et al. on August 10, 1999, and United States Patent No. 6,009,552, entitled "Soft-Decision Syndrome Based Decoder for Convolutional Codes", issued to Ariel et al. on December 28, 1999, the disclosures of both of which are incorporated herein by reference thereto.
A window error trellis is then composed based upon the modified syndrome S_w in step 308. Starting from the last stage of the obtained error trellis, a backward search is made through the error trellis to identify an optimum state, as indicated in step 309. The optimum state is the first state encountered during the backward search through the error trellis that meets the following criterion: all surviving paths leading to all states at some time index k are traced back to this single state at time index k-s, where s is a positive integer. An in-depth discussion of optimum states and their identification can be found in "Error Trellises for Convolutional Codes - Part II: Decoding Methods", incorporated hereinabove by reference. If such an optimum state is identified after searching backwards through m stages of the error trellis, as determined in step
310 (m being incremented until it is equal to w or an optimum state is detected in steps 309, 310 and 311), the optimum state is provided to soft output decoder 204 (in steps 312 and 314), and s_(w-m) is a stage with reliable state metrics. The soft output decoder performs backward recursion decoding over the w-m stages of the error trellis, and the backward recursion state metrics generated are stored in memory (for example, a memory, not shown, that is associated with the soft output decoder 204), as indicated in step 312. Next, forward recursion decoding is performed over the window comprising the w-m stages of the error trellis, and forward recursion soft metrics Λ(d_k), for k = 0, 1, 2, ..., w-m-1, are output as indicated in step 314. The window is then shifted w-m stages to the right in step 315, such that it is located between stages s_(w-m) and s_(2w-m), if it is determined in step 316 that the entire trellis block has not yet been decoded.
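The overall control flow of steps 300-316 (ignoring the window-growing branch of FIG. 3) can be sketched with the trellis operations abstracted behind two callables; both are hypothetical stand-ins, and the sketch assumes the backward search always locates an optimum state within the window:

```python
def windowed_decode(block_len, w, find_opt, decode_window):
    """find_opt(start, length) -> m, the number of stages searched
    backwards before an optimum state is met (steps 309-311);
    decode_window(start, length) -> soft outputs from the backward and
    forward recursions over that window (steps 312 and 314)."""
    outputs, start = [], 0
    while start < block_len:
        length = min(w, block_len - start)
        m = find_opt(start, length)
        span = length - m              # decode only w - m stages
        if span <= 0:                  # tail shorter than m: take it all
            span = length
        outputs += decode_window(start, span)
        start += span                  # slide right by w - m, step 315
    return outputs

# Toy run: 10-stage block, w = 4, optimum state always 1 stage from the end.
out = windowed_decode(10, 4, lambda s, l: 1, lambda s, l: list(range(s, s + l)))
print(out)  # [0, 1, 2, ..., 9]: every stage decoded exactly once
```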
If, however, no optimum state is identified within the w stages, as determined in steps 310 and 311, then the window size is increased by a predetermined amount Δ in step 313. The process described above for the w stages is repeated, beginning from step 301, for window size w+Δ. The variable m is reset to 0 in step 316. In this manner, the window length is initiated at a desired length w but can be increased dynamically to look for an optimum state without having to decode and store state metrics for the entire block.
Alternatively, if a maximum sequence length is reached, such that the sequence length w cannot be increased (e.g., because of memory size limitations), then the optimality constraint can be relaxed and the following method applied: partition the sequence of length w into two parts, w1 and w2, where w1 > 0. w1 and w2 may be of equal or different lengths. A sub-optimum state within w2 is identified. The sub-optimum state used for backward recursion can be selected by some sub-optimal criterion, such as: the state at time k-s that is connected to the maximum number of surviving paths leading to states at some time index k, where s is a positive integer. Alternatively, the sub-optimum state can be selected at random. Backward and forward recursions are performed through w1 and the portion of w2 to the left of the sub-optimum state. The maximum sequence length might be 100 sections or more, for example.
The soft output decoder 204 uses the information generated by the trellis composer and optimum state detector 202 to generate the decoded sequence. Those skilled in the art will recognize that any suitable decoder could be used for decoder 204, but soft output decoders are preferred. Most preferably, a soft input-soft output (SISO) decoder is used, such as a SISO decoder for turbo codes that is based on the MAP algorithm described in a paper by L.R. Bahl, J. Cocke, F. Jelinek, and J. Raviv entitled "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate", IEEE Transactions on Information Theory, Vol. IT-20, March 1974, pp. 284-287 (the "BCJR algorithm" or "BCJR method"). As will also be recognized by those skilled in the art, turbo coders are constructed with interleavers and constituent codes, which are usually systematic
convolutional codes, but can alternately be block codes. MAP algorithms not only minimize the probability of error for an information bit given the received sequence, they also provide the probability that the information bit is either a 1 or 0 given the received sequence.
The BCJR algorithm provides a soft output decision for each bit position (stage), wherein the influences of the soft inputs within the block are broken into contributions from the past (earlier soft inputs), the present soft input, and the future (later soft inputs). This decoder algorithm requires a forward and a backward generalized Viterbi recursion on the trellis to arrive at an optimum soft output for each trellis location, or stage. These a posteriori probabilities, or more commonly the log-likelihood ratio (LLR) of the probabilities, are passed between SISO decoding steps in iterative turbo decoding. The LLR for information bit u_t is

Λ(u_t) = log[Σ_(n,m)∈B¹ α_(t-1)(n) γ_t(n,m) β_t(m)] - log[Σ_(n,m)∈B⁰ α_(t-1)(n) γ_t(n,m) β_t(m)]    (1)
for all bits in the decoded sequence (t = 1 to N). In equation (1), the probability that the decoded bit is equal to 1 (or 0) in the trellis given the received sequence is composed of a product of terms due to the Markov property of the code. The Markov property states that the past and the future are independent given the present. The present, γ_t(n,m), is the probability of being in state m at time t and generating the symbol y_t when the previous state at time t-1 was n; the present operates as a branch metric. The past, α_t(m), is the probability of being in state m at time t with the received sequence {y_1, ..., y_t}, and the future, β_t(m), is the probability of generating the received sequence {y_(t+1), ..., y_N} from state m at time t. The probability α_t(m) can be expressed as a function of α_(t-1)(n) and γ_t(n,m) and is called the forward recursion:

α_t(m) = Σ_(n=0..M-1) α_(t-1)(n) γ_t(n,m),  m = 0, ..., M-1,    (2)

where M is the number of states. The reverse or backward recursion for computing the probability β_t(n) from β_(t+1)(m) and γ_(t+1)(n,m) is:

β_t(n) = Σ_(m=0..M-1) β_(t+1)(m) γ_(t+1)(n,m),  n = 0, ..., M-1.    (3)

The overall a posteriori probabilities in equation (1) are computed by summing over the branches in the trellis B¹ (B⁰) that correspond to u_t = 1 (or 0).
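Equations (1)-(3) can be exercised directly in a small sketch. The branch metrics are assumed to be supplied pre-split by information bit (gamma0 and gamma1 for u_t = 0 and u_t = 1); this is the plain probability-domain recursion, so it is numerically safe only for toy trellises, and it is the generic BCJR computation rather than the windowed error-trellis decoder of the invention:

```python
import math

def bcjr_llrs(gamma0, gamma1, M):
    """gamma0[t][n][m] / gamma1[t][n][m] give gamma_t(n,m) restricted to
    branches carrying information bit 0 / 1, for stages t = 1..N
    (0-indexed here).  Returns the LLR of equation (1) for each bit."""
    N = len(gamma0)
    gamma = [[[g0[n][m] + g1[n][m] for m in range(M)] for n in range(M)]
             for g0, g1 in zip(gamma0, gamma1)]
    # forward recursion, equation (2)
    alpha = [[1.0 / M] * M]                      # uniform initial state
    for t in range(N):
        alpha.append([sum(alpha[t][n] * gamma[t][n][m] for n in range(M))
                      for m in range(M)])
    # backward recursion, equation (3)
    beta = [None] * (N + 1)
    beta[N] = [1.0] * M
    for t in range(N - 1, -1, -1):
        beta[t] = [sum(beta[t + 1][m] * gamma[t][n][m] for m in range(M))
                   for n in range(M)]
    # equation (1): log-ratio of bit-1 to bit-0 branch sums
    llrs = []
    for t in range(N):
        p1 = sum(alpha[t][n] * gamma1[t][n][m] * beta[t + 1][m]
                 for n in range(M) for m in range(M))
        p0 = sum(alpha[t][n] * gamma0[t][n][m] * beta[t + 1][m]
                 for n in range(M) for m in range(M))
        llrs.append(math.log(p1 / p0))
    return llrs
```

On a degenerate one-state trellis the LLR reduces to the log-ratio of the per-bit branch metrics, which makes a convenient sanity check.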
Maximum a posteriori (MAP) type decoders (MAP, log-MAP, max-log-MAP, constant- log-MAP, etc.) utilize forward and backward generalized Viterbi recursions on the trellis in order to provide soft outputs, as is known in the art. The MAP decoder minimizes the decoded bit error
probability for each information bit based on all received bits. However, typical prior art MAP decoders require a large memory to support full block decoding.
In summary, the soft output decoder 204 avoids memory problems associated with prior MAP type decoders by decoding windows of the error trellis rather than the entire block. The windows each have a length of w-m, wherein m is dynamically determined based on the detection of an optimum state through the error trellis as described hereinabove. Backward recursion is initiated from the optimum state detected for each window. Forward recursion starts from the known final state of the previous window. When a window is decoded, the decoder slides to the next adjacent window, the length of which is dynamically determined. The syndrome modifier in conjunction with a syndrome-based decoder simplifies the construction of a degenerated error trellis, which is used to identify an optimum state within a window. An optimum state may thus be identified without decoding the entire block length N of the trellis decoder. The optimum state so detected can then be used as the starting state for backward recursion within a window without the need for a soft output decoder training period. The soft outputs can be generated during the forward recursion. Additionally, a dynamically assigned soft-output decoder window is provided based upon the location of an optimum state within a sub-block of input data. The optimum state closest to the end of the sub-block is selected as the starting point for backward recursion and the ending point for forward recursion to allow the largest possible soft-output decoder window size within the input sequence of length w. Those skilled in the art will recognize that the optimum state identified can be used with any decoder and it is envisioned that the optimum state detected could be provided to a soft output decoder which is not processing the error trellis generated by the syndrome detector. 
While specific components and functions of the soft-decision decoder for convolutional codes are described above, fewer or additional functions could be employed by one skilled in the art within the true spirit and scope of the present invention. The invention should be limited only by the appended claims.
We claim: