WO2004019498A1 - Convolutional decoder and method for decoding demodulated values - Google Patents


Info

Publication number
WO2004019498A1
Authority
WO
WIPO (PCT)
Prior art keywords
state
transition
decision information
metric
final
Prior art date
Application number
PCT/EP2002/008854
Other languages
French (fr)
Inventor
Peter Jentsch
Gian Huaman-Bollo
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to AU2002340809A priority Critical patent/AU2002340809A1/en
Priority to PCT/EP2002/008854 priority patent/WO2004019498A1/en
Publication of WO2004019498A1 publication Critical patent/WO2004019498A1/en


Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M 13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M 13/63 Joint error correction and other techniques
    • H03M 13/6325 Error control coding in combination with demodulation
    • H03M 13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M 13/23 Error detection or forward error correction by redundancy in data representation using convolutional codes, e.g. unit memory codes
    • H03M 13/37 Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M 13/39 Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M 13/3988 Sequence estimation for rate k/n convolutional codes, with k>1, obtained by convolutional encoders with k inputs and n outputs
    • H03M 13/41 Sequence estimation using the Viterbi algorithm or Viterbi processors
    • H03M 13/4123 Viterbi decoding implementing the return to a predetermined state
    • H03M 13/413 Tail biting Viterbi decoding
    • H03M 13/4161 Viterbi decoding implementing path management
    • H03M 13/4169 Viterbi decoding implementing path management using traceback
    • H03M 13/65 Purpose and implementation aspects
    • H03M 13/6502 Reduction of hardware complexity or efficient processing

Abstract

In a convolutional decoder according to the invention, sets of demodulated values are successively processed until a predetermined final state, resulting from tail bits appended in the transmitter, is reached. Once all sets of demodulated values including those associated with coded tail bits have been processed, a trace back of a trellis memory is carried out. Thus, the decoder first steps through all sets of demodulated values associated with a data bit sequence (burst) including tail bits until the final state is reached and then a trace back means traces back the entries in the trellis memory. A fast and reliable convolutional decoding specifically adapted to a burst mode transmission is thus possible. For calculating state transition metrics, a metric calculator calculates a soft error as a bit distance between the demodulated (soft) values and the respective expected bit values (most reliable zero or one). The state metric for the final state corresponding to the accumulated soft errors for the most likely path to the final state is output as an error measure. Thus, data bits transmitted in burst mode can be decoded at high speeds, low complexity and also for different bit rates.

Description

CONVOLUTIONAL DECODER AND METHOD FOR DECODING DEMODULATED VALUES
Field of the invention
The present invention relates to a receiver in a digital telecommunication system. More particularly, the invention relates to a decoder and a method for decoding demodulated values originating from a sequence of data bits including a certain number of tail bits, wherein said sequence has been convolutionally encoded and transmitted in a burst mode operation. Specifically, the decoder is a Viterbi decoder used for example in base stations of a mobile radio communication system, in which data transmission is carried out for a great plurality of user channels in a burst mode.
Background of the invention/Prior art
In digital telecommunication systems, convolutional encoding and decoding is a standard forward error correction (FEC) technique used in order to perform channel encoding/decoding. One channel decoding method is generally known as Viterbi decoding. For various reasons, such convolutional encoding and Viterbi decoding is widely used in mobile radio communication systems such as second generation GSM (Global System for Mobile communication) and third generation UMTS (Universal Mobile Telecommunication System) systems. From these examples, it also follows that convolutional encoding and Viterbi decoding can be applied independently of the modulation scheme specified in the respective standard.
In a convolutional encoder, each set of L ≥ 1 data bits (from a sequence of data bits to be transmitted) is encoded into a (larger) set of N > L coded bits, wherein r = L/N is commonly referred to as the coding rate. For example, for a coding rate of r = 1/2, each single (L=1) data bit is encoded into a set of two (N=2) coded bits. As the skilled person will readily appreciate, such convolutional encoding is typically applied to a sequence of data bits comprising a number of user bits as well as a certain number of appended tail bits whose function will be explained below. Let n denote the total number of bits in the sequence of (uncoded) data bits.
During the encoding of successive sets of L data bits, the convolutional encoder is switched between internal encoder states (depending on the encoder polynomial used). This switching determines the so-called state transition diagram. In order to reconstruct the data at the receiving side, each set of N demodulated values is decoded, on the basis of the encoder polynomial and state transition probabilities, into a set of L decoded bits (corresponding to a set of L data bits), thereby determining whether the L uncoded data bits had a value of zero or one. That is, the decoder knows which transitions between states are possible in principle and must decide on the basis of the demodulated values which state transitions are most likely to have occurred in the encoder.
For conciseness reasons, the subsequent description is based on the assumption of L=1, although other values are possible, of course. In addition, it is assumed that the data bits, the sets of N coded bits, the sets of N demodulated values, and the decoded bits all have the same rate, which will be referred to as the bit rate in the following. Therefore, in a single bit period, a single data bit is encoded into a single set of N coded bits, and likewise, a single set of N demodulated values is decoded into a single decoded bit. An exemplary convolutional encoder with a coding rate of r=L/N=1/2 and a constraint length of K=3 is described with respect to Figures 1 to 3. Fig. 1 shows the convolutional encoder itself, while Figures 2 and 3 present the corresponding state transition and trellis diagrams, respectively.
From Fig. 1, it can be seen that each uncoded (input) data bit D is encoded into N=2 coded (output) bits C0, C1 (called a set of N coded bits herein), and with the input of each data bit D with a value from {0,1}, another set of N=2 coded bits C0, C1 is output. By the logic interconnections of the inputs and outputs of the K-1=2 registers R and the N=2 XOR gates XOR1, XOR2, a specific encoder polynomial is realized. Let r0 and r1 denote the contents of the registers R. Then, r0 and r1 together indicate the internal state of the encoder. In this example, there are 2^(K-1) = 2^2 = 4 internal states, briefly denoted "00", "01", "10", and "11", wherein the first value of each state corresponds to r0, while the second value is associated with r1. Due to the specific interconnections, only specific transitions between these internal states are possible when a data bit sequence is input. These possible transitions are indicated in the state transition diagram of Fig. 2. Herein, the four internal states are indicated in circles (notation: r0,r1) and the transitions are marked with solid and dashed arrows for input data bit values of D=1 and D=0, respectively. On each arrow, the value of the resulting set of N=2 coded bits is provided (notation: C0,C1). As the skilled person will readily appreciate, the initial internal state depends on the contents of the registers R before data bits are input, while the final internal state depends on the values of the K-1=2 final bits of the data bit sequence, i.e. on a number of data bits equal to the number of registers.
In order to make sure that the convolutional encoder assumes a predetermined state when the last (n-th) bit of the data bit sequence has been input, a predetermined number of tail bits (for example, zeros) is appended to the payload/user bits. The number of tail bits required to bring the convolutional encoder into the predetermined state is equal to (or greater than) K-1. For example, in Figures 1 and 2, at least K-1=2 tail bits must be applied to ensure that the encoder assumes a predetermined state independent of the value of the last payload/user bit. Thus, before the payload/user bits of another user channel are input to the encoder, each data bit sequence is ended with a predetermined number of tail bits.
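To make the tail bit mechanism concrete, the following minimal Python sketch models a rate r=1/2, K=3 encoder of the kind shown in Fig. 1. The generator taps assumed here are consistent with the transitions described below for Figures 2 and 3, but they are an assumption and not necessarily the exact polynomial of the patent figure.

```python
# Minimal sketch of a rate r=1/2, K=3 convolutional encoder in the spirit of
# Fig. 1. The assumed taps (c0 = D ^ r0 ^ r1, c1 = D ^ r1) reproduce the
# transitions described for Figures 2 and 3; the actual figure may differ.

def conv_encode(user_bits, k=3):
    r0, r1 = 0, 0                                  # register contents, reset state "00"
    data_bits = list(user_bits) + [0] * (k - 1)    # append K-1 zero tail bits
    coded = []
    for d in data_bits:
        c0 = d ^ r0 ^ r1                           # XOR1 (assumed taps)
        c1 = d ^ r1                                # XOR2 (assumed taps)
        coded.append((c0, c1))                     # one set of N=2 coded bits per data bit
        r0, r1 = d, r0                             # shift: d enters, old r1 "falls off"
    return coded                                   # n = len(user_bits) + K-1 sets

# After the tail bits, the register is guaranteed to be back in state "00".
print(conv_encode([1, 0, 1, 1]))
```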
Fig. 3 shows a temporal representation, also referred to as a trellis, of the state transition diagram of Fig. 2. Herein, the data bit sequence is assumed to comprise a total of n bits including tail bits. On the horizontal axis, time t is displayed in the form of t0+i*tD, wherein t0 and tD denote a time offset and the bit period, respectively, whereas i represents the data bit index. On the vertical axis, the internal states "00", "01", "10" and "11" are marked. On the assumption that the encoder is in the "00" state at time t0 as well as at time t0+n*tD, Fig. 3 shows which states can be assumed in between as a consequence of an encoding of either a zero (D=0, dashed arrows) or a one (D=1, solid arrows) data bit D, wherein the sets of coded bits (C0,C1) are marked on the arrows. From time t0+2*tD onwards, i.e. after K-1=2 bit periods, all states can be assumed, so that the trellis as shown between t0+2*tD and t0+3*tD will repeat itself in subsequent bit periods up to time t0+(n-2)*tD. After time t0+(n-2)*tD, K-1=2 tail bits D=0 force the trellis into its original state "00" at time t0+n*tD. As the skilled person will readily appreciate, any state other than "00" could also serve as initial and/or final state.
Typically, in a known Viterbi decoder, the determination as to which data bit has been encoded is not performed on the basis of a single set of demodulated values received in a given bit period. Instead, the process first determines state probabilities on the basis of a number of sets of demodulated values, e.g. 48 sets, and then decodes a single set for a bit period which lies back e.g. 48 steps (48 bit periods). In other words, the known Viterbi decoder uses a predetermined window size (a predetermined number of decoding steps) which is sequentially shifted while the sets of demodulated values are successively decoded. This decoding of one set lying back e.g. 48 steps is continued until the decoder arrives at those sets of demodulated values which are a result of the encoding of tail bits contained in the sequence of data bits.
As stated above, tail bits are appended to the user bits so as to bring the encoder into a predetermined state such as its original or a reset state. The final set(s) of demodulated values will therefore correspond to the set(s) of coded tail bits, and the conventional decoder is brought into a predetermined state by decoding these final set(s) of demodulated values. Once the decoder is in the predetermined state, the next data bit sequence can be decoded. In mobile radio communication systems such as UMTS systems, a great plurality of user channels may have different bit rates and thus the number of bits in a sequence of data bits may not be the same for different user channels. However, in each data bit sequence, the last bits correspond to tail bits. Thus, in such a communication system, a burst mode transmission is used and a decoder performs a "window-wise" decoding of demodulated values resulting from a data bit sequence until it arrives at those demodulated values corresponding to coded tail bits.
However, the above-described conventional Viterbi decoding is not adapted specifically to the burst mode transmission of data bits in a great plurality of user channels (which may even have different bit rates) . That is, going from data bit sequence to data bit sequence, it may be necessary (due to the different lengths of the sequences) to change the window sizes to perform an efficient decoding. Furthermore, it is clear that in burst mode transmission using different bit rates, the decoder arrives at the tail bits after different numbers of steps due to the different lengths of the data bit sequences. Therefore, the conventional Viterbi decoding is not specifically adapted to the burst mode operation due to the window-wise decoding technique used therein.
Furthermore, when receiving a set of demodulated values, the conventional Viterbi decoder calculates the probabilities for each state on the basis of a squared error. Beforehand, for each set of received values, a soft decision means in the demodulator determines a set of soft values by quantizing the received values with, e.g., 4 bits. In the decoder, respective squared errors are then determined with respect to both possible bit values ("0" and "1") that could have occurred according to the state transition diagram or the encoder polynomial. A calculation of this squared error requires a great number of multiplications.
Summary of the invention
As explained above, the known Viterbi decoder is not specifically adapted to a burst mode transmission due to the window-wise operation. Furthermore, the known Viterbi decoder uses a demanding square-error calculation for the determination of the state probabilities. This reduces its decoding speed and/or increases its implementational complexity.
Therefore, the object of the present invention is to provide a decoder and a decoding method as well as a receiver including such a decoder, which can perform an efficient convolutional decoding for a burst mode transmission. A further object of the invention is to reduce the processing time and/or the implementational complexity in a Viterbi decoder needed for calculating the state probabilities.
These objects are solved by a decoder according to claim 1. Furthermore, these objects are solved by a method according to claim 9. These objects are also solved by a receiver of claim 8.
According to one aspect of the invention, the decoder does not use the window-wise decoding of one single bit which lies back a predetermined number of decoding steps (bit periods), but it determines (a) state metrics indicating the probabilities of each state (or at least each possible state according to the trellis diagram) in each step (i.e. for each set of demodulated values [hard or soft decision values] or, e.g., in each bit period) and (b) state decision information indicating the final transitions of the most likely paths leading to each (or at least each possible) state in (essentially) each step (i.e. for each or almost each set of demodulated values or, e.g., in each or almost each bit period), including a final state decision information indicating the final transition of the most likely path leading to the final (predefined) state as determined by the encoding of the tail bits. This state decision information is recorded (stored) in a trellis memory. Having stored said final state decision information in said trellis memory (i.e. after the final bit period relating to the last set of demodulated values), a control means in the decoder triggers a trace back means to run back through the trellis memory from the final to the initial state and to select, on the basis of the stored state decision information, the transitions which are part of the most likely path leading to the final state and - while so doing - to output a decoded bit in accordance with each selected transition.
That is, the state decision information in the trellis memory will indicate, from step to step and starting with the final state, the most likely path through the trellis, and thus the decoding is performed step-by-step in the reverse direction. Thus, according to this aspect of the invention, the decoder does not decode only one bit which lies back a predetermined number of steps, but it decodes all bits successively during the trace back, wherein said trace back is started only after the final state decision information associated with the final transition of the most likely path leading to the final state has been recorded in the trellis memory. According to a second aspect of the invention, two trellis memories are employed and the decoder writes state decision information into a first trellis memory for a first data bit sequence (burst), whilst at the same time the trace back for a second data bit sequence (burst) is performed in the second trellis memory, in which the state decision information has been stored previously. After each burst period, the read/write operation of the first and second trellis memories is swapped. Thus, a first data bit sequence is decoded by the trace back in one trellis memory, whilst the calculations for the state metrics and the state decision information are carried out in a forward manner for a second (subsequent, e.g.) data bit sequence. Thus, the subsequent (second) burst can already be processed whilst data of the current (first) burst is decoded. Hence, the decoder according to the invention is optimally adapted to a burst mode transmission; the ping-pong use of the two trellis memories is sketched below.
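The following Python sketch illustrates, under stated assumptions, how the read/write roles of the two trellis memories can be swapped per burst. The names forward_pass and trace_back stand in for the forward recursion and the trace back described in this text, and the sequential loop merely emulates what would run concurrently in hardware.

```python
# Illustrative ping-pong operation of two trellis memories: the forward pass
# for the next burst writes into one memory while the previous burst is
# traced back from the other. forward_pass and trace_back are placeholders.

def decode_burst_stream(bursts, forward_pass, trace_back):
    mem_a, mem_b = [], []                  # the two trellis memories (8 and 9)
    write_mem, read_mem = mem_a, mem_b
    have_pending = False                   # True once read_mem holds a finished burst
    decoded_bursts = []
    for burst in bursts:
        write_mem.clear()
        forward_pass(burst, write_mem)     # store state decision information
        if have_pending:
            decoded_bursts.append(trace_back(read_mem))
        have_pending = True
        write_mem, read_mem = read_mem, write_mem   # swap read/write roles per burst
    if have_pending:
        decoded_bursts.append(trace_back(read_mem)) # trace back the last burst
    return decoded_bursts
```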
According to a third aspect of the invention, a non-squared metric is calculated for the determination of the transition metrics required for the determination of the state metrics and the state decision information. The demodulated values input into the decoder are compared with the expected values for a most confident "1" and/or a most confident "0", and merely the bit differences between the demodulated values and the respective expected values are calculated and then added. Thus, the computation time and the implementational complexity are reduced significantly.
According to a fourth aspect of the invention, for each possible transition to each particular state, the transition metric of the transition from a respective previous state to said particular state is added to the state metric of said previous state. For each particular state, the results obtained in this way are then compared, thereby generating a state decision information for said particular state. This state decision information is a one bit indication indicating the final transition of the most likely path leading to said particular state. Using the same hardware components for the determination of both state metrics and state decision information, implementational complexity is further reduced in this way.
According to a fifth aspect of the invention, the final state metric, i.e. the state metric associated with the final state and thus corresponding to the sum of the transition metrics of all transitions part of the most likely path leading to the final state, serves as an error measure indicating the decoding quality. Thus, the final state metric, which is just the accumulated soft error of the most likely path, can be output as an error measure such as a bit error rate flag.
Further advantageous embodiments and improvements of the invention can be taken from the dependent claims . Hereinafter, embodiments of the invention will be described with reference to the attached drawings .
Brief description of the drawings
In the drawings
Fig. 1 shows a typical convolutional encoder having a coding rate r=l/2 and a constraint length K=3 ;
Fig. 2 shows the state transition diagram for the encoder of Fig. 1;
Fig. 3 shows the trellis diagram for the encoder of Fig. 1;
Fig. 4 shows parts of a base station of a UMTS system to which the convolutional decoding according to the invention can be applied;
Fig. 5 shows a block diagram of a decoder unit DEC in Fig. 4 including the convolutional decoder ©;
Fig. 6 shows a block diagram of the decoder according to the invention; and
Fig. 7 shows the calculation of the non-squared metric for determining the transition metrics in the decoder according to the invention.
In the drawings, the same or similar reference numerals denote the same or similar parts or steps .
Before coming to a detailed discussion of the decoder of the present invention, first a base station of a UMTS system will be described to which the decoder of the invention can be applied. However, it should be noted that the invention is not restricted to UMTS or CDMA systems, since the decoder may be employed in other communication systems using other types of multiple access and/or modulation/demodulation techniques . The decoder and the decoding method of the invention can be applied to any convolutional channel decoding in any digital communication system.
UMTS base station of the invention
Briefly summarized, the block diagram in Fig. 4 shows parts of a UMTS base station (also called "node B") comprising a baseband transmitter Tx, a baseband receiver Rx, a high frequency section HF, and an ATM switch.
In the baseband transmitter Tx, user/payload data are input, for example in the form of ATM packets, into a channel encoder unit ENC via a corresponding interface ATM-IFX/IFC, wherein the channel encoder unit ENC encodes data bit sequences (comprising user/payload bits as well as tail bits) as described above with respect to Figures 1 to 3. The coded (and also interleaved) bit sequences are then modulated and spread by a baseband transmitter unit BBTX. In the transmitting part of the HF section, the modulated sequences are then filtered and converted to an analog signal in the unit TRX-DIG, upconverted to the desired carrier frequency in the unit TRX-RF, amplified by a power amplifier MCPA and finally transmitted to an antenna ANT via a duplex filter. In the receiving part of the HF section, two antennas (diversity reception) are commonly used in each sector to receive the signal which is then amplified in the low noise amplifier LNA, downconverted in the unit TRX-RF, and further A/D converted and filtered in the unit TRX-DIG. In the baseband receiver Rx, the user channels are then demodulated by a RAKE receiver/despreader in the unit BBRX while random access channels (branched off by an intermediate filter unit BBIF) are demodulated and detected in the unit BBRA. In the decoder unit DEC, the sequences of demodulated values are then decoded into sequences of decoded bits which are then forwarded to the ATM switch via an ATM interface ATM-IFX/IFC.
In the base station of Fig. 4, channel encoding and decoding is thus performed in the encoder unit ENC of the baseband transmitter Tx and in the decoder unit DEC of the baseband receiver Rx, respectively.
The decoder unit DEC in Fig. 4 includes the convolutional decoder according to the invention which performs Viterbi decoding. In principle, the decoder unit DEC performs a block deinterleaving, a soft decision Viterbi decoding including a tail bit removal and a demultiplexing of the user/payload bits as well as a CRC evaluation, as will be explained below in more detail with respect to Fig. 5. A typical Viterbi decoding, and thus the corresponding convolutional encoding too, is performed at a rate of r=1/3 or 1/2 and a constraint length of K=7 (or 9), which means that 6 (or 8, respectively) register levels are present in the encoder shift register.
In Fig. 5, which shows a more detailed overview of the internal structure of the decoder unit DEC, demodulated values (4-bit soft values) originating from the BBRX unit are input to a slot demultiplexer ©, whereafter a time slot desegmentation and a deinterleaving are performed. The deinterleaved sequences of 4-bit soft values are then input into the convolutional decoder © according to the invention. Although not of relevance for the functioning of the convolutional decoder of the invention, it should be noted that two parallel branches are provided, each with a convolutional decoder ©, for processing the two different channels DTCH (dedicated traffic channel) and ACCH (associated control channel), which are transmitted over the air interface in a time-multiplexed manner. In the branch for the DTCH channel, Fig. 5 mentions as an example a coding/decoding rate of r=1/3 and a constraint length of K=9. The sequences of decoded bits output by the Viterbi decoders Φ then undergo a BER measurement in the BER measurement units © and cyclic redundancy checks (CRC) are performed in the CRC units ©. Finally, upon multiplexing in the multiplexers MUX, the output is forwarded to the ATM interface ATM-IFX/IFC of the baseband receiver Rx shown in Fig. 4.
First embodiment (channel decoding)
In the baseband receiver Rx of Fig. 4, a decision is made in the decoder unit DEC as to whether a zero (0) or a one (1) data bit D was encoded in the encoder ENC of the baseband transmitter Tx. The demodulator (BBRX in Fig. 4) does however not perform "hard" decisions as to whether the coded bits were zeros or ones, but soft decisions. In other words, a soft decision means (which is part of the demodulator) outputs a set of N soft values, also referred to as soft decision symbols, in each bit period tD (also see Fig. 3), i.e. values consisting of a predetermined number of bits (4, e.g.) indicating the reliability of a demodulated value.
The necessity to determine probabilities as well as the trace back performed according to the invention (as will be explained below) is strongly linked to the fact that no "hard" decisions are taken by the demodulator. This is illustrated below by way of example.
Based on the state transition and trellis diagrams shown in Figures 2 and 3, the beginning as well as the end of a convolutional decoding process for a data bit sequence comprising n data bits (including tail bits) will now be described.
Assume that the first set of N=2 soft values is received at time t0. On the assumption that, initially, the encoder was in the reset state "00" (i.e. r0 = r1 = 0), it is known in the decoder (cf. Fig. 3) that only the states "00" or "10" can be assumed one bit period later (t0+tD) as a consequence of the encoding of a data bit D=0 or D=1, respectively. If the decoder could rely on "hard" decisions of the demodulator, then the only possible sets of coded bits (notation: C0,C1) which could have been transmitted are "00" and "11" (cf. Fig. 3). That is, if the demodulator made a hard decision as to "00", then it would be clear in the decoder that a zero data bit (D=0) was encoded, and if a hard decision as to "11" was made, then only a one data bit (D=1) could have been encoded, as indicated in Fig. 3. In this case, there would be no necessity to determine probabilities, since the demodulator would just decide on "00" or "11".
However, if erroneously, e.g. due to noise, a hard decision was made that "01" or "10" was received, then the decoder would not know which one of the two possible next states "00" or "10" should be assumed. Therefore, the decoder would have to arbitrate as to whether "00" or "11" was transmitted.
Therefore, when receiving a set of N=2 soft values C0', C1' at time t0, according to the trellis diagram, there can only be two possible states which can be assumed at time t0+tD as a consequence of the encoding of the associated data bit D. A zero data bit (D=0) would have caused a transition to state "00" (r0,r1) in the encoder with a transmission of the set "00" of coded bits (C0,C1), whereas a one data bit (D=1) would have led to a transition to state "10" with a transmission of "11". In order to determine the most likely path through the trellis, the decoder, during the decoding process, must establish the path with the minimum "cost" to evaluate which of the sets of coded bits have been transmitted at any point in time. For this, certain "costs" or "merits" must be assigned to each individual transition such that it can be determined at each point in time which of the coded bits have been transmitted with a certain probability.
As described above with respect to Fig. 3, the usage of tail bits on the transmitting side ensures that, after encoding of a complete data bit sequence comprising n data bits (including tail bits), a predetermined state is assumed in the encoder (in Fig. 3, the state "00"). Therefore, during decoding, the decoder knows that after n bit periods, the predetermined state "00" must be assumed, and then traces back the path to decode the individual sets of soft values for each individual timing t0, t0+tD, ..., t0+n*tD.

As is shown in Fig. 3, although it is known that the encoding process starts and ends with "00" states, the question remains what criteria can be used in order to determine after each step the probability whether the uncoded data bit was a one or a zero. Furthermore, implementing the hardware structure that would completely take care of all possible paths through the trellis as shown in Fig. 3 would mean an extremely high effort in terms of hardware.

Fig. 6 shows a block diagram of an exemplary decoder according to the invention, wherein a corresponding encoder according to Figures 1 to 3 is assumed. For this reason, a set of N=2 (4-bit, e.g.) soft values C0', C1' is input into the decoder in each bit period. Also, for the encoder considered here, only two paths can lead to each state, as can be seen in Fig. 3 by way of example from the transitions a0 and a1 leading to state A.
In each bit period, i.e. for each set of soft values, the following operations are to be performed for all target states, i.e. for all or at least for all possible states. For ease of description, however, consider state A only. First, an indication of (i.e. a measure for) probability is calculated for each transition leading to state A. Such a measure for a transition probability is referred to as a transition metric in the following. That is, two transition metrics p0, p1 for the transitions a0 and a1, respectively, are calculated for state A in the metric calculator (DIST) 1 of Fig. 6. More precisely, the metric calculator 1 calculates a measure p0 for the probability that the soft values C0', C1' were indeed ones (transmission of the set "11" of coded bits due to the transition a0) as well as a measure p1 for the probability that they were indeed zeros (transmission of "00" due to the transition a1).
Secondly, a so-called add-compare-select (ACS) circuit 6 calculates a measure for the probability of state A, i.e. a measure for the probability that the state "10" is reached at time t0+3*tD. Such a measure for a state probability is also referred to as a state metric (SM). A state metric thus indicates how likely it is that during the successive stepping the respective state is reached. For the determination of the state metric for state A, the metrics for the respective previous states (the "old" state metrics) are added to the transition metrics by the adders 2, 3 shown in Fig. 6, wherein it is assumed that the "old" state metrics are stored in a state metric buffer (SM-BUF) 7. In particular, the adder 2 adds the "old" state metric SM(00), i.e. a measure for the probability that the "00" state was assumed at time t0+2*tD, to the transition metric p0 associated with transition a0. Likewise, adder 3 adds the "old" state metric SM(01), i.e. a measure for the probability that the "01" state was assumed at time t0+2*tD, to the transition metric p1 associated with transition a1. From the resulting two sums SM(00)+p0 and SM(01)+p1 output by the adders 2 and 3, respectively, the selector 5 of Fig. 6 selects, under control of the comparator 4, the smaller value as the "new" state metric SM'(10), i.e. as a measure for the probability that the "10" state is reached at time t0+3*tD.
In other words, the path with fewer bit errors leading to the target state (state A) is chosen as being the more probable one. This processing of excluding the path with more bit errors is also known as "pragmatic trellis decoding", since only the better path is kept for further processing. The "new" state metric SM'(10) is then written to the state metric buffer (SM-BUF) 7, as indicated in Fig. 6.
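A minimal sketch of a single ACS step for the target state "10" (state A), assuming the convention used above that a smaller metric denotes a more probable path, may help to illustrate the cooperation of the adders 2, 3, the comparator 4 and the selector 5:

```python
# One add-compare-select (ACS) step for target state "10" (state A in Fig. 3).
# sm_old maps state strings to their "old" state metrics; p0 and p1 are the
# transition metrics for the transitions a0 (from "00") and a1 (from "01").

def acs_step_state_10(sm_old, p0, p1):
    cand0 = sm_old["00"] + p0                   # adder 2: path arriving via a0
    cand1 = sm_old["01"] + p1                   # adder 3: path arriving via a1
    decision = 0 if cand0 <= cand1 else 1       # comparator 4: bit that "fell off"
    new_sm = cand0 if decision == 0 else cand1  # selector 5: keep the better path
    return new_sm, decision                     # SM'(10) and the trellis memory entry
```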
At the initial and final K-1 time instants, not all states can be assumed in accordance with the trellis diagram, as can be seen in Fig. 3 from the times t0, t0+tD, t0+(n-1)*tD, and t0+n*tD. As stated above, in each bit period, i.e. for each set of soft values, these operations are to be performed for all target states, i.e. for all or at least all possible states. Considering first "normal" bit periods, i.e. bit periods where all states can be assumed in the beginning and at the end, it can be stated that in the above example, in each "normal" bit period between time t0+2*tD and t0+(n-2)*tD, four new state metrics are calculated (and stored in the buffer 7) on the basis of the four old state metrics, which have previously been stored in the buffer 7, and the four different transition metrics of all (8) transitions between the old and new states. Thus, in each "normal" bit period, the state metric buffer (SM-BUF) 7 stores measures for the probabilities that the convolutional encoder had assumed the respective states at the corresponding time (respective bit period) during the encoding process.
As far as bit periods are concerned where not all states can be assumed in the beginning and/or at the end, fewer (than 4) old state metrics and/or fewer (than 4) transition metrics can be taken into account and/or fewer (than 4) new state metrics need to be determined. From an implementational point of view, it may however be advantageous to determine state metrics for all states (as described above with respect to "normal" bit periods) no matter whether or not they can be assumed at a given point in time. This can easily be achieved by assigning values corresponding to the lowest possible probability to states and/or transitions that can not be assumed. Thereby, impossible paths can be prevented from being retained in the subsequent path selecting operations (4, 5) and states can be marked as impossible. In the example of Fig. 3, the state metrics for the states "01", "10", and "11" can be initialized (time t0) with the value corresponding to the lowest possible probability, while the state metric for the state "00" should be initialized with the value for the greatest possible probability. This will ensure that no path originating from one of the states "01", "10", and "11" will be retained at time t0+2*tD at the latest. Similarly, after time t0+(n-2)*tD, states can easily be marked as impossible by assigning values corresponding to the lowest possible probability to all transitions which cannot occur, i.e. to all transitions due to an encoding of a D=1 data bit.
As stated above, a circuit for performing the functions of the blocks 2-5 shown in Fig. 6 for a single target state (state A, e.g.) is commonly referred to as an add-compare-select (ACS) circuit 6. The skilled person will readily appreciate that for the determination of the state metrics for all (or all possible) states, either a single (fast) ACS circuit 6 or several parallel (slower) ACS circuits 6 can be applied. In the former case, the ACS circuit will have to serially calculate the state metrics for all (possible) states, while in the latter case, a separate ACS circuit is dedicated to each state, thus allowing for a simultaneous calculation thereof.
Principally, the same applies to the metric calculator 1. It must however be stated that, in the above example, a total of only four transition metrics needs to be determined in each "normal" bit period, because there are only four different metrics for the eight transitions from the beginning to the end of a "normal" bit period in Fig. 3. For this reason, the skilled person will readily appreciate that it is sufficient to determine said four transition metrics and then to distribute them appropriately to the eight inputs of, e.g., the four parallel ACS circuits.
In summary, all metric calculators and ACS circuits needed for the determination of state metrics can be viewed as a determining means for determining, for each set of soft values, a set of new state metrics to be stored in the state metric buffer (SM-BUF) 7.
Advantageously, the size of the state metric buffer (SM-BUF) 7 is chosen so that it can hold a single state metric value for each state, i.e. a total of four state metric values in the above example. In this case, the respective "old" state metric (SM(10), e.g.) is overwritten when storing the "new" state metric (SM'(10), e.g.). Care must however be taken in order to prevent old state metrics from being overwritten before they are needed for the calculation of another new state metric. In particular, this applies to the case where a single ACS circuit serially calculates the state metrics.
Whilst in the prior art only a predetermined number of steps is calculated due to the window size of the conventional Viterbi decoder (e.g. 48 steps), according to the invention the state metric determining procedure is carried out up to the set of soft values corresponding to the last set of coded (tail) bits. That is, this procedure runs through all sets of soft values received for a data bit sequence until the final (predefined) state (the reset state "00", e.g.) is reached at time t0+n*tD, wherein n indicates the number of bits (including tail bits) in the sequence of (uncoded) data bits D.
Furthermore, during the stepping through of the sets of soft values for one complete data bit sequence, so-called state decision information is written into a trellis memory 8 shown in Fig. 6. More precisely, in each (or almost each) bit period, i.e. for (essentially) each set of soft values, the state decision information includes a one bit indication for each (or at least each possible) state indicating the final transition in the most likely path leading to said state. Preferably, the value of this indication is identical to the (uncoded) data bit D which "fell off" the encoder shift register (cf. Fig. 1) as a consequence of said final transition in the most likely path leading to said state.
As an example, consider again state A shown in Fig. 3 and assume that the most likely path leading to state A includes the transition a0 rather than a1. In this case, the output of adder 2 in Fig. 6 will be smaller than that of adder 3. As a consequence, the comparator 4 will output a value of zero, thereby causing the selector 5 to indeed select the output of adder 2 as the new state metric SM'(10). In contrast, if the transition a1 was part of the most likely path leading to state A, the comparator 4 would have output a value of one so as to select the output of adder 3.
It is important to note that, by a simple arrangement, it can be ensured that the value output by the comparator 4 always corresponds (is identical) to the value of the data bit D which "fell off" the encoder shift register as a consequence of the final transition in the most likely path leading to the considered state. Considering again the above example, it can be seen from Fig. 3 that the transition a0 originates from state "00". Thus, a value of zero (the final value in "00") would disappear from the encoder shift register as a consequence of this transition. In contrast, the transition a1 originates from state "01" so that a value of one would be forced off the shift register due to this transition. Given the fact that the comparator outputs a value of zero [one] if a0 [a1] is to be selected as a part of the most likely path leading to state A, it is easy to confirm that the comparator output thus is identical to the value of the data bit "falling off" the shift register as a consequence of the respective transition. This data bit falling off the shift register is of course identical to the data bit D which was shifted into the shift register K-1 bit periods earlier. For this reason, the output of the comparator 4 in Fig. 6 can be directly stored in the trellis memory 8 for the state "10" and the time t0+3*tD (state A) as a state decision information indicating the value of the data bit D shifted into the encoder K-1=2 bit periods earlier. In order for the output of the comparator 4 to be identical to the data bit D shifted into the encoder K-1 bit periods earlier, the following simple arrangement must be made. The inputs of adder 2 must be assigned to the transition driving out a value of zero from the encoder shift register (i.e. to the transition a0 in the above example), whereas the inputs of adder 3 must be assigned to the transition forcing off a value of one from the encoder shift register (i.e. to a1 in the example considered). As the skilled person will readily appreciate, an inverter or some mapping unit would be necessary at the output of the comparator 4 if an arrangement other than the simple arrangement described above was made (such as assigning adder 2 to transition a1 and adder 3 to a0).
In summary, the trellis memory 8 stores, for (almost) each set of soft values, a state decision information for each (or at least each possible) state indicating the value of the data bit output from (falling off) the shift register as a consequence of the final transition in the most likely path leading to said state. Therefore, in the above example, a value of zero will be stored for state A (i.e. the state "10" at time t0+3*tD) in order to indicate that the transition a0 is part of the most likely path leading to state A, while a value of one will be stored to indicate that a1 is part of the most likely path leading to state A. With the help of such state decision information, a decoded bit as well as the previous state can be determined for (almost) each set of soft values during a trace back process as described below.
As the skilled person will readily appreciate, state decision information needs to be determined and stored only for those states to which more than one path can lead. In other words, state decision information need not be determined for states to which no path or a single path can lead. In the example of Fig. 3, this applies to all states up to (and including) time t0+2*tD, because it is clear that during this period, the initial (reset, e.g.) state of the shift register comprising K-1=2 bits is output (as opposed to data bits). This also applies to the impossible states from time t0+(n-1)*tD onwards, although state decision information has to be stored in this period for the possible states, because the final K-1 user/payload bits will fall off the shift register as a consequence of the shifting in of tail bits D=0. From an implementational point of view, as described above with respect to storing state metrics, it may however be advantageous to determine and store state decision information for all states (as opposed to all possible states) and/or for all n sets of soft values (as opposed to almost all sets). At the limit, the trellis memory will need to be capable of storing a number of bits equal to the number of states multiplied by n, i.e. a total of 4*n bits in the above example.
As stated above, for the determination of the state metrics, either a single (fast) ACS circuit or several parallel (slower) ACS circuits can be applied. Of course, this will also apply to the determination of the state decision information. In the former case, the ACS circuit will have to serially calculate the state metrics and the state decision information for all (possible) states, while in the latter case, a separate ACS circuit is dedicated to each state thus allowing for a simultaneous calculation thereof.
In summary, all metric calculators, adders, comparators and selectors needed for the determination of state metrics and/or state decision information can be viewed as a determining means for determining, for (essentially) each set of soft values, state metrics to be stored in the state metric buffer 7 as well as state decision information to be stored in the trellis memory 8.
Once the complete state decision information has been stored in the trellis memory 8, the trace back means 11 traces back the entries in the trellis memory by following, in the reverse direction (and beginning with the final state), the most likely path through the trellis as indicated by the stored state decision information and, while so doing, outputs the respective stored values (i.e. the state decision information stored along the most likely path) as decoded bits. The trace back means 11 is triggered by a control means 12 after the final (predefined) state ("00", e.g.) has been reached, i.e. after n bit periods. As opposed to the prior art, the decoder according to the invention first steps through all n sets of soft values and thereby determines and stores the state metrics as well as the state decision information. Only after the final (predefined) state has been reached, a trace back is carried out in the reverse direction by tracing back the trellis memory 8 such that all decoded bits (i.e. the state decision information stored along the most likely path) are output during the trace back. Note that said tracing back can be performed at high speeds, since no demanding calculations are necessary during the trace back. Thus, the present invention does not use the window-wise decoding as described above, but performs the stepping forward for the complete data bit sequence (burst) until the final state is reached, and then performs the stepping back, again for the complete data bit sequence.
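The following sketch illustrates such a trace back for the K=3 example; the indexing of the trellis memory by bit period and state string is an illustrative data layout, not the memory organization of the patent.

```python
# Trace back sketch for the K=3 example. trellis[i][state] is assumed to hold
# the one-bit decision stored for `state` in bit period i, i.e. the data bit
# that "fell off" the encoder register on the winning transition into `state`.

def trace_back(trellis, n, k=3, final_state="00"):
    state = final_state                    # tail bits guarantee this final state
    decoded = []
    # The first K-1 steps only push out the initial register contents, so the
    # decoded data bits stem from steps K .. n (here: 3 .. n).
    for i in range(n, k - 1, -1):
        bit = trellis[i][state]            # decision info along the best path
        decoded.append(bit)                # equals data bit d_(i-(K-1))
        state = state[1] + str(bit)        # previous state = (current r1, fallen bit)
    decoded.reverse()                      # bits were collected last-to-first
    return decoded                         # user bits d_1 .. d_(n-(K-1))
```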
Instead of using one trellis memory only, which is first written to and then read out during the trace back, a preferred embodiment of the decoder comprises two trellis memories 8, 9, as indicated in Fig. 6. In this case, the control means 12 controls the trace back means 11 and a cross-connect means 10 such that said tracing back of one trellis memory, e.g. memory 8, for a first (current, e.g.) data bit sequence (burst) is carried out simultaneously with the writing of state decision information for a second (subsequent, e.g.) data bit sequence (burst) into the other trellis memory, e.g. memory 9. In this case, the cross-connect means 10 is used to let the trace back means 11 respectively read one of the two trellis memories 8, 9, whilst the other one is written to by the determining means (1-6) as described above with respect to Fig. 6.

It is to be noted that the decoder according to the invention (Fig. 6, e.g.) can also be applied in cases where the demodulator takes hard decisions as opposed to soft decisions. The only difference in this case is that sets of demodulated values each comprising a single bit would be input into the metric calculator 1 instead of sets of demodulated values (soft values) each comprising several bits.
Second embodiment (metric calculation)
As explained above, the decoder according to the invention provides a convolutional decoding for a communication system (e.g. a UMTS system) where data bit sequences of variable lengths must be decoded. Each sequence is followed to the end (tail bits) before carrying out a trace back.
For this purpose, the metric calculator 1 calculates measures for the probabilities that the received soft values were indeed a 0 or a 1 when transmitted in the form of coded bits . Conventionally, as explained above, a square error is calculated to evaluate the "cost" of each transition. Requiring multiplications, these square-error calculations, however, are expensive both in terms of the necessary hardware effort (and thus complexity) and the calculation time.
Fig. 7 shows the principle for the metric calculations performed in the metric calculator 1 according to the invention. On the vertical axis, the possible 4-bit soft values are shown in sign-magnitude notation. The metric calculator 1 calculates the distances between the actual 4-bit soft values C0', C1' on the one hand and the respective expected values on the other hand. Herein, the expected values are values for a most confident "0" (such as "0111") and a most confident "1" (such as "1111"), wherein it is determined from the trellis diagram which one is relevant for a particular transition under consideration.

Fig. 7 shows an example for an actual soft value C0'="0001". The distance between this soft value and the value for the most confident zero (i.e. "0111"), i.e. the soft error for the "0" decision on C0', would thus be "0110" in binary notation ("0110bin") or "6" in decimal notation ("6dec"), while the distance between the above soft value and the value "1111" standing for the most confident one, i.e. the soft error for the "1" decision on C0', would amount to 1001bin = 9dec.
For example, assuming that the calculation is done for state A in Fig. 3, the metric calculator 1 knows that only the distances between the 4-bit soft values C0', C1' on the one hand (C0'="0001", C1'="0010", e.g., cf. Fig. 6) and the expected values "11" and "00" on the other hand need to be calculated, since only the sets "11" and "00" of coded bits can result from the transitions a0 and a1, respectively, in accordance with the trellis diagram. The metric calculator 1 then adds up the respective binary distances (relating to the same transition) for the two soft values and outputs the results as the transition metrics p0 and p1 for the transitions a0 and a1, respectively. The metric p0 for the probability that the set of actual soft values corresponds to a "11" transmission is output to the adder 2, and the metric p1 for the probability that the set of actual soft values corresponds to a "00" transmission is output to the adder 3. The adders 2, 3 then add the transition metrics p0, p1 to the respective "old" state metrics, as explained above.
Arranging the possible levels for the soft values in sign-magnitude notation as shown in Fig. 7, the metric is calculated as a soft error or as a bit distance, and thus this calculation of a non-squared soft error as metric can save complexity and computation time in comparison with the square-error calculation according to the prior art.
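The following sketch reproduces this bit-distance calculation. The mapping of the 4-bit sign-magnitude codes onto a linear scale is an assumption chosen so that the distances 6 and 9 quoted above are obtained; it is not asserted to be the exact quantizer layout of Fig. 7.

```python
# Non-squared "soft error" (bit distance) for 4-bit sign-magnitude soft values:
# the most confident "0" is 0111b and the most confident "1" is 1111b.

MOST_CONFIDENT = {0: 0b0111, 1: 0b1111}    # expected value per coded bit

def level(soft):
    """Map a 4-bit sign-magnitude code to a position on the soft-value axis."""
    sign, mag = soft >> 3, soft & 0b0111
    return mag if sign == 0 else -(mag + 1)

def soft_error(soft, expected_bit):
    """Bit distance between a soft value and the most confident 0 or 1."""
    return abs(level(soft) - level(MOST_CONFIDENT[expected_bit]))

def transition_metric(soft_values, expected_bits):
    """Sum of soft errors over one set of N demodulated values (no squaring)."""
    return sum(soft_error(s, b) for s, b in zip(soft_values, expected_bits))

# Worked example from the text: C0' = 0001b
print(soft_error(0b0001, 0))   # distance to 0111b -> 6
print(soft_error(0b0001, 1))   # distance to 1111b -> 9
# p0 = metric for a "11" transmission, p1 = metric for a "00" transmission
p0 = transition_metric([0b0001, 0b0010], [1, 1])
p1 = transition_metric([0b0001, 0b0010], [0, 0])
```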
Third embodiment (quality indicator/error measure)
Due to the encoding of a sufficient number of tail bits, the trellis will converge in a final (predefined) state at time t0+n*tD (the state "00" in Fig. 3). The state metric of this final state can be output as a quality indicator or as an error measure by the convolutional decoder. For example, the BER flag shown in Fig. 5 at © represents such an error measure. Provided that the transition metrics are determined according to the above description with respect to Figure 7, it is to be noted that the error measure is composed of accumulated non-squared soft errors. That is, for each possible transition, the metric calculator 1 calculates the non-squared bit distance, and the bit distance associated with the more likely path is respectively added to the state metric of the previous state. Thus, the state metric of the last (known and predefined) state can invariably be used as an error indication indicating how likely it is that the trace back through the transitions as carried out by the trace back means indeed corresponds to the stepping of states as originally done in the transmitter during the encoding. Having defined the possible levels for the soft values in sign-magnitude notation in accordance with Fig. 7, this means that the larger the bit distance is, the smaller the probability will be that the respective bit has been transmitted. Thus, the final state metric can be directly used as an indication of decoding quality or as an error measure. If the accumulated bit distance is small compared to a threshold, this indicates a high decoding quality; if the accumulated bit distance is high, this indicates a low quality.
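As a simple illustration, the error measure could be turned into such a flag as follows; the threshold value is arbitrary and not taken from the patent.

```python
# The final state metric is the accumulated soft error of the most likely
# path, so a simple comparison against a threshold yields a quality flag.

def decoding_quality_ok(final_state_metric, threshold=32):
    return final_state_metric <= threshold   # small accumulated error = good quality
```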
Depending on the values ("1" or "0") of the tail bits used, different final states can be reached. Values other than zero (as assumed above) can be used as long as it is guaranteed that the encoder and the decoder know that the last predetermined number of bits are tail bits which bring the encoder and the decoder into one known final state (as determined by the state transition or trellis diagram).
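As a minimal sketch of this point (the encoder memory of two bits and the tail bit values are illustrative assumptions, not taken from the patent), the known final state is simply the content of the encoder shift register after the tail bits have been shifted in:

MEMORY = 2                                    # assumed encoder memory -> 2**MEMORY states

def final_state(tail_bits: list[int]) -> int:
    # shift the tail bits in; after MEMORY tail bits the state depends on them alone
    state = 0
    for bit in tail_bits[-MEMORY:]:
        state = ((state << 1) | bit) & ((1 << MEMORY) - 1)
    return state

print(format(final_state([0, 0]), "02b"))     # '00': all-zero tail bits give the state assumed above
print(format(final_state([1, 0]), "02b"))     # '10': other tail bit values give another known state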
Industrial applicability
As explained above, the decoder and the decoding method according to the invention are adapted to a burst mode transmission and first calculate state metrics as well as state decision information in each step for all (n) sets of soft values, including those associated with coded tail bits, until a final state known to the encoder and the decoder is reached. Once the final state is reached, a trace back is carried out with simultaneous reading out of decoded bits according to the state decision information stored in the trellis memory. Thus, this type of convolutional decoder can be used in any digital communication system using a burst mode transmission. Hence, the invention is not restricted to UMTS and/or CDMA systems, which have been used in the above description as a preferred example only. The application of the decoding principle is also independent of whether several user channels arrive at the decoder in parallel or serially. Several determining means as described above can be provided in parallel for parallel processing of a great plurality of user channels. However, individual data bit sequences of different user channels can also be processed serially by a single unit. That is, whilst one data bit sequence (of one user channel, e.g.) is processed in the determining means, another data bit sequence is traced back in the other trellis memory.
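The alternating use of the two trellis memories can be modelled, in a purely sequential and hypothetical software sketch (write_decisions and trace_back stand for the determining means and the trace back means; real hardware would operate on both memories concurrently), roughly as follows:

from collections.abc import Callable, Iterable

def process_bursts(bursts: Iterable[list],
                   write_decisions: Callable[[list], list],
                   trace_back: Callable[[list], list[int]]) -> list[list[int]]:
    memories = [None, None]                   # trellis memory 8 and further trellis memory 9
    pending = None                            # index of the memory still waiting for trace back
    decoded = []
    for k, burst in enumerate(bursts):
        current = k % 2                       # memory receiving the new decision information
        memories[current] = write_decisions(burst)
        if pending is not None:
            decoded.append(trace_back(memories[pending]))   # trace back the previous burst
        pending = current
    if pending is not None:                   # flush the last burst
        decoded.append(trace_back(memories[pending]))
    return decoded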
The decoding principle is also independent of the length n of the data bit sequence (or the number of sets of demodulated values), or equivalently, the bit rate. That is, the decoding uses the fact that the encoder/decoder must be in predetermined states at the beginning and the end of processing a data bit sequence (burst), and thus an arbitrary number of bits can be processed in each data bit sequence. It must only be known that the predetermined number of tail bits brings the encoder/decoder into a predefined state according to the values of the tail bits used. Thus, the invention is applicable to all digital communication systems using a burst mode transmission and convolutional encoding/decoding.
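Finally, and again only as a hypothetical sketch (the two-bit state layout and the decisions data structure are assumptions made for illustration), the trace back from the known final state reads out one decoded bit per selected transition, whatever the burst length n:

MEMORY = 2                                    # illustrative: 2**MEMORY = 4 states, "00".."11"

def predecessor(state: int, decision: int) -> int:
    # undo one transition: drop the newest bit (the MSB of the state) and let the
    # stored decision bit supply the oldest bit of the previous state
    return ((state << 1) & ((1 << MEMORY) - 1)) | decision

def trace_back(decisions: list[list[int]], final_state: int = 0) -> list[int]:
    # decisions[step][state] is the stored state decision information (0 or 1)
    # selecting the more likely of the two transitions entering `state` at `step`
    state = final_state
    bits: list[int] = []
    for step in reversed(range(len(decisions))):
        bits.append((state >> (MEMORY - 1)) & 1)   # decoded bit = newest bit of the state
        state = predecessor(state, decisions[step][state])
    bits.reverse()
    return bits                               # the tail bits appear at the end of the list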
Furthermore, it should be noted that the invention is not restricted to the specific embodiments described above, which have only been discussed here as the presently known best mode of the invention. Other modifications and variations can, however, be carried out according to the above teachings. In particular, the invention can comprise a combination of features which have only been described separately in the description or separately in the claims.

Claims
1. A decoder for decoding sets of demodulated values associated with a sequence of data bits including a predetermined number of tail bits, wherein said data bit sequence has been convolutionally encoded and transmitted in a burst mode operation, said decoder including:
a) determining means (1-6) for determining, for essentially each set of demodulated values, a state metric (SM'(10)) and a state decision information for at least each possible state, wherein said state metric and said state decision information for a particular state indicate the probability of said particular state and a final transition of a most likely path leading to said particular state, respectively, including determining a final state decision information for a final state indicating a final transition of a most likely path leading to the final state, wherein the final state is determined by said tail bits;
b) a trellis memory (8, 9) for storing said state decision information;
c) trace back means (11) for tracing back said trellis memory from the final state by selecting, based on the stored state decision information, the transitions of said most likely path leading to the final state, and by outputting a decoded bit in accordance with each transition selected during the trace back; and d) control means (12) for controlling said trace back means (11) such that said tracing back is started only after said final state decision information has been stored in said trellis memory.
2. A decoder according to claim 1, wherein
- a further trellis memory (9) is provided for storing state decision information associated with a complete data bit sequence,
- said control means (12) controls said trace back means (11) and said trellis memories (8,9) such that said tracing back of said trellis memory (8) for a first data bit sequence is carried out simultaneously with the writing of state decision information for a second data bit sequence in said further trellis memory (9).
3. A decoder according to claim 1 or 2, wherein said determining means includes a metric calculator (1) for
- receiving a set of demodulated values (C0', C1'), and
- calculating, for each possible transition to each particular state, a respective transition metric (p0, p1) by determining the bit distances between the demodulated values on the one hand and the respective expected bit on the other hand, and by adding said bit distances.
4. A decoder according to one of the claims 1-3, wherein said determining means includes:
- adding means (2,3) for adding, for each possible transition to each particular state, a state metric of a respective previous state (SM(00); SM(01)) to a transition metric of a transition from said respective previous state to said particular state (p0; p1),
- comparing means (4) for comparing, for each particular state, the results (SM(00)+p0, SM(01)+p1) obtained by said adding means, thereby generating a state decision information for said particular state.
5. A decoder according to claim 4, wherein said determining means includes:
- selecting means (5) for selecting, for each particular state, the smallest one of the results obtained by said adding means, thereby generating a state metric for said particular state (SM'(10)), wherein said selecting means is controlled by said state decision information for said particular state.
6. A decoder according to one of the preceding claims, adapted to output the state metric for the final state as an error measure indicating decoding quality.
7. A decoder according to one of the preceding claims, wherein each set of demodulated values includes a set of a predetermined number of soft values (C0', C1'), each comprising a predetermined number of bits and indicating the reliability of the associated demodulated value.
8. A receiver of a digital telecommunication system including a decoder according to one of the preceding claims.
9. A method for decoding sets of demodulated values associated with a sequence of data bits including a predetermined number of tail bits, wherein said data bit sequence has been convolutionally encoded and transmitted in a burst mode operation, including the steps of:
a) determining, for essentially each set of demodulated values, a state metric (SM'(10)) and a state decision information for at least each possible state, wherein said state metric and said state decision information for a particular state indicate the probability of said particular state and a final transition of a most likely path leading to said particular state, respectively, including determining a final state decision information for a final state indicating a final transition of a most likely path leading to the final state, wherein the final state is determined by said tail bits;
b) storing said state decision information in a trellis memory;
c) tracing back said trellis memory from the final state by selecting, based on the stored state decision information, the transitions of said most likely path leading to the final state, and by outputting a decoded bit in accordance with each selected transition; and
d) controlling said step of tracing back such that said tracing back is started only after said final state decision information has been stored in said trellis memory.
10. A method according to claim 9, wherein
- a further trellis memory is provided for storing state decision information associated with a complete data bit sequence,
- said controlling step controls said step of tracing back and said trellis memories such that said tracing back of said trellis memory for a first data bit sequence is carried out simultaneously with the writing of state decision information for a second data bit sequence in said further trellis memory.
11. A method according to claim 9 or 10, wherein said step of determining includes:
- receiving a set of demodulated values (C0', C1'), and
- calculating, for each possible transition to each particular state, a respective transition metric (p0, p1) by determining the bit distances between the demodulated values on the one hand and the respective expected bit on the other hand, and by adding said bit distances.
12. A method according to one of the claims 9-11, wherein said step of determining includes:
- adding, for each possible transition to each particular state, a state metric of a respective previous state (SM(00); SM(01)) to a transition metric of a transition from said respective previous state to said particular state (p0; p1),
- comparing, for each particular state, the results (SM(00)+p0; SM(01)+p1) obtained by said adding step, thereby generating a state decision information for said particular state.
13. A method according to claim 12, wherein said step of determining includes:
- selecting, for each particular state, the smallest one of the results obtained by said adding step, thereby generating a state metric for said particular state (SM'(10)), wherein said selecting step is controlled by said state decision information for said particular state.
14. A method according to one of the claims 9 to 13, wherein the state metric for the final state is output as an error measure indicating decoding quality.
15. A method according to one of the claims 9 to 14, wherein each set of demodulated values includes a set of a predetermined number of soft values (C0', C1'), each comprising a predetermined number of bits and indicating the reliability of the associated demodulated value.
PCT/EP2002/008854 2002-08-08 2002-08-08 Convolutional decoder and method for decoding demodulated values WO2004019498A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2002340809A AU2002340809A1 (en) 2002-08-08 2002-08-08 Convolutional decoder and method for decoding demodulated values
PCT/EP2002/008854 WO2004019498A1 (en) 2002-08-08 2002-08-08 Convolutional decoder and method for decoding demodulated values

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2002/008854 WO2004019498A1 (en) 2002-08-08 2002-08-08 Convolutional decoder and method for decoding demodulated values

Publications (1)

Publication Number Publication Date
WO2004019498A1 true WO2004019498A1 (en) 2004-03-04

Family

ID=31896797

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2002/008854 WO2004019498A1 (en) 2002-08-08 2002-08-08 Convolutional decoder and method for decoding demodulated values

Country Status (2)

Country Link
AU (1) AU2002340809A1 (en)
WO (1) WO2004019498A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995008888A1 (en) * 1993-09-24 1995-03-30 Qualcomm Incorporated Multirate serial viterbi decoder for code division multiple access system applications
EP0660534A2 (en) * 1993-12-22 1995-06-28 AT&T Corp. Error correction systems with modified viterbi decoding
US5907586A (en) * 1995-09-04 1999-05-25 Oki Electric Industry Co., Ltd. Method and device for signal decision, receiver and channel condition estimating method for a coding communication system
US5887007A (en) * 1996-02-23 1999-03-23 Oki Electric Industry Co., Ltd. Viterbi decoding method and viterbi decoding circuit
US6088405A (en) * 1999-01-15 2000-07-11 Lockheed Martin Corporation Optimal decoder for tall-biting convolutional codes

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
COX R V ET AL: "A CIRCULAR VITERBI ALGORITHM FOR DECODING TAILBITING CONVOLUTIONAL CODES", PERSONAL COMMUNICATION - FREEDOM THROUGH WIRELESS TECHNOLOGY. SECAUCUS, NJ., MAY 18 - 20, 1993, PROCEEDINGS OF THE VEHICULAR TECHNOLOGY CONFERENCE, NEW YORK, IEEE, US, vol. CONF. 43, 18 May 1993 (1993-05-18), pages 104 - 107, XP000393135, ISBN: 0-7803-1267-8 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG113465A1 (en) * 2003-05-30 2005-08-29 Oki Techno Ct Singapore Pte Method of estimating reliability of decoded message bits
US7203894B2 (en) 2003-05-30 2007-04-10 Oki Techno Centre (Singapore) Pte Ltd Method of estimating reliability of decoded message bits
JP2014501472A (en) * 2011-01-03 2014-01-20 セントル・ナショナル・デチュード・スパシアル Decoding method and decoder
JP2014501473A (en) * 2011-01-03 2014-01-20 セントル・ナショナル・デチュード・スパシアル Method for correcting a message containing stuffing bits

Also Published As

Publication number Publication date
AU2002340809A1 (en) 2004-03-11

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP