CN1144378C - Soft output decoder for convolution code and soft output decoding method - Google Patents

Soft output decoder for convolution code and soft output decoding method Download PDF

Info

Publication number
CN1144378C
CN1144378C CNB998008192A CN99800819A
Authority
CN
China
Prior art keywords
probability
soft output
calculation
truncation length
calculate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB998008192A
Other languages
Chinese (zh)
Other versions
CN1272253A (en)
Inventor
宫内俊之 (Toshiyuki Miyauchi)
服部雅之 (Masayuki Hattori)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of CN1272253A publication Critical patent/CN1272253A/en
Application granted granted Critical
Publication of CN1144378C publication Critical patent/CN1144378C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Images

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/23Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using convolutional codes, e.g. unit memory codes
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3972Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using sliding window techniques or parallel windows
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3905Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3905Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding
    • H03M13/3911Correction factor, e.g. approximations of the exp(1+x) function
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3905Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding
    • H03M13/3922Add-Compare-Select [ACS] operation in forward or backward recursions
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3905Maximum a posteriori probability [MAP] decoding or approximations thereof based on trellis or lattice decoding, e.g. forward-backward algorithm, log-MAP decoding, max-log-MAP decoding
    • H03M13/3927Log-Likelihood Ratio [LLR] computation by combination of forward and backward metrics into LLRs
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65Purpose and implementation aspects
    • H03M13/6566Implementations concerning memory access contentions

Abstract

After Iβ (βt to βt-D+1), i.e. (number of states) x (data within one truncation length) values, has been calculated, the soft outputs beyond the truncation length are calculated in order while Iβ (βt-D to βt-2D+1) for the data in the succeeding truncation length is calculated, and then Iβ for the data within the next truncation length is calculated in order. The decoder (4) thus calculates, in parallel, Iβ within the truncation length and Iβ for positions a truncation length or more back. Consequently the amount of Iβ calculation per clock is (number of states) x 2, which is fairly small, leading to high-speed decoding.

Description

Soft-output decoder and soft-output decoding method for convolutional codes
Technical field
The present invention relates to a soft-output decoder and soft-output decoding method for convolutional codes, suitable for, for example, satellite broadcast receivers. More specifically, it relates to a soft-output decoder capable of high-speed operation by storing more than a truncation length of probability information on a recording medium and performing, in parallel, the updating of probability information within the truncation length and the computation of soft outputs outside the truncation length.
Background art
As a decoding method that minimizes the symbol error rate when decoding a convolutional code, the BCJR algorithm is known (Bahl, Cocke, Jelinek and Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Transactions on Information Theory, vol. IT-20, pp. 284-287, March 1974). The BCJR algorithm outputs not each symbol itself but the likelihood of each symbol as the decoding result. Such an output is called a soft output.
In recent years, research has been devoted to iterative decoding methods that reduce the symbol error probability by making the output of the inner-code decoder of a concatenated code, and each output of an iterative decoding step, a soft output; the BCJR algorithm has attracted attention as a decoding algorithm suited to this.
The content of the BCJR algorithm is described in detail below.
The BCJR algorithm outputs not each symbol itself but the likelihood of each symbol as the decoding result; it is an algorithm for the situation in which digital information is convolutionally encoded by a convolutional encoder and the convolutionally encoded data sequence is observed through a noisy memoryless channel.
Here, the M states (transition states, i.e. shift-register contents) of the convolutional encoder are denoted by m (0, 1, ..., M-1), the state at time t by St, the input at time t by it, the output at time t by Xt, and the output sequence by Xt^t' = Xt, Xt+1, ..., Xt'. The transition probability pt(m|m') between states is given by the following equation (1):
pt(m|m') = Pr{St = m | St-1 = m'}  …(1)
Here, Pr{A|B} denotes the conditional probability that A occurs given that B has occurred, and Pr{A; B} denotes the probability that A and B occur together. The convolutional code is taken to start from state S0 = 0, output X1^τ, and end at Sτ = 0.
The noisy memoryless channel takes X1^τ as input and produces the output Y1^τ, where the output sequence is written Yt^t' = Yt, Yt+1, ..., Yt'. The transition probability of the noisy memoryless channel can be defined, for all t (1 ≤ t ≤ τ), through a function R(·|·) satisfying the following equation (2):
Pr{Y1^t | X1^t} = Π(j=1..t) R(Yj | Xj)  …(2)
The probability λt defined by the following equation (3) represents the likelihood of the input information at time t when Y1^τ is received; this λt is the soft output to be obtained:
λt = Pr{it = 1 | Y1^τ} / Pr{it = 0 | Y1^τ}  …(3)
The BCJR algorithm defines the probabilities αt, βt, γt shown in the following equations (4) to (6):
αt(m) = Pr{St = m; Y1^t}  …(4)
βt(m) = Pr{Yt+1^τ | St = m}  …(5)
γt(m', m, i) = Pr{St = m; Yt; it = i | St-1 = m'}  …(6)
The contents of αt, βt, γt are illustrated briefly in Fig. 1, which shows the relation between these probabilities. αt-1 corresponds to the probability of passing through each state at time t-1, computed in time order from the coding start state S0 = 0 on the basis of the received words. βt corresponds to the probability of passing through each state at time t, computed in reverse time order from the coding end state Sτ = 0. γt corresponds to the reception probability of the output of each branch on a state transition at time t, computed from the received word and the input probability at time t.
The soft output can be expressed with these αt, βt, γt by the following equation (7):
λt = [Σ(m=0..M-1) Σ(m'=0..M-1) αt-1(m') γt(m', m, 1) βt(m)] / [Σ(m=0..M-1) Σ(m'=0..M-1) αt-1(m') γt(m', m, 0) βt(m)]  …(7)
For t = 1, 2, ..., τ, the following equation (8) holds:
αt(m) = Σ(m'=0..M-1) Σ(i=0,1) αt-1(m') γt(m', m, i)  …(8)
where α0(0) = 1, α0(m) = 0 (m ≠ 0).
Similarly, for t = 1, 2, ..., τ-1, the following equation (9) holds:
βt(m) = Σ(m'=0..M-1) Σ(i=0,1) βt+1(m') γt+1(m, m', i)  …(9)
where βτ(0) = 1, βτ(m) = 0 (m ≠ 0).
For γt, the following equation (10) holds:
γt(m', m, i) = pt(m|m') · R(Yt, X) if the transition m' → m occurs on input i with encoder output X, and 0 otherwise  …(10)
From these relations, the BCJR algorithm obtains the soft output λt by the following steps (a) to (c):
(a) Each time Yt is received, compute αt(m) and γt(m', m, i) with equations (8) and (10).
(b) After the whole sequence Y1^τ has been received, compute βt(m) for every state m at every time t with equation (9).
(c) Substitute the αt, βt, γt computed in (a) and (b) into equation (7) to compute the soft output λt at each time t.
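As a concrete illustration of steps (a) to (c), the following sketch runs the BCJR recursions on a hypothetical 2-state trellis over a binary symmetric channel. The toy code (state = previous input bit, branch output = input XOR previous state), the channel, and all parameter values are assumptions for illustration only, not the encoder of the embodiment described later.

```python
# Hypothetical 2-state trellis: state = previous input bit,
# branch output x = input XOR previous state (a toy code).
M = 2                      # number of states
P = 0.1                    # crossover probability of a binary symmetric channel

def next_state(m_prev, i):
    return i

def output(m_prev, i):
    return i ^ m_prev

def R(y, x):               # channel transition probability, eq. (2)
    return 1.0 - P if y == x else P

def gamma(y, m_prev, m, i):    # eq. (10) with uniform input priors
    if next_state(m_prev, i) != m:
        return 0.0
    return 0.5 * R(y, output(m_prev, i))

def bcjr(ys):
    tau = len(ys)
    # step (a): forward recursion, eq. (8), alpha_0(0) = 1
    alpha = [[0.0] * M for _ in range(tau + 1)]
    alpha[0][0] = 1.0
    for t in range(1, tau + 1):
        for m in range(M):
            alpha[t][m] = sum(alpha[t - 1][mp] * gamma(ys[t - 1], mp, m, i)
                              for mp in range(M) for i in (0, 1))
    # step (b): backward recursion, eq. (9), beta_tau(0) = 1 (terminated code)
    beta = [[0.0] * M for _ in range(tau + 1)]
    beta[tau][0] = 1.0
    for t in range(tau - 1, -1, -1):
        for m in range(M):
            beta[t][m] = sum(beta[t + 1][mp] * gamma(ys[t], m, mp, i)
                             for mp in range(M) for i in (0, 1))
    # step (c): soft output, eq. (7)
    lams = []
    for t in range(1, tau + 1):
        num = sum(alpha[t - 1][mp] * gamma(ys[t - 1], mp, m, 1) * beta[t][m]
                  for m in range(M) for mp in range(M))
        den = sum(alpha[t - 1][mp] * gamma(ys[t - 1], mp, m, 0) * beta[t][m]
                  for m in range(M) for mp in range(M))
        lams.append(num / den)
    return lams

# Encode 1,1,0,0 (the final 0 terminates the toy trellis in state 0),
# then decode the noiseless reception ys = xs.
info = [1, 1, 0, 0]
xs, s = [], 0
for i in info:
    xs.append(output(s, i))
    s = next_state(s, i)
lam = bcjr(xs)
decisions = [1 if l > 1.0 else 0 for l in lam]
print(decisions)           # [1, 1, 0, 0]
```

With noiseless reception, hard decisions on λt recover the information bits; the point of the soft output is that λt also quantifies the reliability of each decision.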
However, the BCJR algorithm described above has the problems that it involves multiplications, so the amount of computation is large, and that the code must be terminated, so continuous data cannot be received.
Of these two problems, to reduce the amount of computation, the Max-Log-BCJR algorithm and the Log-BCJR algorithm (called the Max-Log-MAP algorithm and Log-MAP algorithm in the paper) have been proposed (Robertson, Villebrun and Hoeher, "A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain," Proceedings of the IEEE International Conference on Communications, pp. 1009-1013, June 1995); to allow reception of continuous data, the SW-BCJR algorithm, which performs sliding-window processing, has been proposed (Benedetto and Montorsi, "Soft-output decoding algorithms in iterative decoding of turbo codes," TDA Progress Report 42-124, February 1996).
The content of these algorithms is described next.
First, the content of the Max-Log-BCJR algorithm and the Log-BCJR algorithm is described.
The Max-Log-BCJR algorithm expresses the probabilities αt, βt, γt, λt by their logarithms to base e (natural logarithms), replacing products of probabilities with sums of log probabilities as in equation (11), and approximating sums of probabilities with the maximum of log probabilities as in equation (12). Here max(x, y) is the function selecting the larger of x and y:
log(e^x · e^y) = x + y  …(11)
log(e^x + e^y) ≈ max(x, y)  …(12)
To simplify the description, the logarithms of αt, βt, γt, λt are written Iαt, Iβt, Iγt, Iλt as in the following equations (13) to (15); the prefix "I" denotes a logarithm to base e:
Iαt(m) = log(αt(m))  …(13)
Iβt(m) = log(βt(m))  …(14)
Iγt(m', m, i) = log(γt(m', m, i))  …(15)
The Max-Log-BCJR algorithm approximates Iαt, Iβt, Iγt as in the following equations (16) to (18). Here, the max over m' in Iαt(m) and Iβt(m) is taken over the states m' from which a transition to state m exists for input i:
Iαt(m) ≈ max over (m', i) [Iαt-1(m') + Iγt(m', m, i)]  …(16)
Iβt(m) ≈ max over (m', i) [Iβt+1(m') + Iγt+1(m, m', i)]  …(17)
Iγt(m', m, i) = log(pt(m|m')) + log(R(Yt, X))  …(18)
where X is the encoder output on the transition from m' to m.
Similarly, Iλt is approximated as in the following equation (19). In the first term on the right-hand side, the max over m' is taken over the states m' from which a transition to m exists on input 1; in the second term, over the states m' from which a transition exists on input 0:
Iλt ≈ max over (m, m') [Iαt-1(m') + Iγt(m', m, 1) + Iβt(m)] - max over (m, m') [Iαt-1(m') + Iγt(m', m, 0) + Iβt(m)]  …(19)
From these relations, the Max-Log-BCJR algorithm obtains the soft output Iλt by the following steps (a) to (c):
(a) Each time Yt is received, compute Iαt(m) and Iγt(m', m, i) with equations (16) and (18).
(b) After the whole sequence Y1^τ has been received, compute Iβt(m) for every state m at every time t with equation (17).
(c) Substitute the Iαt, Iβt, Iγt computed in (a) and (b) into equation (19) to compute the soft output Iλt at each time t.
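In hardware terms, recursion (16) amounts to one add-compare-select (ACS) operation per state: add the branch metric to each predecessor metric, compare, select the maximum. A minimal sketch of one forward step follows; the 2-state trellis connectivity and the branch-metric values are made-up illustrations, not taken from the embodiment.

```python
import math

NEG_INF = float("-inf")

# One forward step of recursion (16) on an M-state trellis: for each new
# state m, add the branch metric to each predecessor metric, compare, and
# select the maximum -- an add-compare-select operation per state.
def max_log_alpha_step(Ia_prev, Ig):
    # Ia_prev[m'] = I-alpha_{t-1}(m'); Ig[(m', m, i)] = I-gamma_t(m', m, i)
    M = len(Ia_prev)
    Ia = [NEG_INF] * M
    for (mp, m, i), g in Ig.items():
        Ia[m] = max(Ia[m], Ia_prev[mp] + g)
    return Ia

# Toy 2-state example with made-up branch metrics:
Ig = {(0, 0, 0): math.log(0.45), (0, 1, 1): math.log(0.05),
      (1, 0, 0): math.log(0.05), (1, 1, 1): math.log(0.45)}
Ia = max_log_alpha_step([0.0, NEG_INF], Ig)   # I-alpha_0(0) = log 1 = 0
print(Ia[0] > Ia[1])       # True: state 0 is the likelier successor here
```

The same ACS kernel, run in reverse over the branch metrics, implements recursion (17) for Iβ.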
As described above, the Max-Log-BCJR algorithm contains no multiplications, so compared with the BCJR algorithm its amount of computation can be reduced substantially.
If the sum of probabilities is transformed as in the following equation (20), the second term on the right-hand side becomes a function of the single variable |x-y|, so the correct logarithm of the sum can be obtained by tabulating this term in advance:
log(e^x + e^y) = max(x, y) + log(1 + e^(-|x-y|))  …(20)
Replacing every use of approximation (12) in the Max-Log-BCJR algorithm with equation (20), so that the probabilities are computed exactly, yields the Log-BCJR algorithm. Compared with the Max-Log-BCJR algorithm, its amount of computation increases, but it still contains no multiplications, and its output, apart from quantization error, is nothing other than the logarithm of the soft output of the BCJR algorithm itself.
Since the second term on the right-hand side of equation (20) is a function of the single variable |x-y|, it can be evaluated easily and accurately by, for example, table lookup. Therefore the Log-BCJR algorithm can obtain a more accurate soft output than the Max-Log-BCJR algorithm.
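The table-lookup evaluation of the correction term in equation (20) can be sketched as follows; the table step size and length are illustrative choices of my own, not values from this document.

```python
import math

# The correction term log(1 + e^{-|x-y|}) in eq. (20) depends only on |x-y|,
# so it can be precomputed in a small table (step and size are hypothetical).
STEP, SIZE = 0.25, 32
TABLE = [math.log(1.0 + math.exp(-k * STEP)) for k in range(SIZE)]

def max_star(x, y):
    d = abs(x - y)
    idx = min(int(d / STEP), SIZE - 1)
    return max(x, y) + TABLE[idx]

# max_star approximates log(e^x + e^y) far better than max alone:
x, y = 1.0, 1.5
exact = math.log(math.exp(x) + math.exp(y))
print(abs(max_star(x, y) - exact) < abs(max(x, y) - exact))  # True
```

Using max_star in place of max in recursions (16), (17) and (19) is exactly the change that turns Max-Log-BCJR into Log-BCJR.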
Next, the content of the SW-BCJR algorithm is described.
Because the BCJR algorithm needs the code to be terminated in order to compute βt, it cannot receive continuous data. The SW-BCJR algorithm gives 1/M as the initial value of βt for all states and, introducing a truncation length as in Viterbi decoding, obtains the soft output by recursing backwards over a set truncation length D (see Fig. 2).
Like the ordinary BCJR algorithm, the SW-BCJR algorithm first initializes α0 and then obtains the soft output at each time by performing the following operations (a) to (e) at every time instant:
(a) Obtain γt from the received value and the transition probability at time t.
(b) Initialize βt(m) = 1/M for every state m.
(c) Compute βt-1, ..., βt-D from γt-D+1, ..., γt.
(d) Compute the soft output λt-D at time t-D from the obtained βt-D and αt-D-1 with the following equation (21):
λt-D = [Σ αt-D-1(m') γt-D(m', m, 1) βt-D(m)] / [Σ αt-D-1(m') γt-D(m', m, 0) βt-D(m)]  …(21)
(e) Compute αt-D from αt-D-1 and γt-D.
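The per-time-instant schedule of steps (a) to (e) can be sketched on the same kind of hypothetical 2-state trellis as before (state = previous input bit, branch output = input XOR state) over a binary symmetric channel; the toy code, the channel, and all parameters are assumptions for illustration.

```python
# Hypothetical 2-state trellis and binary symmetric channel (toy values).
P, M, D = 0.1, 2, 3

def gamma(y, mp, m, i):              # eq. (10) with uniform input priors
    if m != i:                       # next state equals the input bit
        return 0.0
    x = i ^ mp
    return 0.5 * ((1.0 - P) if y == x else P)

def beta_step(beta_next, y):         # one backward step of eq. (9)
    return [sum(beta_next[mp] * gamma(y, m, mp, i)
                for mp in range(M) for i in (0, 1)) for m in range(M)]

def alpha_step(alpha_prev, y):       # one forward step of eq. (8)
    return [sum(alpha_prev[mp] * gamma(y, mp, m, i)
                for mp in range(M) for i in (0, 1)) for m in range(M)]

def sw_bcjr(ys):
    alpha = [1.0, 0.0]               # alpha_0(0) = 1
    out = []
    for t in range(D + 1, len(ys) + 1):
        beta = [1.0 / M] * M         # step (b): uniform beta_t
        for u in range(t, t - D, -1):        # step (c): D backward steps
            beta = beta_step(beta, ys[u - 1])
        s = t - D                    # step (d): soft output lambda_s, eq. (21)
        num = sum(alpha[mp] * gamma(ys[s - 1], mp, m, 1) * beta[m]
                  for m in range(M) for mp in range(M))
        den = sum(alpha[mp] * gamma(ys[s - 1], mp, m, 0) * beta[m]
                  for m in range(M) for mp in range(M))
        out.append(num / den)
        alpha = alpha_step(alpha, ys[s - 1])  # step (e): advance alpha
    return out

ys = [1, 0, 1, 0, 0, 0]              # noiseless reception of inputs 1,1,0,...
lam = sw_bcjr(ys)
print([1 if l > 1.0 else 0 for l in lam])   # [1, 1, 0]
```

Note that no termination is needed: the decoder emits λ for time t-D as soon as the received values up to time t are available, which is what makes continuous reception possible.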
In addition, the paper by Benedetto et al. cited above also proposes the SW-Log-BCJR algorithm, combining the SW-BCJR and Log-BCJR algorithms, and the SW-Max-Log-BCJR algorithm (called the SWAL-BCJR algorithm in the paper), combining the SW-BCJR and Max-Log-BCJR algorithms.
With the SW-Max-Log-BCJR or SW-Log-BCJR algorithm, continuous data can be received and the soft output obtained. However, unlike decoding of a terminated code, these algorithms must compute (number of states) x (truncation length) values of βt for each soft-output decoding; even though they contain no multiplications, an actual implementation requires a very large amount of computation.
In short, the SW-Max-Log-BCJR and SW-Log-BCJR algorithms can receive continuous convolutionally encoded data and obtain the soft output, but they have the problem that the amount of computation per code output is large, making high-speed operation difficult.
If the SW-Log-BCJR algorithm, combining the SW-BCJR and Log-BCJR algorithms, or the SW-Max-Log-BCJR algorithm (also called the SWAL-MAP algorithm), combining the SW-BCJR and Max-Log-BCJR algorithms, is adopted, the soft output of continuous data can be obtained with a reduced amount of computation.
However, since these algorithms obtain the soft output by recursing back over the truncation length D, they must compute (number of states) x (truncation length D) values of β when obtaining one decoding output. Although they contain no multiplications, they therefore have the problem of requiring an enormous amount of processing.
Summary of the invention
It is therefore an object of the present invention to provide a soft-output decoder and soft-output decoding method for convolutional codes capable of high-speed operation.
A further object of the invention is to provide a soft-output decoder and soft-output decoding method for convolutional codes that can decode soft outputs with a simple configuration.
In the present invention, probability information for each transition state of the convolutional code is obtained, and while the soft output is computed and output from this probability information, the probability information is stored separately in units of a prescribed truncation length, and the updating of probability information within the truncation length and the computation of soft outputs beyond the truncation length are carried out in parallel.
Because the updating of probability information within the truncation length and the computation of soft outputs beyond the truncation length proceed in parallel, high-speed operation can be achieved with a small amount of computation, or a small number of memory accesses, per clock.
Specifically, the present invention is a soft-output decoder for convolutional codes comprising: probability computation means for obtaining probability information for each transition state of the convolutional code; probability storage means for storing the probability information obtained by the probability computation means; and soft-output computation means for obtaining the soft output from the probability information stored in the probability storage means; characterized in that the probability storage means stores more than a truncation length of the probability information, and the updating by the probability storage means of the probability information within the truncation length and the computation by the soft-output computation means of soft outputs beyond the truncation length are carried out in parallel.
The present invention is also a soft-output decoding method for convolutional codes comprising: a first step of obtaining probability information for each transition state of the convolutional code; a second step of storing more than a truncation length of the probability information obtained in the first step on a recording medium; and a third step of obtaining the soft output from the probability information stored on the recording medium in the second step; characterized in that the updating in the second step of the probability information within the truncation length and the computation in the third step of soft outputs beyond the truncation length are carried out in parallel.
According to another aspect of the present invention, there is provided a soft-output decoding method for convolutional codes, comprising: a first probability computation step of successively computing, for each received value, a first probability determined by the code output pattern and the received value; a second probability computation step of computing, for each received value and from the first probabilities, a second probability for each state in the time-axis direction; a third probability computation step of computing, for each received value and from the first probabilities, a third probability for each state in the reverse direction of the time axis from a prescribed reference time point; and a soft-output computation step of computing the soft output from the first, second and third probabilities. In the third probability computation step, the first probabilities are divided in units of a prescribed truncation length, the reference time points are set apart by the truncation length on the time axis, and the third probabilities are computed in parallel, as a plurality of sequences, by processing the first probabilities in a plurality of series, taking at least the truncation-length range corresponding to each reference time point as a unit; from the plurality of sequences of third probabilities so computed, the third probability corresponding to the truncation length is selected, and the third probability corresponding to the received value is output. In the first probability computation step, the first probabilities are temporarily held in a probability storage device and read out and output in the order required by the processing of the second and third probability computation steps and the soft-output computation step; in the third probability computation step, the third probabilities are temporarily held in a probability storage device and read out and output in the order required by the processing of the soft-output computation step.
Brief Description Of Drawings
Fig. 1 illustrates the contents of αt, βt and γt in the BCJR algorithm.
Fig. 2 illustrates the content of the SW-BCJR algorithm.
Fig. 3 is a block diagram of a communication model to which the present invention is applied.
Fig. 4 is a block diagram showing the configuration of the convolutional encoder in the communication model.
Fig. 5 shows the trellis of the convolutional encoder.
Fig. 6 is a block diagram showing the configuration of the decoder in the communication model.
Fig. 7 illustrates the soft-output computation procedure in the communication model.
Fig. 8 is a block diagram showing the configuration of the Iγ computation/storage circuit in the decoder.
Fig. 9 is a timing chart explaining the operation of the RAMs and other elements constituting the Iγ computation/storage circuit.
Fig. 10 is a block diagram showing the configuration of the Iα computation/storage circuit in the decoder.
Fig. 11 is a block diagram showing the configuration of the Iα computation circuit in the Iα computation/storage circuit.
Fig. 12 is a block diagram showing the configuration of the add-compare-select circuit in the Iα computation circuit.
Fig. 13 is a timing chart explaining the operation of the registers, RAMs and other elements constituting the Iα computation/storage circuit.
Fig. 14 is a block diagram showing the configuration of the Iβ computation/storage circuit.
Fig. 15 is a block diagram showing the configuration of the Iβ computation circuit in the Iβ computation/storage circuit.
Fig. 16 is a block diagram showing the configuration of the add-compare-select circuit in the Iβ computation circuit.
Fig. 17 is a timing chart explaining the operation of the registers and other elements constituting the Iβ computation/storage circuit.
Fig. 18 is a block diagram showing the configuration of the soft-output computation circuit in the decoder.
Fig. 19 is a block diagram showing the configuration of the Iλ1 computation circuit in the soft-output computation circuit.
Fig. 20 is a block diagram showing the configuration of the Iλ0 computation circuit in the soft-output computation circuit.
Fig. 21 is a timing chart explaining the operation of the soft-output computation circuit.
Figs. 22A, 22B, 22C and 22D illustrate the memory management in the Iγ computation/storage circuit.
Fig. 23 is a timing chart of the memory management.
Fig. 24 is a block diagram showing the configuration of an add-compare-select circuit for the SW-Log-BCJR algorithm.
Fig. 25 is a block diagram showing another configuration example of the Iγ computation/storage circuit in the decoder.
Fig. 26 is a timing chart explaining the operation of that Iγ computation/storage circuit.
Fig. 27 is a block diagram showing another configuration example of the Iβ computation/storage circuit in the decoder.
Fig. 28 is a timing chart explaining the operation of that Iβ computation/storage circuit.
Best mode for carrying out the invention
An embodiment of the present invention is described in detail below with reference to the drawings.
The present invention is applied, for example, to a communication model 1 configured as shown in Fig. 3. In this communication model 1, digital information D0 is convolutionally encoded by a convolutional encoder 2; the convolutionally encoded data sequence is supplied through a noisy memoryless channel 3 to a decoder 4, which soft-output decodes the convolutionally encoded data sequence.
The convolutional encoder 2 in this communication model 1 is a rate 1/2 encoder with 1 input and 2 outputs. As shown in Fig. 4, it consists of an input terminal 21 for the 1-bit input it, output terminals 22a and 22b for the 2-bit output Xt, three exclusive-OR circuits (hereinafter "XOR circuits") 23 to 25, and two registers 26 and 27.
The input terminal 21 is connected to the output terminal 22a and to an input of XOR circuit 23. The output of XOR circuit 23 is connected to the input of register 26 and to an input of XOR circuit 24. The output of XOR circuit 24 is connected to the output terminal 22b. The output of register 26 is connected to the input of register 27 and to an input of XOR circuit 25. The output of register 27 is connected to an input of XOR circuit 24 and to an input of XOR circuit 25. The output of XOR circuit 25 is connected to an input of XOR circuit 23.
In this convolutional encoder 2, the 1-bit input it supplied to the input terminal 21 is output as-is from the output terminal 22a and is also input to XOR circuit 23. XOR circuit 23 forms the exclusive OR of the input it and the fed-back output of XOR circuit 25, which is the exclusive OR of the contents of registers 26 and 27. The output of XOR circuit 23 is supplied, directly and through registers 26 and 27 in turn, to XOR circuits 24 and 25. XOR circuit 24 outputs the exclusive OR of the output of XOR circuit 23 and the output of register 27 as the other 1 bit from the output terminal 22b.
The convolutional encoder 2 is configured as described above; whenever a 1-bit input it is applied to the input terminal 21, it outputs the 2-bit output sequence Xt from the output terminals 22a and 22b. Fig. 5 shows the trellis of this convolutional encoder 2; the number of states M is 4.
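Read literally, the wiring described above gives a recursive systematic encoder. The following sketch is one interpretation of that wiring, with register initial values assumed to be zero; it is an illustration, not authoritative for Fig. 4.

```python
# One reading of the Fig. 4 wiring as described in the text:
# XOR 23 combines the input with the feedback of XOR 25 (reg26 XOR reg27),
# XOR 24 forms the parity output from XOR 23's output and reg27,
# and terminal 22a passes the input through unchanged (systematic bit).
def convolve(bits):
    reg26 = reg27 = 0                # initial register contents assumed zero
    out = []
    for i in bits:
        a = i ^ reg26 ^ reg27        # XOR 23 with feedback from XOR 25
        parity = a ^ reg27           # XOR 24 -> output terminal 22b
        out.append((i, parity))      # (terminal 22a, terminal 22b)
        reg26, reg27 = a, reg26      # shift the two registers
    return out

print(convolve([1, 0, 0, 0]))        # [(1, 1), (0, 1), (0, 1), (0, 0)]
```

Because the feedback makes the encoder recursive, a single 1 followed by zeros produces an unbounded parity response; the two registers give 4 states, matching the trellis of Fig. 5.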
The decoder 4 in this communication model 1 is a decoder based on the SW-Max-Log-BCJR algorithm, corresponding to the convolutional encoder 2 of constraint length 3 shown in Fig. 4.
This decoder 4 processes the received values Yt of the encoder 2, input through the memoryless channel 3, with truncation length D = 4, and outputs a soft output λt. As shown in Fig. 6, the decoder 4 comprises: a controller 41 that controls the overall operation; input terminals 42y, 42p1 and 42p2 for the received value Yt and the a priori probability information Pr1 = log Pr{it = 0} and Pr2 = log Pr{it = 1}; an Iγ computation/storage circuit 43; an Iα computation/storage circuit 44; an Iβ computation/storage circuit 45; a soft-output computation circuit 46; and an output terminal 47 that outputs the soft output Iλt.
Unlike the original SW-Max-Log-BCJR algorithm, this decoder 4 computes, for each time slot to be decoded, (number of states × truncation length) values of Iβ. Specifically, as shown in Fig. 7, after computing the (number of states × truncation length) values of Iβ (illustrated by β_t to β_{t-D+1}), the decoder 4 computes the Iβ of the following truncation length (illustrated by β_{t-D} to β_{t-2D+1}) while successively computing the soft outputs of the preceding truncation length, and then proceeds to the Iβ of the next truncation length in turn. In this way the decoder 4 computes in parallel the Iβ within a truncation length and the Iβ beyond the truncation length traced backward, so that the Iβ computed per clock amounts to (number of states × 2).
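One backward pass over a window, started from an all-equal value at the (unknown) truncation state, can be sketched as follows. This is an illustrative software model under assumed conventions, not the hardware: `gammas[k][m][i]` is a hypothetical branch-metric table and `succ[m][i]` a hypothetical next-state table for a generic trellis.

```python
def beta_window(gammas, n_states, succ):
    """One sliding-window backward (Iβ) pass, Max-Log recursion of formula (17)."""
    beta = [0.0] * n_states                      # all-equal start: truncation state unknown
    history = []
    for g in reversed(gammas):                   # recurse back over the window
        beta = [max(g[m][i] + beta[succ[m][i]] for i in (0, 1))
                for m in range(n_states)]        # keep the larger path metric per state
        history.append(beta)
    history.reverse()                            # return in time order
    return history
```

In the decoder described above, two such passes run in parallel, offset by twice the truncation length, so that each window's Iβ values are ready exactly when the corresponding soft outputs are computed.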
The Iγ calculation/storage circuit 43 is supplied with a control signal SCγ from the controller 41, and with the received value Y_t and the a priori probability information Pr1, Pr2 through the input terminals 42y, 42p1, 42p2. Using Y_t, Pr1 and Pr2, the circuit 43 computes and stores Iγ for each received value Y_t according to formula (18), and then supplies Iγ to the Iα calculation/storage circuit 44, the Iβ calculation/storage circuit 45 and the soft output calculation circuit 46 in the order suited to each of their processes.
This Iγ calculation/storage circuit 43 serves as first calculation means for computing, for each received value Y_t, the first probability Iγ determined by the code pattern and the received value. The Iγ supplied from the circuit 43 to the Iα calculation/storage circuit 44 is denoted Iγ(α), the Iγ supplied to the Iβ calculation/storage circuit 45 is denoted Iγ(β1) and Iγ(β2), and the Iγ supplied to the soft output calculation circuit 46 is denoted Iγ(λ).
The Iα calculation/storage circuit 44 is supplied with a control signal SCα from the controller 41 and with Iγ(α) from the Iγ calculation/storage circuit 43. Using Iγ(α), it computes and stores Iα according to formula (16), and then supplies Iα to the soft output calculation circuit 46 in the order suited to that processing. The circuit 44 serves as second calculation means for computing, for each received value Y_t and on the basis of the first probability Iγ, the second probability Iα of each state in time-series order from the coding start state. The Iα supplied from the circuit 44 to the soft output calculation circuit 46 is denoted Iα(λ).
The Iβ calculation/storage circuit 45 is supplied with a control signal SCβ from the controller 41 and with Iγ(β1) and Iγ(β2) from the Iγ calculation/storage circuit 43; Iγ(β1) and Iγ(β2) are offset in time by (truncation length × 2). Using Iγ(β1) and Iγ(β2), the circuit 45 computes and stores two systems of Iβ in parallel according to formula (17), and supplies the Iβ of one of these systems to the soft output calculation circuit 46 in the order suited to that processing. The circuit 45 constitutes third calculation means for computing, for each received value Y_t and on the basis of the first probability Iγ, the third probability Iβ of each state in reverse time-series order from the truncation state. The Iβ supplied from the circuit 45 to the soft output calculation circuit 46 is denoted Iβ(λ).
The soft output calculation circuit 46 is supplied with Iγ(λ) from the Iγ calculation/storage circuit 43, Iα(λ) from the Iα calculation/storage circuit 44, and Iβ(λ) from the Iβ calculation/storage circuit 45. Using Iγ(λ), Iα(λ) and Iβ(λ), it computes Iλ_t according to formula (19) and rearranges the results into time-series order for output.
The concrete configurations of the Iγ calculation/storage circuit 43, the Iα calculation/storage circuit 44, the Iβ calculation/storage circuit 45 and the soft output calculation circuit 46 will now be described.
Fig. 8 shows the configuration of the Iγ calculation/storage circuit 43. This circuit comprises: input terminals 301Y, 301P1, 301P2 and 301S for the received value Y_t, the a priori probability information Pr1, Pr2 and the control signal SCγ, respectively; and ROMs (read-only memories) 302a to 302d, each constituting a data table, which use the received value Y_t supplied to input terminal 301Y as a read address signal and output, for each state m, the probabilities IR(Y_t|00), IR(Y_t|01), IR(Y_t|10) and IR(Y_t|11) of the received value Y_t.
The Iγ calculation/storage circuit 43 further has adders 303a and 303b, which add the a priori probability information Pr1 supplied to input terminal 301P1 to the probabilities IR(Y_t|00) and IR(Y_t|01) output by ROMs 302a and 302b, thereby obtaining the probabilities Iγ[00] and Iγ[01] corresponding to the branches [00] and [01] on the trellis, and adders 303c and 303d, which add the a priori probability information Pr2 supplied to input terminal 301P2 to the probabilities IR(Y_t|10) and IR(Y_t|11) output by ROMs 302c and 302d, thereby obtaining the probabilities Iγ[10] and Iγ[11] corresponding to the branches [10] and [11] on the trellis.
Here, for a convolutional code of coding rate k/n, the total number of bits of the probabilities Iγ[00] to Iγ[11] output by the adders 303a to 303d is (word length × 2^n). In this example, each of the probabilities Iγ[00] to Iγ[11] is set to 4 bits, so the Iγ calculation/storage circuit 43 outputs the probabilities Iγ[00], Iγ[01], Iγ[10], Iγ[11] as 16 bits in total.
The Iγ calculation/storage circuit 43 further has: RAMs (random access memories) 304a to 304d, which successively store the probabilities Iγ[00] to Iγ[11] output by the adders 303a to 303d in accordance with the control signal SCγ and output them in a prescribed order; a selection circuit 308, which in accordance with the control signal SCγ selectively takes out the Iγ output by the RAMs 304a to 304d as Iγ(α), Iγ(β1), Iγ(β2) and Iγ(λ); and output terminals 309a to 309d for Iγ(α), Iγ(β1), Iγ(β2) and Iγ(λ).
In the Iγ calculation/storage circuit 43 shown in Fig. 8, for each received value Y_t, the probabilities IR(Y_t|00), IR(Y_t|01), IR(Y_t|10) and IR(Y_t|11) of the received value Y_t for each state m are output by the ROMs 302a to 302d, and the probabilities Iγ[00], Iγ[01], Iγ[10], Iγ[11] corresponding to the branches with outputs [00], [01], [10], [11] on the trellis are obtained by the adders 303a to 303d. These probabilities Iγ[00] to Iγ[11] are then stored successively in the RAMs 304a to 304d, read out in the prescribed manner, and taken out by the selection circuit 308 as Iγ(α), Iγ(β1), Iγ(β2) and Iγ(λ).
Fig. 9 is a timing chart illustrating the management of the RAMs 304a to 304d. The four RAMs 304a to 304d each operate as a memory of 16 bits × 4 words so that each can store the output data Iγ[00] to Iγ[11] of the adders 303a to 303d for one truncation length D, and the probabilities Iγ[00] to Iγ[11] are stored in them cyclically in succession (A in Fig. 9). In Fig. 9, the probabilities Iγ[00] to Iγ[11] at the respective times t = 1, 2, 3, ... are denoted γ1, γ2, γ3, ....
After a delay corresponding to 2D, twice the truncation length D, the probabilities Iγ[00] to Iγ[11] are read out of the RAMs 304a to 304d and taken out by the selection circuit 308 as the probability Iγ(α) to be supplied to the Iα calculation/storage circuit 44 (B in Fig. 9).
Further, the following operations are performed alternately: immediately after the probabilities Iγ[00] to Iγ[11] of length 2D have been written into RAMs 304a and 304b, they are read out in the reverse of the writing order; or immediately after the probabilities Iγ[00] to Iγ[11] of length 2D have been written into RAMs 304c and 304d, they are read out in the reverse of the writing order. The probabilities Iγ[00] to Iγ[11] thus read out are taken out by the selection circuit 308 as the probability Iγ(β1) to be supplied to the Iβ calculation/storage circuit 45 (C in Fig. 9).
Likewise, the following operations are performed alternately: immediately after the probabilities Iγ[00] to Iγ[11] of length 2D have been written into RAMs 304b and 304c, they are read out in the reverse of the writing order; or immediately after the probabilities Iγ[00] to Iγ[11] of length 2D have been written into RAMs 304d and 304a, they are read out in the reverse of the writing order. The probabilities Iγ[00] to Iγ[11] thus read out are taken out by the selection circuit 308 as the probability Iγ(β2) to be supplied to the Iβ calculation/storage circuit 45 (D in Fig. 9).
In addition, after the probabilities Iγ[00] to Iγ[11] of one truncation length D have been written into each of the RAMs 304a to 304d and a further time 2D has elapsed, these probabilities Iγ[00] to Iγ[11] are read out in the reverse of the writing order and taken out by the selection circuit 308 as the probability Iγ(λ) to be supplied to the soft output calculation circuit 46 (E in Fig. 9).
Fig. 10 shows the configuration of the Iα calculation/storage circuit 44. This circuit has: input terminals 401 and 402 for the probability Iγ(α) and the control signal SCα, respectively; an Iα calculation circuit 403, which computes Iα according to formula (16) from the probability Iγ(α) supplied through input terminal 401 and the Iα of the preceding time set in a register 405; and a selector 404, which sets either the Iα output by the Iα calculation circuit 403 or an initial value Iα0 into the register 405.
The selector 404, in accordance with the control signal SCα, selects the initial value Iα0 only at the initialization time point and selects the output data of the Iα calculation circuit 403 at all other times. Initialization is performed at the time point immediately before the Iγ calculation/storage circuit 43 begins to output the probability Iγ(α). As to the initial value Iα0, when the receiving side knows the starting state of the coding, log 1 (= 0) is given as the value for state 0 and log 0 (= -∞) as the value for the other states. When the receiving side does not know the starting state of the coding, log 1/M (log 1/4 in this example) is given to all states according to the definition; in practice, however, it suffices to give the same value to all states, and for example 0 may be given to all states.
Fig. 11 shows the configuration of the Iα calculation circuit 403. This circuit is made up of four addition-comparison-selection circuits 411a to 411d. The probabilities Iγ_t[00] to Iγ_t[11] and the Iα of the preceding time, Iα_{t-1}(0) to Iα_{t-1}(3), are distributed to the addition-comparison-selection circuits 411a to 411d in accordance with the transitions on the trellis. Each of the circuits 411a to 411d selects the larger of the two sums of Iα and Iγ, thereby obtaining the Iα of each state at the next time.
The addition-comparison-selection circuit 411a is supplied with Iγ_t[00] and Iγ_t[11] together with Iα_{t-1}(0) and Iα_{t-1}(2), and computes the probability Iα_t(0) of state 0. The addition-comparison-selection circuit 411b is supplied with Iγ_t[11] and Iγ_t[00] together with Iα_{t-1}(2) and Iα_{t-1}(0), and computes the probability Iα_t(1) of state 1.
The addition-comparison-selection circuit 411c is supplied with Iγ_t[10] and Iγ_t[01] together with Iα_{t-1}(1) and Iα_{t-1}(3), and computes the probability Iα_t(2) of state 2. The addition-comparison-selection circuit 411d is supplied with Iγ_t[01] and Iγ_t[10] together with Iα_{t-1}(1) and Iα_{t-1}(3), and computes the probability Iα_t(3) of state 3.
Each of the addition-comparison-selection circuits 411a to 411d has the common configuration shown in Fig. 12; the circuit 411a is described here as an example. An adder 421 adds Iγ_t[00] (the probability of the dotted-line branch into state 0 in Fig. 5) and Iα_{t-1}(0) (the probability of state 0 at the preceding time). An adder 422 adds Iγ_t[11] (the probability of the solid-line branch into state 0 in Fig. 5) and Iα_{t-1}(2) (the probability of state 2 at the preceding time). A comparison circuit 423 then compares the sums of the adders 421 and 422, and a selector 424 takes out the larger of the two sums as the probability Iα_t(0). Although a description is omitted, the addition-comparison-selection circuits 411b to 411d operate in the same way.
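The addition-comparison-selection operation amounts to the following minimal sketch (the function name and argument names are illustrative, not part of the specification):

```python
def acs(g_a, a_a, g_b, a_b):
    """Addition-comparison-selection: keep the larger of two candidate
    path metrics (the Max-Log approximation of formula (16))."""
    x = g_a + a_a               # adder 421
    y = g_b + a_b               # adder 422
    return x if x >= y else y   # comparison circuit 423 + selector 424
```

For instance, Iα_t(0) would be obtained as acs(Iγ_t[00], Iα_{t-1}(0), Iγ_t[11], Iα_{t-1}(2)).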
Returning to Fig. 10, the Iα calculation/storage circuit 44 further has: RAMs 406 and 407, which in accordance with the control signal SCα successively store the probabilities Iα(0) to Iα(3) output by the register 405 and output them in a prescribed order; a selection circuit 408, which in accordance with the control signal SCα selectively takes out the Iα output by the RAMs 406 and 407 as Iα(λ); and an output terminal 409 for this Iα(λ). Here, if the word length of Iα is set to 8 bits, the probabilities Iα_t(0) to Iα_t(3) output by the Iα calculation circuit 403 amount to 32 bits, and the RAMs 406 and 407 record these 32 bits as one word.
In the Iα calculation/storage circuit 44 shown in Fig. 10, initialization is performed at the time point immediately before the Iγ calculation/storage circuit 43 begins to output the probability Iγ(α) (see B in Fig. 9 and A in Fig. 13). For the initialization, the selector 404 selects the initial value Iα0 and sets it in the register 405 (B in Fig. 13). From the following clock cycle onward, the Iα calculation circuit 403 successively computes the Iα_t of the next time from the probability Iγ(α) supplied by the Iγ calculation/storage circuit 43 and the probability Iα_{t-1} output by the register 405, and stores this Iα_t in the register 405 again (B in Fig. 13). In Fig. 13, the probabilities Iα(0) to Iα(3) corresponding to the respective times t = 1, 2, 3, ... are denoted α1, α2, α3, ....
Fig. 13 also illustrates the management of the RAMs 406 and 407 of Fig. 10. The two RAMs 406 and 407 each operate as a memory of 32 bits × 4 words so that each can store the output data of the register 405, namely the probabilities Iα(0) to Iα(3), for one truncation length D, and the probabilities Iα(0) to Iα(3) are stored in them cyclically in succession (C in Fig. 13).
The following operations are then performed alternately: immediately after the probabilities Iα(0) to Iα(3) of one truncation length D have been written into RAM 406, they are read out in the reverse of the writing order; or immediately after the probabilities Iα(0) to Iα(3) of one truncation length D have been written into RAM 407, they are read out in the reverse of the writing order. The probabilities Iα(0) to Iα(3) thus read out are taken out by the selection circuit 408 as the probability Iα(λ) to be supplied to the soft output calculation circuit 46 (D in Fig. 13).
Fig. 14 shows the configuration of the Iβ calculation/storage circuit 45. This circuit has: input terminals 501, 502 and 503 for the probabilities Iγ(β1) and Iγ(β2) and the control signal SCβ, respectively; an Iβ calculation circuit 504, which computes Iβ according to formula (17) from the probability Iγ(β1) supplied through input terminal 501 and the Iβ set in a register 506; and a selector 505, which sets either the Iβ output by the Iβ calculation circuit 504 or an initial value Iβa into the register 506.
The selector 505, in accordance with the control signal SCβ, selects the initial value Iβa only at the initialization time points and selects the output data of the Iβ calculation circuit 504 at all other times. Initialization is performed at the time point immediately before the Iγ calculation/storage circuit 43 begins to output the probability Iγ(β1), and thereafter every 2D cycles (D being the truncation length). As to the initial value Iβa, the same value, for example 0 or log 1/M (log 1/4 in this example), is usually given to all states; when a terminated code is decoded, however, log 1 (= 0) is given as the value for the final state and log 0 (= -∞) as the value for the other states.
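This initialization rule can be sketched as follows; the helper below is hypothetical (name and signature are illustrative), with final_state given only when a terminated code is decoded:

```python
def beta_init(n_states, final_state=None):
    """Initial value Iβa: all-equal when the truncation state is unknown;
    log 1 (= 0) for the known final state of a terminated code,
    log 0 (= -inf) for the remaining states."""
    if final_state is None:
        return [0.0] * n_states
    return [0.0 if m == final_state else float("-inf")
            for m in range(n_states)]
```

For the M = 4 trellis of this example, beta_init(4) gives [0.0, 0.0, 0.0, 0.0], and beta_init(4, 0) for a code terminated in state 0 gives [0.0, -inf, -inf, -inf].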
Fig. 15 shows the configuration of the Iβ calculation circuit 504. This circuit is made up of four addition-comparison-selection circuits 511a to 511d. The probabilities Iγ_t[00] to Iγ_t[11] and Iβ_t(0) to Iβ_t(3) are distributed to the addition-comparison-selection circuits 511a to 511d in accordance with the transitions on the trellis. Each of the circuits 511a to 511d selects the larger of the two sums of Iβ and Iγ, thereby obtaining the Iβ of each state at the preceding time.
The addition-comparison-selection circuit 511a is supplied with Iγ_t[00] and Iγ_t[11] together with Iβ_t(0) and Iβ_t(1), and computes the probability Iβ_{t-1}(0) of state 0. The addition-comparison-selection circuit 511b is supplied with Iγ_t[10] and Iγ_t[01] together with Iβ_t(2) and Iβ_t(3), and computes the probability Iβ_{t-1}(1) of state 1.
The addition-comparison-selection circuit 511c is supplied with Iγ_t[11] and Iγ_t[00] together with Iβ_t(0) and Iβ_t(1), and computes the probability Iβ_{t-1}(2) of state 2. The addition-comparison-selection circuit 511d is supplied with Iγ_t[01] and Iγ_t[10] together with Iβ_t(2) and Iβ_t(3), and computes the probability Iβ_{t-1}(3) of state 3.
Each of the addition-comparison-selection circuits 511a to 511d has the common configuration shown in Fig. 16; the circuit 511a is described here as an example. An adder 521 adds Iγ_t[00] (the probability of the dotted-line branch into state 0 in Fig. 5) and Iβ_t(0) (the probability of state 0 traced backward on the time axis from the end of the truncation length). An adder 522 adds Iγ_t[11] (the probability of the solid-line branch into state 0 in Fig. 5) and Iβ_t(1) (the probability of state 1 traced backward on the time axis from the end of the truncation length). A comparison circuit 523 then compares the sums of the adders 521 and 522, and a selector 524 takes out the larger of the two sums as the probability Iβ_{t-1}(0). Although a description is omitted, the addition-comparison-selection circuits 511b to 511d operate in the same way.
Returning to Fig. 14, the Iβ calculation/storage circuit 45 further has: an Iβ calculation circuit 507, which computes Iβ according to formula (17) from the probability Iγ(β2) supplied through input terminal 502 and the Iβ set in a register 509; and a selector 508, which sets either the Iβ output by the Iβ calculation circuit 507 or an initial value Iβb into the register 509.
The selector 508, in accordance with the control signal SCβ, selects the initial value Iβb only at the initialization time points and selects the output data of the Iβ calculation circuit 507 at all other times. Initialization is performed at the time point immediately before the Iγ calculation/storage circuit 43 begins to output the probability Iγ(β2), and thereafter every 2D cycles (D being the truncation length). The initial value Iβb is set in the same way as the initial value Iβa described above. Although not described in detail, the Iβ calculation circuit 507 has the same configuration as the Iβ calculation circuit 504 described above (see Figs. 15 and 16).
The Iβ calculation/storage circuit 45 further has: a selection circuit 510, which in accordance with the control signal SCβ selectively takes out the Iβ(0) to Iβ(3) output by the registers 506 and 509 as Iβ(λ); and an output terminal 512 for this Iβ(λ). Here, if the word length of Iβ is 8 bits, the probabilities Iβ_{t-1}(0) to Iβ_{t-1}(3) output by each of the Iβ calculation circuits 504 and 507 amount to 32 bits.
In the Iβ calculation/storage circuit 45 shown in Fig. 14, the register 506 is initialized at the time point immediately before the Iγ calculation/storage circuit 43 begins to output the probability Iγ(β1) (see C in Fig. 9 and A in Fig. 17), and thereafter every 2D cycles. For this initialization, the selector 505 selects the initial value Iβa and sets it in the register 506. From the following clock cycle onward, the Iβ calculation circuit 504 successively computes the Iβ_{t-1} of the preceding time from the probability Iγ(β1) supplied by the Iγ calculation/storage circuit 43 and the Iβ_t output by the register 506, and stores this Iβ_{t-1} in the register 506 again, where it becomes the output at the next clock time point (C in Fig. 17). In Fig. 17, the probabilities Iβ(0) to Iβ(3) corresponding to the respective times t = 1, 2, 3, ... are denoted β1, β2, β3, ....
Likewise, the register 509 is initialized at the time point immediately before the Iγ calculation/storage circuit 43 begins to output the probability Iγ(β2) (see D in Fig. 9 and B in Fig. 17), and thereafter every 2D cycles. For this initialization, the selector 508 selects the initial value Iβb and sets it in the register 509. From the following clock cycle onward, the Iβ calculation circuit 507 successively computes the Iβ_{t-1} of the preceding time from the probability Iγ(β2) supplied by the Iγ calculation/storage circuit 43 and the Iβ_t output by the register 509, and stores this Iβ_{t-1} in the register 509 again, where it becomes the output at the next clock time point (D in Fig. 17). The selection circuit 510 then selectively takes out the outputs of the registers 506 and 509 as shown at E in Fig. 17, obtaining the probability Iβ(λ) to be supplied to the soft output calculation circuit 46.
Fig. 18 shows the configuration of the soft output calculation circuit 46. This circuit has: input terminals 601, 602 and 603 for the probabilities Iα(λ), Iβ(λ) and Iγ(λ), respectively; an Iλ1 calculation circuit 604 and an Iλ0 calculation circuit 605, which use these probabilities Iα(λ), Iβ(λ) and Iγ(λ) to compute the first and second terms, respectively, of the right-hand side of formula (19); a subtracter 606, which subtracts the output Iλ0 of the calculation circuit 605 from the output Iλ1 of the calculation circuit 604 to obtain the Iλ_t of formula (19); a LIFO (last-in first-out) memory 607, which rearranges the Iλ_t output by the subtracter 606 into time-series order for output; and an output terminal 608 for this soft output Iλ_t.
Fig. 19 shows the configuration of the Iλ1 calculation circuit 604. This circuit is made up of four adders 604a to 604d and a maximum value selection circuit 604e. The signals are distributed to the adders 604a to 604d in accordance with the state transitions on the trellis as follows: adder 604a is supplied with Iα_{t-1}(0), Iβ_t(1) and Iγ_t[11]; adder 604b with Iα_{t-1}(1), Iβ_t(2) and Iγ_t[10]; adder 604c with Iα_{t-1}(2), Iβ_t(0) and Iγ_t[11]; and adder 604d with Iα_{t-1}(3), Iβ_t(3) and Iγ_t[10].
The maximum value selection circuit 604e then selects the largest of the sums of the adders 604a to 604d, and this maximum is output as Iλ1.
Likewise, Fig. 20 shows the configuration of the Iλ0 calculation circuit 605.
This Iλ0 calculation circuit 605 is made up of four adders 605a to 605d and a maximum value selection circuit 605e. The signals are distributed to the adders 605a to 605d in accordance with the state transitions on the trellis as follows: adder 605a is supplied with Iα_{t-1}(0), Iβ_t(0) and Iγ_t[00]; adder 605b with Iα_{t-1}(1), Iβ_t(3) and Iγ_t[01]; adder 605c with Iα_{t-1}(2), Iβ_t(1) and Iγ_t[00]; and adder 605d with Iα_{t-1}(3), Iβ_t(2) and Iγ_t[01].
The maximum value selection circuit 605e then selects the largest of the sums of the adders 605a to 605d, and this maximum is output as Iλ0.
In the soft output calculation circuit 46 shown in Fig. 18, the probabilities Iα(λ), Iβ(λ) and Iγ(λ) are supplied to the input terminals 601, 602 and 603, respectively (A, B, C in Fig. 21). In each clock cycle, the Iλ1 calculation circuit 604 computes Iλ1 for the first term of the right-hand side of formula (19), the Iλ0 calculation circuit 605 computes Iλ0 for the second term, and the subtracter 606 outputs the Iλ_t of each time t (D in Fig. 21). The Iλ_t output successively by the subtracter 606 are supplied to the LIFO memory 607, which rearranges them into time-series order and outputs the rearranged soft output Iλ_t. In Fig. 21, the soft outputs Iλ_t corresponding to the respective times t = 1, 2, 3, ... are denoted λ1, λ2, λ3, ....
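The soft-output stage of Figs. 18 to 20 can be sketched as the difference of two Max-Log terms of formula (19); this is an illustrative model (function and argument names are hypothetical), with gamma indexed by the branch output pair and the branch assignments following the adders 604a to 604d and 605a to 605d described above:

```python
def soft_output(alpha_prev, beta, gamma):
    """Iλ_t = max over input-1 branches minus max over input-0 branches."""
    l1 = max(alpha_prev[0] + beta[1] + gamma["11"],   # adder 604a
             alpha_prev[1] + beta[2] + gamma["10"],   # adder 604b
             alpha_prev[2] + beta[0] + gamma["11"],   # adder 604c
             alpha_prev[3] + beta[3] + gamma["10"])   # adder 604d -> circuit 604e
    l0 = max(alpha_prev[0] + beta[0] + gamma["00"],   # adder 605a
             alpha_prev[1] + beta[3] + gamma["01"],   # adder 605b
             alpha_prev[2] + beta[1] + gamma["00"],   # adder 605c
             alpha_prev[3] + beta[2] + gamma["01"])   # adder 605d -> circuit 605e
    return l1 - l0                                    # subtracter 606
```

A positive result favors i_t = 1 and a negative result favors i_t = 0, with the magnitude serving as the reliability of the decision.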
Next, the management performed by the controller 41 in the decoder 4 described above will be explained in further detail with reference to the drawings. Figs. 22A, 22B, 22C and 22D illustrate the storage management by showing, in time-series order, the stored contents and outputs of the register 405, the RAMs 406 and 407, and the registers 506 and 509. For the RAMs 304a to 304d and the RAMs 406 and 407, the mark "↓" indicates a write to the indicated address, and the mark "↑" indicates a read from the indicated address.
In Figs. 22A to 22D, at t = 13 for example, the following operations (1) to (6) are performed simultaneously.
(1) Iγ13 is stored in RAM 304d.
(2) Iα5 is obtained from the Iα4 output by the register 405 and the Iγ5 output by RAM 304b, and is stored in the register 405.
(3) The Iα4 obtained at the preceding time and output by the register 405 is stored in RAM 407.
(4) Iβ3 is obtained from the Iβ4 output by the register 506 and the Iγ4 output by RAM 304a, and is stored in the register 506.
(5) Iβ11 is obtained from the Iβ12 output by the register 509 and the Iγ12 output by RAM 304c, and is stored in the register 509.
(6) Iλ4 is obtained from the Iβ4 output by the register 506, the Iγ4 output by RAM 304a and the Iα3 output by RAM 406.
The same operations are performed at the other times as well. By repeating them, Iλ_t is obtained for each time t. In this method, however, the Iλ_t are obtained in the reverse of the original time-series order, so the LIFO memory 607 is used as described above to rearrange the soft outputs Iλ_t into the original time-series order before output. Fig. 23 shows a timing chart of the storage management for t = 13 to t = 20 according to the above operations.
As described above, in the present embodiment the computation of Iβ within the truncation length (the computation performed by the Iβ calculation circuit 507 in Fig. 14) and the computation of Iβ beyond the truncation length traced backward (the computation performed by the Iβ calculation circuit 504 in Fig. 14) are carried out in parallel, so that the Iβ computed per clock amounts to (number of states × 2). Compared with the existing SW-Max-Log-BCJR algorithm, the amount of computation can therefore be reduced significantly. Moreover, each memory is accessed only once per clock. According to the present embodiment, convolutional codes can thus be decoded at high speed.
The storage management in the above embodiment does not depend on the methods of computing Iα, Iβ and Iγ; for example, methods other than looking up a ROM data table may be adopted for computing Iγ.
Furthermore, the SW-Log-BCJR algorithm can be implemented by applying the correction shown in formula (20) to the calculation circuits shown in Figs. 11, 15, 19 and 20. The case of obtaining the soft output Iλ_t according to the SW-Log-BCJR algorithm is described next.
The case of applying the correction of formula (20) to the circuit of Fig. 11 is described as an example.
To apply the correction of formula (20), each of the addition-comparison-selection circuits 411a to 411d must be changed from the configuration shown in Fig. 12 to the configuration shown in Fig. 24. In Fig. 24, parts corresponding to those of Fig. 12 are given the same reference numerals.
The addition-comparison-selection circuit 411a is now described. An adder 421 adds Iγ_t[00] and Iα_{t-1}(0), and an adder 422 adds Iγ_t[11] and Iα_{t-1}(2). A subtracter 426 then subtracts the sum y of the adder 422 from the sum x of the adder 421, and the difference (x - y) is supplied to a sign decision circuit 427. The sign decision circuit 427 outputs a "1" signal when the difference (x - y) is 0 or more, and a "0" signal when it is less than 0.
The output signal of the sign decision circuit 427 is supplied to the selector 424 as a selection signal SEL. The selector 424 takes out the sum x of the adder 421 when the selection signal SEL is "1", and the sum y of the adder 422 when the selection signal SEL is "0". The selector 424 thus selectively takes out the larger of the sums x and y of the adders 421 and 422, which corresponds to the computation of the first term of the right-hand side of formula (20).
Further, the difference (x - y) of the subtracter 426 is supplied to an absolute value calculation circuit 428, which computes the absolute value |x - y|. This absolute value |x - y| is then supplied as a read address signal to a ROM 429 constituting a data table, from which the second term log(1 + e^-|x-y|) of the right-hand side of formula (20) is obtained. An adder 430 then adds the output signal max(x, y) of the selector 424 and the output signal log(1 + e^-|x-y|) of the ROM 429, and this sum is output as the probability Iα_t(0). This probability Iα_t(0) is based on the SW-Log-BCJR algorithm. Although a description is omitted, the same applies to the addition-comparison-selection circuits 411b to 411d.
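The corrected selection of formula (20) is the log-sum-exp (often called max*) operation; a minimal sketch, with the ROM 429 table replaced by the exact expression it approximates (the function name is illustrative):

```python
import math

def max_star(x, y):
    """max(x, y) plus the correction term log(1 + e^-|x-y|);
    exactly equal to log(e^x + e^y)."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))
```

With this operation substituted for the plain maximum, the circuit of Fig. 24 computes Iα_t(0) = max*(x, y), the Log-BCJR counterpart of the Max-Log selection of Fig. 12.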
The case of applying the correction of formula (20) to the circuit of Fig. 11 has been described above, but the correction of formula (20) can likewise be applied to the circuits shown in Figs. 15, 19 and 20, so that the SW-Log-BCJR algorithm can be implemented.
In the above embodiment, the case of constraint length = 3 and truncation length = 4 has been taken as an example, but the constraint length and the truncation length may be arbitrary values other than these. Moreover, even with the same memory read/write contents, multiport RAMs may be used instead of single-port RAMs; for example, the RAMs 304a to 304d may be replaced by two dual-port RAMs, or the RAMs 406 and 407 by one dual-port RAM, and various other modifications of the RAM configuration are possible.
Various modifications are also possible in storing Iβ without the RAM-based storage management used for Iα. Furthermore, although the above examples employ the SW-Max-Log-BCJR algorithm and the SW-Log-BCJR algorithm, various other applications, such as other soft output decoding algorithms, are also conceivable.
In the above embodiment, the probabilities Iγ[00] to Iγ[11] are read out of the RAMs 304a to 304d after a delay corresponding to 2D, twice the truncation length, for soft output decoding; however, the delay time of the probability information is not limited to twice the truncation length D, and may be any time equal to or greater than the truncation length D.
For example, as shown in Fig. 25, a RAM 304e may be added to the Iγ calculation/storage circuit 43 shown in Fig. 8 so that the probability information is delayed by a time corresponding to 3D, three times the truncation length D.
The Iγ calculation/storage circuit 43 of the configuration shown in Fig. 25 successively stores in the RAMs 304a to 304e the probabilities Iγ[00], Iγ[01], Iγ[10], Iγ[11], obtained by the adders 303a to 303d, corresponding to the branches with outputs [00], [01], [10], [11] on the trellis (A in Fig. 26), and outputs the held data Iγ[00] to Iγ[11] after a delay corresponding to 3D, three times the truncation length D. The selection circuit 308 then outputs the probabilities Iγ[00] to Iγ[11] delayed in this way to the Iα calculation/storage circuit 44 as the probability Iγ(α) (B, C in Fig. 26).
The RAMs 304a to 304e and the selection circuit 308 partition the probabilities Iγ[00] to Iγ[11] into units of the truncation length D along the time axis, setting a reference time point at the end of each truncation length D. As soon as the probabilities Iγ[00] to Iγ[11] corresponding to twice the truncation length D preceding each reference time point have been stored, the RAMs 304a to 304e and the selection circuit 308 output these probabilities Iγ[00] to Iγ[11] in the reverse of the input order. The RAMs 304a to 304e and the selection circuit 308 thus output the probabilities Iγ[00] to Iγ[11] as the first probability Iγ(β1) for the Iβ calculation/storage circuit 45 (D, E in Fig. 26), and in the same order output the probabilities Iγ[00] to Iγ[11], delayed by the truncation length D relative to this first probability Iγ(β1), as the second probability Iγ(β2) for the Iβ calculation/storage circuit 45 (F, G in Fig. 26).
Meanwhile, for the probability Iγ(λ) supplied to the soft-output calculation circuit 46, the RAMs 304a to 304e and the selection circuit 308 output the probabilities Iγ[00] to Iγ[11], held with a prescribed delay, in the same order as the output to the Iα calculation memory circuit 44. The Iγ calculation memory circuit 43 thus outputs the first probability Iγ in the orders corresponding to the processing of the Iα calculation memory circuit 44, the Iβ calculation memory circuit 45 and the soft-output calculation circuit 46.
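The reordering performed by the RAMs 304a to 304e and the selection circuit 308 can be illustrated in outline as follows. This is a simplified sketch, not the circuit itself: a Python list stands in for the RAM banks, `D` and the dummy metric values are hypothetical, and the two backward streams carry alternating 2D windows whose contents overlap by one truncation length, as in the description above.

```python
D = 4  # truncation length (hypothetical value)

def gamma_streams(gamma, D):
    """Split a time series of branch probabilities into the three
    orderings used downstream: a forward stream (for the I-alpha
    recursion) and two time-reversed streams whose 2D windows
    alternate between reference points D apart, so that consecutive
    windows overlap by one truncation length D."""
    forward = list(gamma)  # conceptually delayed by 3D in the circuit
    beta1, beta2 = [], []
    # Reference points every D; alternate windows go to the two streams.
    for i, end in enumerate(range(2 * D, len(gamma) + 1, D)):
        window = list(reversed(gamma[end - 2 * D:end]))
        (beta1 if i % 2 == 0 else beta2).extend(window)
    return forward, beta1, beta2

gamma = list(range(16))  # four truncation lengths of dummy values
forward, beta1, beta2 = gamma_streams(gamma, D)
```

Each backward window covers 2D values so that the recursion has a truncation length of run-in before the D values that are actually kept, which is the point of the two-stream arrangement.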
As shown in Figure 27, the Iβ calculation memory circuit 45 supplies the probabilities βt-1(0) to βt-1(3) calculated by the Iβ calculation circuits 504 and 507 to the selection circuit 510 through the RAMs 513 and 514.
In the Iβ calculation memory circuit 45 shown in Figure 27, the Iβ calculation circuits 504 and 507 take as reference the probabilities βt(0) to βt(3) fed back one clock cycle earlier through the registers 506 and 509, and, from the first probability Iγ(β1) and the second probability Iγ(β2) output by the Iγ calculation memory circuit 43, calculate the probabilities βt-1(0) to βt-1(3) of reaching each state for the received value Yt, going backward in time.
Specifically, in the Iβ calculation memory circuit 45, the selectors 505 and 508 select, according to the control signal SCβ, either the probabilities βt(0) to βt(3) output one clock cycle earlier by the Iβ calculation circuits 504 and 507 or the initial values Iβa and Iβb, and output the selection to the registers 506 and 509.
Here, as described above, the initial values Iβa and Iβb normally assign the same value to all states, for example 0 or log 1/M (log 1/4 in this example). When a terminated code is decoded, however, the final state is assigned the value log 1 (= 0) and every other state is assigned the value log 0 (= -∞).
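As a hedged illustration of this initialization, the two cases can be written as follows for M = 4 states; the function name and the use of Python's float infinity for log 0 are conveniences of the sketch, not part of the circuit.

```python
import math

M = 4  # number of trellis states (constraint length 3)

def beta_init(terminated, final_state=0):
    """Initial values I-beta-a / I-beta-b for the backward recursion.
    Unterminated code: the same value for every state, e.g. log(1/M).
    Terminated code: log 1 (= 0) for the known final state and
    log 0 (= -infinity) for every other state."""
    if not terminated:
        return [math.log(1.0 / M)] * M
    return [0.0 if s == final_state else -math.inf for s in range(M)]

init_open = beta_init(terminated=False)
init_term = beta_init(terminated=True, final_state=0)
```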
Then, as shown in Figure 28, in correspondence with the first probability Iγ(β1) and the second probability Iγ(β2), each of which is a time series that runs backward in units of the truncation length D and repeats with a period of 2D, the selectors 505 and 508 select the initial values Iβa and Iβb and output them to the registers 506 and 509 at the timing one clock cycle before each switch of this 2D period (A, B, D, E in Figure 28). At all other timings, they select and output the probabilities βt(0) to βt(3) output one clock cycle earlier by the Iβ calculation circuits 504 and 507.
The RAMs 513 and 514 are each formed of memory cells with a capacity sufficient to store the probabilities βt-1(0) to βt-1(3) for one truncation length. Operating with a period of 2D, twice the truncation length D, and offset from each other by one truncation length, they successively take in the probabilities βt-1(0) to βt-1(3) obtained in the latter half truncation length of each backward run (B to D in Figure 28). The RAMs 513 and 514 thus selectively capture, out of the two sequences of probabilities βt-1(0) to βt-1(3) calculated with the initial values Iβa and Iβb as references, the portions whose reliability has become sufficient. The RAMs 513 and 514 then output the probabilities βt-1(0) to βt-1(3) stored in this way in time-series order.
In this way, the Iβ calculation memory circuit 45 processes the first probability Iγ in a plurality of sequences, in parallel and simultaneously, over at least the truncation-length range corresponding to each reference time point set for the first probability Iγ; it calculates the third probability Iβ in a plurality of sequences, and selects and outputs one truncation length of the probability Iβ from the probabilities Iβ of these sequences. In the present embodiment, the reference time points are each set at the end point of the following truncation length.
The selection circuit 510 then outputs the probabilities βt-1(0) to βt-1(3), which are output alternately from the RAMs 513 and 514 in units of the truncation length D, to the soft-output calculation circuit 46 in this order.
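The cooperation between the two backward sequences and the two RAMs can be sketched as a schedule rather than as hardware. In this hypothetical model (names and the value of `D` are the sketch's own), each backward run starts at a reference point with a provisional initial value, covers 2D trellis steps, and only the D values computed last (farthest from the initial value, hence most reliable) are kept, alternating runs filling the two RAMs in turn.

```python
D = 4  # truncation length (hypothetical)

def reliable_blocks(num_steps, D):
    """For each reference point t0 = 2D, 3D, ..., a backward run starts
    at t0 with a provisional initial value and recurses down to t0 - 2D.
    Only the values for times t0 - 2D .. t0 - D - 1 (computed last,
    hence most reliable) are stored; runs alternate between two RAMs,
    standing in for RAM 513 and RAM 514."""
    schedule = []
    ram = 0
    for t0 in range(2 * D, num_steps + 1, D):
        kept = list(range(t0 - 2 * D, t0 - D))  # time indices kept
        schedule.append((ram, kept))
        ram ^= 1  # ping-pong between the two RAMs
    return schedule

schedule = reliable_blocks(16, D)
```

Reading the kept blocks back out in forward time order restores the time-series ordering that the soft-output stage expects.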
In the configuration described above, the input it is convolutionally encoded by the convolutional encoder 2, with constraint length 3 and hence a number of states m of 4, into the output sequence Xt, and this output sequence Xt is input to the decoder 4 through the memoryless channel 3. In the decoder 4, the input signal undergoes analog-to-digital conversion to detect the received value Yt, and this received value Yt is input to the Iγ calculation memory circuit 43.
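A minimal sketch of an encoder of this kind follows. The generator polynomials (7, 5) in octal are an assumption for illustration only — this description fixes the constraint length at 3 (two memory bits, four states) but does not specify the generators here.

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3 (two memory
    bits, hence 2**2 = 4 states), with assumed generators
    g0 = 111, g1 = 101 (octal 7, 5)."""
    s1 = s2 = 0  # shift-register contents
    out = []
    for b in bits:
        # Each input bit produces two coded bits: (g0 output, g1 output).
        out.append((b ^ s1 ^ s2, b ^ s2))
        s1, s2 = b, s1
    return out

codeword = conv_encode([1, 0, 1, 1])
```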
In the Iγ calculation memory circuit 43, for the received value Yt, the probability corresponding to the received value Yt in each state (the first term on the right-hand side of equation (16)) is obtained by referring to the data tables in the ROMs 302a to 302d provided for the respective states, and the adders 303a to 303d successively add the initial values log Pr{it = 0} and log Pr{it = 1}; the first probability Iγ corresponding to each state m is thereby calculated in time-series order (A in Figure 26).
The first probability Iγ calculated in this way is successively stored in the RAMs 304a to 304e, which are formed of memory cells partitioned in units of the truncation length D, and, after a delay of 3D, three times the truncation length D, is output in time-series order to the Iα calculation memory circuit 44 (C in Figure 26). The first probability Iγ(α) is thus output to the Iα calculation memory circuit 44 in the order, and at the timing, suited to the processing in the Iα calculation memory circuit 44.
The first probability Iγ is also partitioned in units of the truncation length D, with a reference time point set at each point where a further truncation length has elapsed, and the portion stored in the RAMs 304a to 304e in each truncation length before the corresponding reference time point is output to the Iβ calculation memory circuit 45 in reverse time order for each reference time point (D to G in Figure 26). Because each reference time point is set at the end point of the following truncation length, the output of the portion preceding the next reference time point begins before the output of the portion preceding the current reference time point has finished; the first probability Iγ is therefore output to the Iβ calculation memory circuit 45 in two sequences whose contents overlap by one truncation length and whose timings are shifted, each in reverse time order (D to G in Figure 26). The two sequences of probabilities Iγ(β1) and Iγ(β2) are thus output to the Iβ calculation memory circuit 45 in the order, and at the timing, suited to the processing in the Iβ calculation memory circuit 45.
The first probability Iγ is likewise delayed by a prescribed time, in the same way as the output to the Iα calculation memory circuit 44, and is output in time-series order to the soft-output calculation circuit 46, i.e. in the order and at the timing suited to the processing in the soft-output calculation circuit 46.
In the Iα calculation memory circuit 44, the second probability Iα(λ) of reaching each state in the time-axis direction is then calculated for each received value from the first probability Iγ, and is output to the soft-output calculation circuit 46 in the order and at the timing suited to the processing in the soft-output calculation circuit 46.
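In the log domain used here, this forward recursion reduces, per received value, to an add-compare-select over the branches entering each state. A minimal sketch under the Max-Log approximation follows; the 4-state predecessor table and the numeric branch values are hypothetical, chosen only to make the step concrete.

```python
NEG_INF = float("-inf")

# Hypothetical 4-state trellis: for each state m, the list of
# (previous_state, branch_label) pairs for branches entering m.
PREV = {
    0: [(0, 0), (1, 1)],
    1: [(2, 0), (3, 1)],
    2: [(0, 1), (1, 0)],
    3: [(2, 1), (3, 0)],
}

def alpha_step(alpha, gamma):
    """One Max-Log forward step: I-alpha_t(m) is the maximum over
    entering branches of I-alpha_{t-1}(m') + I-gamma(branch)."""
    return [
        max(alpha[mp] + gamma[lbl] for mp, lbl in PREV[m])
        for m in range(4)
    ]

alpha0 = [0.0, NEG_INF, NEG_INF, NEG_INF]  # known start state 0
alpha1 = alpha_step(alpha0, {0: -0.1, 1: -2.0})
```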
In the Iβ calculation memory circuit 45, on the other hand, the two sequences of probabilities Iγ(β1) and Iγ(β2), input in reverse time order, are each added in the Iβ calculation circuits 504 and 507 to the probabilities βt of the corresponding states one clock cycle later, and the larger result is selected, thereby calculating the probability of reaching each state in the reverse direction of the time axis. The probabilities calculated in this way are fed back successively to the Iβ calculation circuits 504 and 507 through the selectors 505 and 508, which selectively substitute the initial values Iβa and Iβb, so that the probabilities are calculated from the reference time points in the reverse direction of the time axis, in parallel and simultaneously (A, B, D, E in Figure 28).
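The operation just described — add the branch probability to the next-time βt of each successor state and keep the larger result — is the backward counterpart of the forward step. A hedged sketch with the same hypothetical 4-state trellis, here written as a successor table:

```python
NEG_INF = float("-inf")

# Hypothetical 4-state trellis: for each state m, the list of
# (next_state, branch_label) pairs for branches leaving m.
NEXT = {
    0: [(0, 0), (2, 1)],
    1: [(0, 1), (2, 0)],
    2: [(1, 0), (3, 1)],
    3: [(1, 1), (3, 0)],
}

def beta_step(beta_t, gamma):
    """One Max-Log backward step: I-beta_{t-1}(m) is the maximum over
    leaving branches of I-beta_t(m') + I-gamma(branch)."""
    return [
        max(beta_t[mn] + gamma[lbl] for mn, lbl in NEXT[m])
        for m in range(4)
    ]

beta_t = [0.0, 0.0, 0.0, 0.0]  # flat provisional initial value
beta_prev = beta_step(beta_t, {0: -0.5, 1: -1.5})
```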
Of the two sequences of probabilities calculated in this way, the probabilities of the truncation length that is distant from the reference point, and therefore of sufficient reliability, are selectively stored and output. When these probabilities are selected and output by the RAMs 513 and 514, the probability Iβ(λ) is rearranged into the time order opposite to that at the time of storage and output to the soft-output calculation circuit 46. The third probability Iβ(λ) is thus also output in time-series order, i.e. in the order corresponding to the processing in the soft-output calculation circuit 46.
In the soft-output calculation circuit 46, as shown in Figure 7, the input probabilities Iα(λ), Iβ(λ) and Iγ(λ) can consequently be processed without being rearranged, and the calculated soft output Iλ can be handled without being rearranged, so the soft output can be obtained with a simple configuration.
Furthermore, when one symbol is decoded, the third probability Iβ can be obtained by calculations amounting to 2 × (number of states), in place of calculations amounting to (truncation length) × (number of states); the amount of processing for this part is thereby reduced, and the overall configuration is simplified.
Specifically, the probabilities Iα(λ), Iβ(λ) and Iγ(λ) input to the soft-output calculation circuit 46 in this way are added for each corresponding state; the maximum values are detected for the cases where the input bit is "1" and where it is "0" respectively, and the subtracter 606 subtracts one maximum from the other in the time-series order corresponding to the received value Yt, thereby obtaining the soft output Iλt.
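This combination step can be sketched as follows: for each trellis branch, add the matching Iα, Iγ and Iβ terms; take the maximum over the branches whose input bit is 1 and over those whose input bit is 0; and subtract, as the subtracter 606 does. The branch table and all numeric values below are hypothetical, consistent with the 4-state trellis assumed in the earlier sketches.

```python
# Hypothetical 4-state trellis branches: (from_state, to_state, input_bit).
BRANCHES = [
    (0, 0, 0), (1, 0, 1), (2, 1, 0), (3, 1, 1),
    (0, 2, 1), (1, 2, 0), (2, 3, 1), (3, 3, 0),
]

def soft_output(alpha, beta, gamma):
    """Max-Log soft output for one symbol:
    I-lambda = max over bit-1 branches of (alpha + gamma + beta)
             - max over bit-0 branches of (alpha + gamma + beta)."""
    best = {0: float("-inf"), 1: float("-inf")}
    for m, mn, bit in BRANCHES:
        metric = alpha[m] + gamma[bit] + beta[mn]
        if metric > best[bit]:
            best[bit] = metric
    return best[1] - best[0]

lam = soft_output(
    alpha=[0.0, -1.0, -2.0, -3.0],
    beta=[0.0, -0.5, -1.0, -1.5],
    gamma={0: -0.2, 1: -0.8},
)
```

The sign of the result gives the hard decision and its magnitude the reliability, which is what makes the output "soft".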
With this configuration, reference time points are set for the first probability Iγ, which is partitioned in units of the truncation length D, and one truncation length of the probability Iγ per reference time point is output and processed in a plurality of sequences in reverse time order. By outputting the first, second and third probabilities Iγ, Iα and Iβ in the orders suited to their respective subsequent processing, the soft output of each symbol can be obtained by calculations amounting to 2 × (number of states) in place of (truncation length) × (number of states). Moreover, since the soft-output calculation circuit calculates the soft output Iλ without rearranging the input probabilities Iγ, Iα and Iβ or the calculated soft output Iλ, the soft-output decoder can be built with a simple configuration.
In the embodiment described above, the reference time points are set in units of the truncation length, but the invention is not limited to this; reference time points may be set at any positions as required. When reference time points are set such that the length preceding a reference time point is longer than the corresponding truncation length of the above embodiment, the corresponding portion of the first probability Iγ is output in a larger number of sequences, and the Iβ calculation memory circuit must process the number of sequences of the probability Iγ output in this way.
In the embodiment described above, the soft output is calculated with a truncation length of 4, but the invention is not limited to this; the truncation length may be set to any of various lengths as required.
In the embodiment described above, convolutional coding is performed with a constraint length of 3, but the invention is not limited to this and is widely applicable to cases where convolutional coding is performed with various constraint lengths.
In the embodiment described above, the soft output is calculated by the SW-Max-Log-BCJR algorithm, but the invention is not limited to this and is widely applicable to cases where the soft output is calculated by various other soft-output decoding algorithms such as the SW-Log-BCJR algorithm.
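The difference between these two algorithm families lies only in how the logarithm of a sum of probabilities is computed. Under the Max-Log approximation the sum is replaced by a maximum; the Log-BCJR family adds a one-variable correction term (the Jacobian logarithm), which is exact: log(e^a + e^b) = max(a, b) + log(1 + e^-|a-b|). A short sketch of both:

```python
import math

def max_log(a, b):
    """Max-Log approximation of log(e^a + e^b)."""
    return max(a, b)

def jacobian_log(a, b):
    """Exact log-sum via the Jacobian logarithm: the maximum plus a
    correction depending only on |a - b| (a one-variable function,
    so in hardware it can be held in a small lookup table)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

exact = jacobian_log(1.0, 0.0)
approx = max_log(1.0, 0.0)
```

The correction term is always positive and at most log 2, which is why Max-Log decoding loses only a modest amount of accuracy in exchange for dropping the table lookup.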
In summary, according to the present invention, probability information of at least the truncation length D is stored, and the updating of the probability information within the truncation length D and the calculation of the soft output outside it are performed in parallel, so the amount of calculation per clock cycle and the number of accesses to each memory can be reduced significantly, and the convolutional code can be decoded at high speed. Furthermore, a soft-output decoder and a soft-output decoding method can be obtained in which the soft output is calculated by computing the probabilities of reaching each state in the reverse direction of the time axis in a plurality of sequences, taking a range of at least the truncation length as a unit, and the probabilities calculated at this time are output in the order suited to the subsequent calculations, so that the soft-output decoder can be built with a simple configuration.

Claims (13)

1. A soft-output decoder for convolutional codes, comprising:
probability calculation means for obtaining probability information for each transition state of a convolutional code;
probability storage means for storing the probability information obtained by the probability calculation means in a recording medium; and
soft-output calculation means for obtaining a soft output using the probability information stored in the recording medium,
characterized in that
the probability storage means stores at least a truncation length of the probability information in the recording medium, and
the updating by the probability storage means of the probability information within the truncation length and the calculation by the soft-output calculation means of the soft output outside the truncation length are performed in parallel.
2. The soft-output decoder for convolutional codes as claimed in claim 1, characterized in that
the probability calculation means and the soft-output calculation means calculate the product of the probabilities by the addition of their logarithms, and calculate the sum of the probabilities by the maximum-value operation on their logarithms.
3. The soft-output decoder for convolutional codes as claimed in claim 1, characterized in that
the probability calculation means and the soft-output calculation means calculate the product of the probabilities by the addition of their logarithms, and calculate the sum of the probabilities by the maximum-value operation on their logarithms together with a one-variable function operation and addition.
4. The soft-output decoder for convolutional codes as claimed in claim 1, characterized in that
the probability calculation means comprises:
first calculation means for calculating, for each received value, a first probability determined by the code output pattern and the received value;
second calculation means for calculating, for each received value, a second probability of reaching each state in time-series order from the coding start state, based on the first probability; and
third calculation means for calculating, for each received value, a third probability of reaching each state in the order reverse to the time series from the truncation state, based on the first probability,
and the probability information that the probability storage means updates within the truncation length is the information of the third probability.
5. The soft-output decoder for convolutional codes as claimed in claim 4, characterized in that
the first probability calculation means temporarily stores the first probability through the probability storage means, and reads it out and outputs it successively in the orders corresponding to the respective processing of the second and third probability calculation means and the soft-output calculation means, and
the third probability calculation means partitions the first probability in units of a prescribed truncation length, sets reference time points in units of the truncation length on the time axis, processes the first probability in a plurality of sequences, in parallel and simultaneously, taking as a unit at least the truncation-length range corresponding to each reference time point, thereby calculating the third probability in a plurality of sequences, selects from the third probabilities of the plurality of sequences calculated in this way the third probability corresponding to the truncation length, temporarily stores the third probability corresponding to the received value through the probability storage means, and reads it out and outputs it successively in the order corresponding to the processing of the soft-output calculation means.
6. The soft-output decoder for convolutional codes as claimed in claim 5, characterized in that
the first probability calculation means
outputs the first probability temporarily stored by the probability storage means to the second probability calculation means in time-axis order, delayed by a prescribed time,
outputs the first probability temporarily stored by the probability storage means to the third probability calculation means in a plurality of sequences in reverse time-axis order, in parallel and simultaneously, taking as a unit a range containing at least the first probability of the truncation length corresponding to each reference time point, and
outputs the first probability temporarily stored by the probability storage means to the soft-output calculation means in time-axis order, delayed by a prescribed time.
7. The soft-output decoder for convolutional codes as claimed in claim 5, characterized in that each reference time point is set at the end point of the following truncation length.
8. A soft-output decoding method for convolutional codes, comprising:
a first step of obtaining probability information for each transition state of a convolutional code;
a second step of storing at least a truncation length of the probability information obtained in the first step in a recording medium; and
a third step of obtaining a soft output using the probability information stored in the recording medium in the second step,
characterized in that
the updating in the second step of the probability information within the truncation length and the calculation in the third step of the soft output outside the truncation length are performed in parallel.
9. A soft-output decoding method for convolutional codes, characterized by:
a first probability calculation step of successively calculating, for each received value, a first probability determined by the code output pattern and the received value;
a second probability calculation step of calculating, for each received value, a second probability of reaching each state in the time-axis direction, based on the first probability;
a third probability calculation step of calculating, for each received value, a third probability of reaching each state in the reverse direction of the time axis from prescribed reference time points, based on the first probability; and
a soft-output calculation step of calculating the soft output from the first, second and third probabilities,
wherein the third probability calculation step partitions the first probability in units of a prescribed truncation length, sets the reference time points in units of the truncation length on the time axis, processes the first probability in a plurality of sequences, in parallel and simultaneously, taking as a unit at least the truncation-length range corresponding to each reference time point, thereby calculating the third probability in a plurality of sequences, selects from the third probabilities of the plurality of sequences calculated in this way the third probability corresponding to the truncation length, and outputs the third probability corresponding to the received value, and
wherein, in the first probability calculation step, the first probability is temporarily stored by probability storage means and read out and output successively in the orders corresponding to the respective processing of the second and third probability calculation steps and the soft-output calculation step, and, in the third probability calculation step, the third probability is temporarily stored by probability storage means and read out and output successively in the order corresponding to the processing of the soft-output calculation step.
10. The soft-output decoding method for convolutional codes as claimed in claim 9, characterized in that
the first probability temporarily stored by the probability storage means is output to the second probability calculation step in time-axis order, delayed by a prescribed time, is output to the third probability calculation step in a plurality of sequences in reverse time-axis order, in parallel and simultaneously, taking as a unit a range containing at least the first probability of the truncation length corresponding to each reference time point, and is output to the soft-output calculation step in time-axis order, delayed by a prescribed time.
11. The soft-output decoding method for convolutional codes as claimed in claim 9, characterized in that each reference time point is set at the end point of the following truncation length.
12. The soft-output decoding method for convolutional codes as claimed in claim 9, characterized in that, in the first, second and third probability calculation steps and in the soft-output calculation step, the product of the probabilities is calculated by the addition of their logarithms, and the sum of the probabilities is calculated by the maximum-value operation on their logarithms.
13. The soft-output decoding method for convolutional codes as claimed in claim 9, characterized in that, in the first, second and third probability calculation steps and in the soft-output calculation step, the product of the probabilities is calculated by the addition of their logarithms, and the sum of the probabilities is calculated by the maximum-value operation on their logarithms together with a one-variable function operation and addition.
CNB998008192A 1998-05-28 1999-05-17 Soft output decoder for convolution code and soft output decoding method Expired - Fee Related CN1144378C (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP14676298 1998-05-28
JP146762/1998 1998-05-28
JP238987/1998 1998-08-25
JP23898798 1998-08-25

Publications (2)

Publication Number Publication Date
CN1272253A CN1272253A (en) 2000-11-01
CN1144378C true CN1144378C (en) 2004-03-31

Family

ID=26477498

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB998008192A Expired - Fee Related CN1144378C (en) 1998-05-28 1999-05-17 Soft output decoder for convolution code and soft output decoding method

Country Status (6)

Country Link
US (1) US6192084B1 (en)
EP (2) EP1311069A1 (en)
JP (1) JP4178752B2 (en)
KR (1) KR100544555B1 (en)
CN (1) CN1144378C (en)
WO (1) WO1999062183A1 (en)



Also Published As

Publication number Publication date
EP1017178A4 (en) 2001-02-21
EP1017178A1 (en) 2000-07-05
WO1999062183A1 (en) 1999-12-02
JP4178752B2 (en) 2008-11-12
CN1272253A (en) 2000-11-01
KR20010022310A (en) 2001-03-15
US6192084B1 (en) 2001-02-20
KR100544555B1 (en) 2006-01-24
EP1311069A1 (en) 2003-05-14


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20040331

Termination date: 20130517