CN102340320A - Bidirectional and parallel decoding method of convolutional Turbo code - Google Patents

Bidirectional and parallel decoding method of convolutional Turbo code

Info

Publication number
CN102340320A
CN102340320A
Authority
CN
China
Prior art keywords
time instant
likelihood ratio
branch metric
state
metric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011101917279A
Other languages
Chinese (zh)
Other versions
CN102340320B (en)
Inventor
王臣
周亮
詹明
曾黎黎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN 201110191727 priority Critical patent/CN102340320B/en
Publication of CN102340320A publication Critical patent/CN102340320A/en
Application granted granted Critical
Publication of CN102340320B publication Critical patent/CN102340320B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The invention provides a decoding method for convolutional Turbo codes that reduces decoding delay and saves memory. In the component decoding process, the forward recursion and the backward recursion are carried out simultaneously and are each divided into two stages of equal computational load, and the a-posteriori likelihood ratio information is computed successively from the beginning of the second stage. The delay from the start of the recursion operations to the end of the a-posteriori likelihood ratio computation is thereby halved compared with the conventional decoding process. Moreover, whereas the conventional a-posteriori likelihood ratio computation is serial, in the invention it is performed bidirectionally in parallel, so its computation time overlaps with that of the recursions and no additional computation time needs to be allocated. In addition, the bidirectional parallel structure halves the memory used to store the state metrics; further, by splitting the branch-metric computation, redundant computation is removed and the space for storing the branch metrics is also halved.

Description

Bidirectional parallel decoding method of convolutional Turbo code (CTC)
Technical field
The invention belongs to the field of communications and relates generally to channel coding, in particular to techniques for Turbo code decoding.
Background art
Since the concept of iterative decoding was proposed, Turbo codes have been widely studied and applied. The convolutional Turbo code (CTC), with its higher coding efficiency, faster coding rate and larger free distance, has developed rapidly in recent years and has been adopted by the 802.16e and 802.16m standards as the forward error correction scheme of the physical layer.
The 802.16m standard adopts the double-binary convolutional Turbo code (DB-CTC) as one of its channel coding schemes. The systematic DB-CTC encoder takes 2 information bits in parallel at each time instant and outputs 6 bits. Because a two-pass (circular) encoding scheme is adopted, the encoder state before and after encoding is identical, so no tail bits are required. These same characteristics, however, also make DB-CTC decoding more complex.
The DB-CTC decoder adopts a structure of two identical component decoders iterating in parallel. The data to be decoded (the soft information obtained by soft demodulation of the transmitted CTC-encoded signal, i.e. the channel soft-output likelihood ratios) are fed to the two component decoders (component decoder 1 and component decoder 2). Component decoder 1 outputs a-posteriori likelihood ratio information, which is converted into extrinsic information, interleaved, and passed to component decoder 2 as its a-priori likelihood ratio information; component decoder 2 in turn outputs a-posteriori likelihood ratio information after decoding, which is converted into extrinsic information and, after deinterleaving, becomes the a-priori likelihood ratio information of component decoder 1, thereby completing one iteration. When the preset maximum number of iterations is reached, the a-posteriori likelihood ratio information output by component decoder 2 is deinterleaved and hard-decided to obtain the final decoding result. By exchanging extrinsic information between the two component decoders in this way, iterative CTC decoding makes the decoding result converge gradually and thus improves decoding performance.
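As an illustration of the iterative exchange just described, the following sketch shows the control flow only; component_decode, interleave, deinterleave and hard_decision are hypothetical placeholders (the patent does not define these interfaces), soft_in_2 is assumed to be the channel soft information reordered for component decoder 2, and computing the extrinsic information as posterior minus prior is a common simplification rather than the patent's exact formula.

```python
# Minimal sketch of the parallel-concatenated iteration described above.
# component_decode(), interleave(), deinterleave() and hard_decision() are
# hypothetical placeholders, not interfaces defined by the patent.
def turbo_decode(soft_in_1, soft_in_2, max_iterations,
                 component_decode, interleave, deinterleave, hard_decision):
    n = len(soft_in_1)
    a_priori_1 = [0.0] * n                      # no prior knowledge at the start
    posterior_2 = [0.0] * n
    for _ in range(max_iterations):
        # Component decoder 1: a-posteriori LLRs -> extrinsic -> interleave.
        posterior_1 = component_decode(soft_in_1, a_priori_1)
        a_priori_2 = interleave([p - a for p, a in zip(posterior_1, a_priori_1)])
        # Component decoder 2: a-posteriori LLRs -> extrinsic -> deinterleave.
        posterior_2 = component_decode(soft_in_2, a_priori_2)
        a_priori_1 = deinterleave([p - a for p, a in zip(posterior_2, a_priori_2)])
    # After the last iteration: deinterleave decoder 2's output and hard-decide.
    return hard_decision(deinterleave(posterior_2))
```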
Because the implementation of the MAP algorithm contains a large number of multiplications and intermediate variables, its decoding complexity is high and its decoding delay is long. Therefore the improved Log-MAP algorithm, or its simplified form the Max-Log-MAP algorithm, is commonly used in simulation studies and engineering. Although the multiplications are reduced, a component decoder using the Log-MAP or Max-Log-MAP algorithm still suffers from long decoding delay.
The component decoder uses the MAP algorithm to compute and output a-posteriori likelihood ratio information from the received channel soft information. The decoding process of a component decoder using the Log-MAP or Max-Log-MAP algorithm comprises: the computation of the branch metrics, the forward recursion, the backward recursion, and the computation of the a-posteriori likelihood ratio information. The a-posteriori likelihood ratio information L_i at time i (i = 0, ..., N-1) is obtained from the forward state metrics at time i, the backward state metrics at time i+1 and the branch metrics at time i, where N is the length of the double-binary bit-pair sequence fed to the CTC encoder. Usually, after computing the branch metrics for times 0 to N-1 from the soft-demodulated channel information, the component decoder must first compute and store the forward state metrics from time 0 to time N through the forward recursion, and only afterwards compute and store the backward state metrics from time N down to time 0 through the backward recursion. Finally the a-posteriori likelihood ratio information is computed: the forward state metrics at times 0 to N-1 and the backward state metrics at times 1 to N are used directly in the computation of the a-posteriori likelihood ratio information at times 0 to N-1, while the forward state metric at time N and the backward state metric at time 0 serve to initialize the corresponding state metrics of the next iteration.
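For later contrast with the method of the invention, the following sketch spells out this conventional serial schedule. The containers gamma (per-instant branch metrics keyed by trellis edge), states, edges (trellis edges grouped by input pair) and the boundary metrics alpha_0 and beta_N are assumed inputs, not structures defined by the patent; taking the LLRs relative to the input pair (0, 0) is likewise a common convention, not a requirement of the text.

```python
import math

def log_sum_exp(xs):
    """Jacobian logarithm used by Log-MAP: a numerically stable log(sum(exp(x)))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def serial_component_decode(gamma, states, edges, alpha_0, beta_N):
    """Conventional serial schedule: full forward pass, then full backward pass,
    then the a-posteriori LLR pass.  gamma[k][(s_prev, s)] is the branch metric of
    edge (s_prev, s) at time k; edges[u] lists the edges labelled with input pair u."""
    N = len(gamma)
    # Forward recursion: alpha[0] .. alpha[N] are all computed and stored first.
    alpha = [dict(alpha_0)]
    for k in range(N):
        alpha.append({s: log_sum_exp([alpha[k][sp] + gamma[k][(sp, s2)]
                                      for (sp, s2) in gamma[k] if s2 == s])
                      for s in states})
    # Backward recursion: beta[N] .. beta[0], started only after the forward pass.
    beta = [None] * N + [dict(beta_N)]
    for k in range(N - 1, -1, -1):
        beta[k] = {sp: log_sum_exp([gamma[k][(sp2, s)] + beta[k + 1][s]
                                    for (sp2, s) in gamma[k] if sp2 == sp])
                   for sp in states}
    # A-posteriori LLR pass, run serially after both recursions have finished.
    llr = []
    for k in range(N):
        metric = {u: log_sum_exp([alpha[k][sp] + gamma[k][(sp, s)] + beta[k + 1][s]
                                  for (sp, s) in edges[u]])
                  for u in edges}
        llr.append({u: metric[u] - metric[(0, 0)] for u in edges})  # relative to u=(0,0)
    return llr
```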
Summary of the invention
The technical problem to be solved by the invention is to provide a decoding method for convolutional Turbo codes that reduces decoding delay and saves memory by adopting a bidirectional parallel decoding structure.
The technical means adopted by the invention to solve the above problem is a bidirectional parallel decoding method of convolutional Turbo code, comprising:
inputting the data to be decoded into an iterative decoder formed by two component decoders connected in parallel;
when the preset maximum number of iterations has not been reached, converting the a-posteriori likelihood ratio information output by one component decoder after decoding into extrinsic information and, after interleaving or deinterleaving, feeding it to the other component decoder as a-priori likelihood ratio information;
when the preset maximum number of iterations is reached, deinterleaving and hard-deciding the a-posteriori likelihood ratio information output by the component decoder that decodes last, to obtain the decoding result;
characterized in that the decoding process of each component decoder comprises the following steps:
The branch-metric computation and forward/backward-recursion initialization step: using the input data to be decoded and the a-priori likelihood ratio information, computing and storing the first N/2 branch metrics, for times 0 to (N/2)-1, and the last N/2 branch metrics, for times N/2 to N-1, and initializing the forward state metrics at time 0 and the backward state metrics at time N; said N being the length of the double-binary bit-pair sequence;
The first-stage step: taking the initialized forward state metrics at time 0 as the starting point, performing the forward recursion with the first N/2 branch metrics to obtain and store the forward state metrics for times 0 to N/2 in succession; and simultaneously, taking the backward state metrics at time N as the starting point, performing the backward recursion with the last N/2 branch metrics to obtain and store the backward state metrics for times N down to N/2 in succession;
The second-stage step: taking the backward state metrics at time N/2 as the starting point, performing the backward recursion with the first N/2 branch metrics to obtain the backward state metrics for times N/2 down to 0 in succession, and computing the a-posteriori likelihood ratios in succession with the forward state metrics for times (N/2)-1 down to 0 stored in the first stage and the first N/2 branch metrics, thereby obtaining the a-posteriori likelihood ratio information of the first N/2 time instants, from (N/2)-1 down to 0; and simultaneously, taking the forward state metrics at time N/2 as the starting point, performing the forward recursion with the last N/2 branch metrics to obtain the forward state metrics for times N/2 to N in succession, and computing the a-posteriori likelihood ratios in succession with the backward state metrics for times (N/2)+1 to N stored in the first stage and the last N/2 branch metrics, thereby obtaining the a-posteriori likelihood ratio information of the last N/2 time instants, from N/2 to N-1.
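A minimal sketch of this two-stage schedule is given below. The helpers forward_step, backward_step and llr_step are hypothetical: they advance the state metrics by one trellis section and combine alpha, gamma and beta into one a-posteriori LLR. In hardware the two halves of each loop body run simultaneously; here they merely share a loop index.

```python
def bidirectional_two_stage_decode(gamma, alpha_0, beta_N,
                                   forward_step, backward_step, llr_step):
    """Two-stage bidirectional schedule sketched from the steps above.
    gamma[k] holds the branch metrics (or split factors) of trellis section k."""
    N = len(gamma)
    half = N // 2

    # Stage I: forward and backward recursions over opposite halves;
    # both halves of the state metrics are stored.
    alpha = {0: alpha_0}
    beta = {N: beta_N}
    for i in range(half):
        alpha[i + 1] = forward_step(alpha[i], gamma[i])                   # 0 -> N/2
        beta[N - 1 - i] = backward_step(beta[N - i], gamma[N - 1 - i])    # N -> N/2

    # Stage II: the recursions swap halves; every newly produced state metric is
    # consumed immediately by an LLR computation, so nothing extra is stored.
    llr = [None] * N
    for i in range(half):
        k_lo = half - 1 - i                       # (N/2)-1 .. 0
        k_hi = half + i                           # N/2 .. N-1
        beta[k_lo] = backward_step(beta[k_lo + 1], gamma[k_lo])
        llr[k_lo] = llr_step(alpha[k_lo], gamma[k_lo], beta[k_lo + 1])
        alpha[k_hi + 1] = forward_step(alpha[k_hi], gamma[k_hi])
        llr[k_hi] = llr_step(alpha[k_hi], gamma[k_hi], beta[k_hi + 1])

    # alpha[N] and beta[0] initialise the corresponding metrics of the next iteration.
    return llr, alpha[N], beta[0]
```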
In an existing component decoder, the backward recursion can only begin after the forward recursion has finished, and only then can the a-posteriori likelihood ratio information be computed. In the invention, the forward recursion and the backward recursion are carried out simultaneously, each divided into two stages of comparable computational load, and the a-posteriori likelihood ratio information can be computed successively from the beginning of the second stage. The delay from the start of the recursion operations to the end of the a-posteriori likelihood ratio computation is therefore halved compared with the existing decoding process. Moreover, whereas the existing a-posteriori likelihood ratio computation is serial, in the invention it is carried out bidirectionally in parallel; its computation time overlaps with that of the recursions, so no additional computation time needs to be allocated. In addition, the bidirectional parallel structure halves the memory used to store the state metrics.
Specifically, the branch metric is computed as:
γ_k(s′, s) = L_a(u_k) + (1/2)·v_{k,a}·r_{k,a} + (1/2)·v_{k,b}·r_{k,b} + (1/2)·v_{k,y}·r_{k,y} + (1/2)·v_{k,w}·r_{k,w}
where k denotes the current time instant, s′ is a possible state at the current time, s is a possible state at the next time instant, γ_k(s′, s) is the branch metric for the transition from state s′ at time k to state s at time k+1, u_k = (u_{k,a}, u_{k,b}) is the pair of information bits fed to the encoder at time k, L_a(u_k) is the a-priori likelihood ratio information at time k, v_k = (v_{k,a}, v_{k,b}, v_{k,y}, v_{k,w}) are the transmitted bits, and r_k = (r_{k,a}, r_{k,b}, r_{k,y}, r_{k,w}) denotes the data to be decoded received from the channel at time k (the soft values after soft demodulation).
Further, in order to reduce the computational load of the branch metrics, the branch metric is split into two factors p_k and q_k when it is computed:
p_k = L_a(u_k) + (1/2)·v_{k,a}·r_{k,a} + (1/2)·v_{k,b}·r_{k,b}
q_k = (1/2)·v_{k,y}·r_{k,y} + (1/2)·v_{k,w}·r_{k,w}
Depending on the values of v_k = (v_{k,a}, v_{k,b}, v_{k,y}, v_{k,w}), p_k takes one of four values p_{k,00}, p_{k,01}, p_{k,10}, p_{k,11}, and q_k takes one of four values q_{k,00}, q_{k,01}, q_{k,10}, q_{k,11}; hence the 32 branch metrics between any two adjacent time instants (16 distinct values) can all be constructed from 8 factors. Whereas the classical Log-MAP and Max-Log-MAP algorithms require 80 multiplications and 64 additions to compute the branch metrics of each time instant, the split branch metrics require only 24 multiplications and 28 additions (including the additions that assemble the metrics from the factors); and whereas the classical algorithms must store 16 distinct values per time instant, the split form only needs to store 8 factors.
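The following sketch illustrates the factor split. The antipodal mapping of a bit to ±1 and the indexing of the a-priori information La by the input pair (with La[(0,0)] typically taken as 0) are assumptions made for illustration, not details fixed by the claims.

```python
def branch_metric_factors(r, La):
    """Split branch metric of the description: gamma_k = p_k + q_k.
    r = (r_a, r_b, r_y, r_w) are the received soft values at time k and
    La[(ua, ub)] is the a-priori LLR of the input pair (assumed indexing)."""
    r_a, r_b, r_y, r_w = r
    sign = lambda bit: 1.0 if bit else -1.0          # assumed bit -> +/-1 mapping
    # Four p factors, one per possible input pair (u_a, u_b).
    p = {(ua, ub): La[(ua, ub)] + 0.5 * sign(ua) * r_a + 0.5 * sign(ub) * r_b
         for ua in (0, 1) for ub in (0, 1)}
    # Four q factors, one per possible parity pair (v_y, v_w).
    q = {(vy, vw): 0.5 * sign(vy) * r_y + 0.5 * sign(vw) * r_w
         for vy in (0, 1) for vw in (0, 1)}
    # 8 stored values; any of the 16 distinct branch metrics is p[input] + q[parity].
    return p, q
```

For example, a branch whose input pair is (1, 0) and whose parity pair is (0, 1) has metric p[(1, 0)] + q[(0, 1)], so only 8 values per trellis section need to be stored instead of 16.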
The beneficial effect of the invention is that, in the decoding process of the component decoder, the state metrics and the a-posteriori likelihood ratio information are computed bidirectionally in parallel: the forward and backward recursions are carried out simultaneously, halving their execution time, and the a-posteriori likelihood ratio computation is merged into the time of the recursions, which greatly reduces the decoding delay and the storage required for the state metrics. Further, by splitting the computation of the branch metrics, redundant computation is removed and the space for storing the branch metrics is halved.
Description of drawings
Fig. 1 is the trellis diagram of the DB-CTC encoder between any two adjacent time instants;
Fig. 2 is a schematic diagram of the decoding performed inside a component decoder.
Embodiment
The invention improves the decoding process inside the component decoders of convolutional Turbo decoding; the other processing steps are unchanged. The convolutional Turbo decoding process comprises:
inputting the data to be decoded and the a-priori likelihood ratio information into the iterative decoder formed by the two parallel component decoders;
when the preset maximum number of iterations has not been reached, converting the a-posteriori likelihood ratio information output by one component decoder after decoding into extrinsic information and, after interleaving or deinterleaving, feeding it to the other component decoder as a-priori likelihood ratio information;
when the preset maximum number of iterations is reached, deinterleaving and hard-deciding the a-posteriori likelihood ratio information output by the component decoder that decodes last, to obtain the decoding result.
The present embodiment improves the Log-MAP decoding method of the component decoder without changing its original error performance; the improvement comprises the splitting of the branch metrics and the bidirectional parallel computation.
The branch metric is the key quantity linking the input and output of the component decoder: it is used both in the recursive computation of the forward and backward state metrics and in the computation of the a-posteriori likelihood ratio information, and the final purpose of the component decoder is to output the a-posteriori likelihood ratio information (whose computation requires the forward state metrics, the backward state metrics and the branch metrics).
As shown in Fig. 1, there are 32 branch metrics between any two adjacent time instants, with 16 distinct values in total. The branch metric is computed as:
γ_k(s′, s) = L_a(u_k) + (1/2)·v_{k,a}·r_{k,a} + (1/2)·v_{k,b}·r_{k,b} + (1/2)·v_{k,y}·r_{k,y} + (1/2)·v_{k,w}·r_{k,w}
where s′ is a possible state at the current time, s is a possible state at the next time instant, γ_k(s′, s) is the branch metric for the transition from state s′ at time k to state s at time k+1, u_k = (u_{k,a}, u_{k,b}) is the pair of information bits fed to the encoder at time k, L_a(u_k) is the a-priori likelihood ratio information at time k, v_k = (v_{k,a}, v_{k,b}, v_{k,y}, v_{k,w}) are the transmitted bits, and r_k = (r_{k,a}, r_{k,b}, r_{k,y}, r_{k,w}) denotes the soft values received from the channel at time k.
The branch metric is split into two factors p_k and q_k, where:
p_k = L_a(u_k) + (1/2)·v_{k,a}·r_{k,a} + (1/2)·v_{k,b}·r_{k,b}
q_k = (1/2)·v_{k,y}·r_{k,y} + (1/2)·v_{k,w}·r_{k,w}
Depending on the values of v_k = (v_{k,a}, v_{k,b}, v_{k,y}, v_{k,w}), p_k takes one of four values p_{k,00}, p_{k,01}, p_{k,10}, p_{k,11} and q_k takes one of four values q_{k,00}, q_{k,01}, q_{k,10}, q_{k,11}, so the 16 distinct branch metrics between any two adjacent time instants can all be constructed from 8 factors. Computing the factors into which the branch metric is split therefore reduces redundant computation and saves memory.
After the branch-metric split is introduced, the forward recursion that yields the forward state metrics α_{k+1} at time k+1 (k = 0, ..., N-1) is: add the forward state metric matrix A_k at time k to the forward recursion matrix C_k at time k; for each row i of the resulting matrix, exponentiate its elements, sum them, and take the logarithm of the sum; the result is the forward state metric of the state corresponding to row i at time k+1, i = 1, 2, ..., 8.
Here the forward state metric matrix A_k at time k and the forward recursion matrix C_k at time k, whose element-wise sum A_k + C_k is taken, are:

A_k =
| α_k(s_0)  α_k(s_1)  α_k(s_6)  α_k(s_7) |
| α_k(s_2)  α_k(s_4)  α_k(s_3)  α_k(s_5) |
| α_k(s_5)  α_k(s_3)  α_k(s_4)  α_k(s_2) |
| α_k(s_7)  α_k(s_1)  α_k(s_6)  α_k(s_0) |
| α_k(s_1)  α_k(s_7)  α_k(s_0)  α_k(s_6) |
| α_k(s_3)  α_k(s_5)  α_k(s_2)  α_k(s_4) |
| α_k(s_4)  α_k(s_2)  α_k(s_5)  α_k(s_3) |
| α_k(s_6)  α_k(s_0)  α_k(s_7)  α_k(s_1) |

C_k =
| p_{k,00}+q_{k,00}  p_{k,01}+q_{k,10}  p_{k,10}+q_{k,11}  p_{k,11}+q_{k,01} |
| p_{k,00}+q_{k,10}  p_{k,01}+q_{k,00}  p_{k,10}+q_{k,01}  p_{k,11}+q_{k,11} |
| p_{k,00}+q_{k,11}  p_{k,01}+q_{k,01}  p_{k,10}+q_{k,00}  p_{k,11}+q_{k,10} |
| p_{k,00}+q_{k,01}  p_{k,01}+q_{k,11}  p_{k,10}+q_{k,10}  p_{k,11}+q_{k,00} |
| p_{k,00}+q_{k,00}  p_{k,01}+q_{k,10}  p_{k,10}+q_{k,11}  p_{k,11}+q_{k,01} |
| p_{k,00}+q_{k,10}  p_{k,01}+q_{k,00}  p_{k,10}+q_{k,01}  p_{k,11}+q_{k,11} |
| p_{k,00}+q_{k,11}  p_{k,01}+q_{k,01}  p_{k,10}+q_{k,00}  p_{k,11}+q_{k,10} |
| p_{k,00}+q_{k,01}  p_{k,01}+q_{k,11}  p_{k,10}+q_{k,10}  p_{k,11}+q_{k,00} |
α_k(s_i) (i = 0, 1, ..., 7) denotes the forward state metric of state s_i at time k. It can be seen from the forward recursion matrix C_k that every branch metric is composed of one p_k factor and one q_k factor. Rows 1 and 5, 2 and 6, 3 and 7, and 4 and 8 of C_k are identical, so only the first four rows of C_k need to be computed in order to express the whole matrix. Any factor p_{k,ij} or q_{k,ij} appears 8 times in C_k but needs to be computed only once, about a quarter of the corresponding computation in the classical Log-MAP algorithm, which eliminates the redundant computation of the original algorithm.
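A sketch of one forward recursion step built directly from the matrices above: A_ROWS transcribes the predecessor states in each row of A_k, C_ROWS transcribes the (p, q) factor indices in each row of C_k, and the row-wise exp-sum-log is the Jacobian logarithm. The dictionary-based p and q containers follow the factor sketch given earlier and are an assumed representation.

```python
import math

def log_sum_exp(xs):
    """Jacobian logarithm: log(sum(exp(x))), computed stably."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Row i of A_k: the four predecessor states feeding the state of row i
# (transcribed from the matrix above).
A_ROWS = [
    (0, 1, 6, 7), (2, 4, 3, 5), (5, 3, 4, 2), (7, 1, 6, 0),
    (1, 7, 0, 6), (3, 5, 2, 4), (4, 2, 5, 3), (6, 0, 7, 1),
]
# Row i of C_k as (p-index, q-index) pairs; rows 5-8 repeat rows 1-4,
# which is why only the first four rows need to be computed.
C_ROWS_HALF = [
    (("00", "00"), ("01", "10"), ("10", "11"), ("11", "01")),
    (("00", "10"), ("01", "00"), ("10", "01"), ("11", "11")),
    (("00", "11"), ("01", "01"), ("10", "00"), ("11", "10")),
    (("00", "01"), ("01", "11"), ("10", "10"), ("11", "00")),
]
C_ROWS = C_ROWS_HALF + C_ROWS_HALF

def forward_step(alpha_k, p, q):
    """One forward recursion step alpha_k -> alpha_{k+1} using the split factors
    p['00']..p['11'] and q['00']..q['11']; alpha_k is a list of 8 state metrics."""
    return [log_sum_exp([alpha_k[sp] + p[pi] + q[qi]
                         for sp, (pi, qi) in zip(A_ROWS[i], C_ROWS[i])])
            for i in range(8)]
```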
Similarly, the backward recursion that yields the backward state metrics β_k at time k is: add the backward state metric matrix B_{k+1} to the backward recursion matrix D_k; for each row of the resulting matrix, exponentiate its elements, sum them, and take the logarithm of the sum; the result is the backward recursion result corresponding to that row.
Here the backward state metric matrix B_{k+1} at time k+1 and the backward recursion matrix D_k at time k, whose element-wise sum B_{k+1} + D_k is taken, are:

B_{k+1} =
| β_{k+1}(s_0)  β_{k+1}(s_3)  β_{k+1}(s_4)  β_{k+1}(s_7) |
| β_{k+1}(s_4)  β_{k+1}(s_7)  β_{k+1}(s_0)  β_{k+1}(s_3) |
| β_{k+1}(s_1)  β_{k+1}(s_2)  β_{k+1}(s_5)  β_{k+1}(s_6) |
| β_{k+1}(s_5)  β_{k+1}(s_6)  β_{k+1}(s_1)  β_{k+1}(s_2) |
| β_{k+1}(s_1)  β_{k+1}(s_2)  β_{k+1}(s_5)  β_{k+1}(s_6) |
| β_{k+1}(s_5)  β_{k+1}(s_6)  β_{k+1}(s_1)  β_{k+1}(s_2) |
| β_{k+1}(s_0)  β_{k+1}(s_3)  β_{k+1}(s_4)  β_{k+1}(s_7) |
| β_{k+1}(s_4)  β_{k+1}(s_7)  β_{k+1}(s_0)  β_{k+1}(s_3) |

D_k =
| p_{k,00}+q_{k,00}  p_{k,11}+q_{k,00}  p_{k,10}+q_{k,11}  p_{k,01}+q_{k,11} |
| p_{k,00}+q_{k,00}  p_{k,11}+q_{k,00}  p_{k,10}+q_{k,11}  p_{k,01}+q_{k,11} |
| p_{k,00}+q_{k,10}  p_{k,11}+q_{k,10}  p_{k,10}+q_{k,01}  p_{k,01}+q_{k,01} |
| p_{k,00}+q_{k,10}  p_{k,11}+q_{k,10}  p_{k,10}+q_{k,01}  p_{k,01}+q_{k,01} |
| p_{k,01}+q_{k,00}  p_{k,10}+q_{k,00}  p_{k,11}+q_{k,11}  p_{k,00}+q_{k,11} |
| p_{k,01}+q_{k,00}  p_{k,10}+q_{k,00}  p_{k,11}+q_{k,11}  p_{k,00}+q_{k,11} |
| p_{k,01}+q_{k,10}  p_{k,10}+q_{k,10}  p_{k,11}+q_{k,01}  p_{k,00}+q_{k,01} |
| p_{k,01}+q_{k,10}  p_{k,10}+q_{k,10}  p_{k,11}+q_{k,01}  p_{k,00}+q_{k,01} |
β_{k+1}(s_i) (i = 0, 1, ..., 7) denotes the backward state metric of state s_i at time k+1.
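The corresponding backward step, mirroring the forward sketch, is given below under the assumption (not stated explicitly in the text) that row i of B_{k+1} + D_k yields the backward state metric of state s_i at time k.

```python
import math

def log_sum_exp(xs):
    """Jacobian logarithm: log(sum(exp(x))), computed stably."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Row i of B_{k+1}: the four successor states combined in that row
# (transcribed from the matrix above).
B_ROWS = [
    (0, 3, 4, 7), (4, 7, 0, 3), (1, 2, 5, 6), (5, 6, 1, 2),
    (1, 2, 5, 6), (5, 6, 1, 2), (0, 3, 4, 7), (4, 7, 0, 3),
]
# Row i of D_k as (p-index, q-index) pairs; adjacent rows are pairwise equal.
D_ROWS = [
    (("00", "00"), ("11", "00"), ("10", "11"), ("01", "11")),
    (("00", "00"), ("11", "00"), ("10", "11"), ("01", "11")),
    (("00", "10"), ("11", "10"), ("10", "01"), ("01", "01")),
    (("00", "10"), ("11", "10"), ("10", "01"), ("01", "01")),
    (("01", "00"), ("10", "00"), ("11", "11"), ("00", "11")),
    (("01", "00"), ("10", "00"), ("11", "11"), ("00", "11")),
    (("01", "10"), ("10", "10"), ("11", "01"), ("00", "01")),
    (("01", "10"), ("10", "10"), ("11", "01"), ("00", "01")),
]

def backward_step(beta_k1, p, q):
    """One backward recursion step beta_{k+1} -> beta_k, mirroring forward_step."""
    return [log_sum_exp([beta_k1[s] + p[pi] + q[qi]
                         for s, (pi, qi) in zip(B_ROWS[i], D_ROWS[i])])
            for i in range(8)]
```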
Because the forward and backward recursion operations are mutually independent and symmetric, the two recursions can be carried out bidirectionally in parallel at the same time, and the computation of the likelihood ratio information can likewise be performed in parallel with them.
The bidirectional parallel computation inside the component decoder is shown in Fig. 2, where the dashed line divides the bidirectional parallel structure into two stages and the hollow arrows indicate the direction of computation. The state metrics in solid-line boxes occupy memory cells, while the state metrics in dashed-line boxes are only stored temporarily (i.e. they represent the state-metric memory that is saved). α_k and β_k denote the complete sets of forward and backward state metrics at time k, respectively. The soft information received by the component decoder is used to compute the branch-metric factors, which are stored in memory; P_k = {p_{k,00}, p_{k,01}, p_{k,10}, p_{k,11}} and Q_k = {q_{k,00}, q_{k,01}, q_{k,10}, q_{k,11}} are the 8 factors at time k. α_0 and β_N are used to initialize the recursions; α_{N/2} and β_{N/2}, obtained by the first-stage recursions, serve as the starting points of the second-stage state-metric recursions; α_N and β_0 are used to initialize the corresponding state metrics of the next iteration. The a-posteriori likelihood ratio information is the output of the component decoder; in Fig. 2 the modules containing the a-posteriori likelihood ratios L_k (k = 0, 1, 2, ..., N-1) are responsible for computing it.
The decoding process of the component decoder comprises the following steps:
The branch-metric splitting and computation step: using the input data to be decoded and the a-priori likelihood ratio information, computing and storing all the P and Q factors of the first N/2 time instants, from 0 to (N/2)-1, and of the last N/2 time instants, from N/2 to N-1;
The initialization step: initializing the forward state metrics at time 0 and the backward state metrics at time N, where N is the length of the double-binary bit-pair sequence;
The first-stage step: carrying out the first-stage forward recursion and the first-stage backward recursion simultaneously;
First-stage forward recursion: taking the forward state metrics α_0 at time 0 as the starting point, constructing the forward state metric matrices A_k and, from the first N/2 branch-metric factors P and Q, the forward recursion matrices C_k (k = 0, 1, 2, ..., (N/2)-1); obtaining the (N/2+1)×8 forward state metrics for times 0 to N/2 in succession through the forward recursion, and storing them;
First-stage backward recursion: taking the backward state metrics β_N at time N as the starting point, constructing the backward state metric matrices B_{k+1} and, from the last N/2 branch-metric factors P and Q, the backward recursion matrices D_k (k = N-1, N-2, ..., N/2); obtaining the (N/2+1)×8 backward state metrics for times N down to N/2 in succession through the backward recursion, and storing them;
The second-stage step:
Second-stage backward recursion and likelihood ratio computation: taking the backward state metrics β_{N/2} at time N/2 as the starting point, constructing the backward state metric matrices B_{k+1} and, from the first N/2 branch-metric factors P and Q, the backward recursion matrices D_k (k = N/2-1, N/2-2, ..., 0); obtaining the (N/2+1)×8 backward state metrics for times N/2 down to 0 in succession through the backward recursion, and computing, in succession, with the forward state metrics for times (N/2)-1 down to 0 stored in the first stage and the first N/2 branch-metric factors P and Q, the a-posteriori likelihood ratio information of the first N/2 time instants, from (N/2)-1 down to 0; the backward state metrics β_0 at time 0 obtained at the end of the backward recursion are used to initialize the backward state metrics of the next iteration;
Second-stage forward recursion and a-posteriori likelihood ratio computation: taking the forward state metrics α_{N/2} at time N/2 as the starting point, constructing the forward state metric matrices A_k and, from the last N/2 branch-metric factors P and Q, the forward recursion matrices C_k (k = N/2, N/2+1, ..., N-1); obtaining the (N/2+1)×8 forward state metrics for times N/2 to N in succession through the forward recursion, and computing, in succession, with the backward state metrics for times (N/2)+1 to N stored in the first stage and the last N/2 branch-metric factors, the a-posteriori likelihood ratio information of the last N/2 time instants, from N/2 to N-1; the forward state metrics α_N at time N obtained by the forward recursion are used to initialize the forward state metrics of the next iteration.
The bidirectional parallel computation of the first stage is the parallel recursion of half of the forward state metrics and half of the backward state metrics; the bidirectional parallel computation of the second stage is the parallel recursion of the remaining forward and backward state metrics together with the parallel computation of the a-posteriori likelihood ratio information. In the second stage, the a-posteriori likelihood ratio information of each time instant is computed immediately after the state metrics of that instant, so the a-posteriori likelihood ratio computation time is merged into the time of the forward/backward recursions, and the storage required for the state metrics is thereby halved. The splitting of the branch metrics and the bidirectional parallel structure therefore make DB-CTC decoding both faster and more memory-efficient.
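The saving can be made concrete with a small count. The sketch below assumes the conventional schedule of the background section, which stores the complete forward and backward metric sequences before the LLR pass; the counts are numbers of stored metrics, not bytes, and the example block length N = 480 is arbitrary.

```python
def state_metric_storage(N, num_states=8):
    """Stored state-metric counts: the conventional schedule keeps the full
    alpha_0..alpha_N and beta_0..beta_N sequences, while the two-stage
    bidirectional schedule keeps only the half of each sequence produced in
    stage I (stage-II metrics are consumed immediately)."""
    conventional = 2 * (N + 1) * num_states
    bidirectional = 2 * (N // 2 + 1) * num_states
    return conventional, bidirectional

def branch_metric_storage(N):
    """Per-block branch-metric storage: 16 distinct values per time instant
    without the split versus 8 factors (4 p + 4 q) with it."""
    return 16 * N, 8 * N

if __name__ == "__main__":
    print(state_metric_storage(480))    # (7696, 3856): roughly halved
    print(branch_metric_storage(480))   # (7680, 3840): exactly halved
```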
If this embodiment is extended to the Max-Log-MAP algorithm, the branch-metric splitting is identical and, in the bidirectional parallel computation, the bidirectional parallel structure and the computation steps are unchanged; the only change is that the max-log approximation is introduced into the forward and backward recursions and into the computation of the a-posteriori likelihood ratio information.
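The only kernel that changes is the pairwise combining operation used inside the recursions and the LLR computation, sketched below.

```python
import math

def max_star_log_map(a, b):
    """Log-MAP combining: exact Jacobian logarithm log(e^a + e^b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_max_log_map(a, b):
    """Max-Log-MAP combining: the correction term is dropped, so every
    exp-sum-log in the recursions and the LLR computation reduces to a max."""
    return max(a, b)
```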

Claims (3)

1. A bidirectional parallel decoding method of convolutional Turbo code, comprising:
inputting the data to be decoded into an iterative decoder formed by two component decoders connected in parallel;
when the preset maximum number of iterations has not been reached, converting the a-posteriori likelihood ratio information output by one component decoder after decoding into extrinsic information and, after interleaving or deinterleaving, feeding it to the other component decoder as a-priori likelihood ratio information;
when the preset maximum number of iterations is reached, deinterleaving and hard-deciding the a-posteriori likelihood ratio information output by the component decoder that decodes last, to obtain the decoding result;
characterized in that the decoding process of each component decoder comprises the following steps:
The branch-metric computation and forward/backward-recursion initialization step: using the input data to be decoded and the a-priori likelihood ratio information, computing and storing the first N/2 branch metrics, for times 0 to (N/2)-1, and the last N/2 branch metrics, for times N/2 to N-1, and initializing the forward state metrics at time 0 and the backward state metrics at time N; said N being the length of the double-binary bit-pair sequence;
The first-stage step: taking the initialized forward state metrics at time 0 as the starting point, performing the forward recursion with the first N/2 branch metrics to obtain and store the forward state metrics for times 0 to N/2 in succession; and simultaneously, taking the backward state metrics at time N as the starting point, performing the backward recursion with the last N/2 branch metrics to obtain and store the backward state metrics for times N down to N/2 in succession;
The second-stage step: taking the backward state metrics at time N/2 as the starting point, performing the backward recursion with the first N/2 branch metrics to obtain the backward state metrics for times N/2 down to 0 in succession, and computing the a-posteriori likelihood ratios in succession with the forward state metrics for times (N/2)-1 down to 0 stored in the first stage and the first N/2 branch metrics, thereby obtaining the a-posteriori likelihood ratio information of the first N/2 time instants, from (N/2)-1 down to 0; and simultaneously, taking the forward state metrics at time N/2 as the starting point, performing the forward recursion with the last N/2 branch metrics to obtain the forward state metrics for times N/2 to N in succession, and computing the a-posteriori likelihood ratios in succession with the backward state metrics for times (N/2)+1 to N stored in the first stage and the last N/2 branch metrics, thereby obtaining the a-posteriori likelihood ratio information of the last N/2 time instants, from N/2 to N-1.
2. The bidirectional parallel decoding method of convolutional Turbo code according to claim 1, characterized in that the branch metric is computed as:
γ_k(s′, s) = L_a(u_k) + (1/2)·v_{k,a}·r_{k,a} + (1/2)·v_{k,b}·r_{k,b} + (1/2)·v_{k,y}·r_{k,y} + (1/2)·v_{k,w}·r_{k,w}
where k denotes the current time instant, s′ is a possible state at the current time, s is a possible state at the next time instant, γ_k(s′, s) is the branch metric for the transition from state s′ at time k to state s at time k+1, u_k = (u_{k,a}, u_{k,b}) is the pair of information bits fed to the encoder at time k, L_a(u_k) is the a-priori likelihood ratio information at time k, v_k = (v_{k,a}, v_{k,b}, v_{k,y}, v_{k,w}) are the transmitted bits, and r_k = (r_{k,a}, r_{k,b}, r_{k,y}, r_{k,w}) denotes the data to be decoded received from the channel at time k.
3. The bidirectional parallel decoding method of convolutional Turbo code according to claim 1, characterized in that, in the branch-metric computation step, the branch metric is split into two factors p_k and q_k:
p_k = L_a(u_k) + (1/2)·v_{k,a}·r_{k,a} + (1/2)·v_{k,b}·r_{k,b}
q_k = (1/2)·v_{k,y}·r_{k,y} + (1/2)·v_{k,w}·r_{k,w}
CN 201110191727 2011-07-08 2011-07-08 Bidirectional and parallel decoding method of convolutional Turbo code Expired - Fee Related CN102340320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110191727 CN102340320B (en) 2011-07-08 2011-07-08 Bidirectional and parallel decoding method of convolutional Turbo code

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110191727 CN102340320B (en) 2011-07-08 2011-07-08 Bidirectional and parallel decoding method of convolutional Turbo code

Publications (2)

Publication Number Publication Date
CN102340320A true CN102340320A (en) 2012-02-01
CN102340320B CN102340320B (en) 2013-09-25

Family

ID=45515856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110191727 Expired - Fee Related CN102340320B (en) 2011-07-08 2011-07-08 Bidirectional and parallel decoding method of convolutional Turbo code

Country Status (1)

Country Link
CN (1) CN102340320B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101411071A (en) * 2006-01-27 2009-04-15 高通股份有限公司 MAP decoder with bidirectional sliding window architecture
CN101651458A (en) * 2008-08-13 2010-02-17 华为技术有限公司 Turbo parallel decoding method, device and system
CN101388674A (en) * 2008-10-23 2009-03-18 华为技术有限公司 Decoding method, decoder and Turbo code decoder

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张忠培, 周亮: "A matrix algorithm for Turbo code decoding" (一种Turbo码译码的矩阵算法), Journal of Electronics & Information Technology (电子与信息学报), vol. 24, no. 2, 28 February 2002 (2002-02-28), pages 266-271 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012146124A1 (en) * 2011-04-26 2012-11-01 中兴通讯股份有限公司 Turbo decoding method and device
US10574263B2 (en) 2013-07-01 2020-02-25 Huawei Technologies Co., Ltd. Method for implementing turbo equalization compensation, turbo equalizer and system
CN103688502A (en) * 2013-07-01 2014-03-26 华为技术有限公司 Method for achieving Turbo isostatic compensation, Turbo equalizer, and system
WO2015000100A1 (en) * 2013-07-01 2015-01-08 华为技术有限公司 Method for realizing turbo equalization compensation, and turbo equalizer and system
CN105739992A (en) * 2016-02-26 2016-07-06 珠海煌荣集成电路科技有限公司 GCC compiler based method and system used by software to control memory partitioning and mapping
CN105739992B (en) * 2016-02-26 2019-05-07 启龙科技有限公司 The method and system of software control memory partition and mapping based on GCC compiler
CN106856425A (en) * 2016-12-29 2017-06-16 中国科学院微电子研究所 For the turbo decoders and method of work of Long Term Evolution
CN112653474A (en) * 2020-12-22 2021-04-13 西南大学 Design method of compact LDPC-CC decoder for reducing average iteration number
CN112653474B (en) * 2020-12-22 2022-12-13 西南大学 Design method of compact LDPC-CC decoder for reducing average iteration number
CN114553370A (en) * 2022-01-19 2022-05-27 北京理工大学 Decoding method, decoder, electronic device, and storage medium
CN114553370B (en) * 2022-01-19 2024-03-15 北京理工大学 Decoding method, decoder, electronic device and storage medium
CN115085742A (en) * 2022-08-18 2022-09-20 杰创智能科技股份有限公司 Decoding method, decoding device, electronic equipment and storage medium
CN115085742B (en) * 2022-08-18 2022-11-15 杰创智能科技股份有限公司 Decoding method, decoding device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN102340320B (en) 2013-09-25

Similar Documents

Publication Publication Date Title
Yuan et al. Low-latency successive-cancellation list decoders for polar codes with multibit decision
CN102340320B (en) Bidirectional and parallel decoding method of convolutional Turbo code
CN111162797B (en) Encoding device and encoding method of rate compatible 5G LDPC code
Zhang et al. Reduced-latency SC polar decoder architectures
CN101777924B (en) Method and device for decoding Turbo codes
CN100425000C (en) Double-turbine structure low-density odd-even check code decoder
CN108847848B (en) BP decoding algorithm of polarization code based on information post-processing
CN101478314B (en) Reed-solomon coder-decoder and decoding method thereof
CN101388674B (en) Decoding method, decoder and Turbo code decoder
CN104092470A (en) Turbo code coding device and method
CN105634508A (en) Realization method of low complexity performance limit approximate Turbo decoder
EP2621092A1 (en) Method for Viterbi decoder implementation
CN111786683B (en) Low-complexity polar code multi-code block decoder
CN102611464B (en) Turbo decoder based on external information parallel update
CN102594369B (en) Quasi-cyclic low-density parity check code decoder based on FPGA (field-programmable gate array) and decoding method
CN103595424A (en) Component decoding method, decoder, Turbo decoding method and Turbo decoding device
Dong et al. Design and FPGA implementation of stochastic turbo decoder
Natarajan et al. Lossless parallel implementation of a turbo decoder on GPU
CN102571107A (en) System and method for decoding high-speed parallel Turbo codes in LTE (Long Term Evolution) system
CN100490333C (en) Maximum posterior probailistic decoding method and decoding device
Mandwale et al. Implementation of High Speed Viterbi Decoder using FPGA
CN201918982U (en) LDPC and high-speed decoding device of shorten codes of same
CN103701475A (en) Decoding method for Turbo codes with word length of eight bits in mobile communication system
Han et al. Simplified multi-bit SC list decoding for polar codes
Venkatesh et al. High speed and low complexity XOR-free technique based data encoder architecture

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130925

Termination date: 20160708