CN1157883C - Maximal posterior probability algorithm of parallel slide windows and its high-speed decoder of Turbo code - Google Patents
- Publication number: CN1157883C
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The present invention provides a method that makes the soft-input soft-output (SISO) decoding computation and multiple sliding windows doubly parallel, greatly reducing the average per-bit decoding time — that is, substantially increasing the real-time processing capability of a Turbo decoder — while keeping the memory requirement within an expected order of magnitude, so that it does not grow with the coding block length. "Dual parallelism" means that, on the one hand, decoding steps such as branch metric calculation, forward/backward state metric calculation and bit log-likelihood ratio calculation are completed in parallel across the maximum number of encoder states and implemented in a pipelined (pipe-line) manner; on the other hand, two or more sliding windows carry out the log-MAP computation in parallel, so that a balance between decoding speed and memory requirement can be struck by adjusting the number of sliding windows, making it practical to implement high-speed Turbo decoding on a single-chip programmable logic device (such as an FPGA or CPLD).
Description
(1) Technical field:
The present invention relates to communication systems, and specifically to a high-speed Turbo decoding algorithm for wideband mobile communication and to a hardware implementation method and device based on this algorithm.
(2) Background:
To meet the wideband mobile communication targets of the ITU-2000 standard, the third-generation mobile system standardization body, the 3rd Generation Partnership Project (3GPP), has proposed the Wideband Code Division Multiple Access (WCDMA) Radio Transmission Technology (RTT) scheme. Like the other 3G standard proposals, this scheme mainly adopts two forward-channel error-correction coding modes: convolutional codes and Turbo codes. Turbo codes are used for the higher data rates (e.g. ≥ 32 kbps) and the lower-error-rate services (Pe < 10^-6).
The Turbo code is a novel channel coding technique first proposed by the French scholar C. Berrou and colleagues at the 1993 International Communication Conference (ICC '93). Its unique parallel concatenated code (PCC) encoder structure and an iterative decoding mechanism reminiscent of a turbocharged engine — whence the name "Turbo code" — left a deep impression. Its distinguishing feature is that the bit-error performance (BER vs. Eb/No) after iterative decoding comes very close to the Shannon limit (a gap of only about 0.35-0.7 dB) on the additive white Gaussian noise (AWGN) channel. The inventors have applied for international patents on it.
The parallel concatenated convolutional code (PCCC) proposed by C. Berrou et al., i.e. the Turbo code, is constructed as follows:
(1) On the one hand, the information bits are fed directly to a recursive systematic convolutional (RSC) encoder for convolutional encoding, producing the corresponding check bits; the coding rate is generally 1/2 and the constraint length is not very large (e.g. k = 3-5);
(2) The same information bits, after being scrambled by a random interleaver, are also fed to a second RSC encoder, producing new check bits; this RSC encoder may be identical to the one in (1) or different;
(3) Taking one interleaver depth of the information sequence as a unit, each information bit is interspersed with the check bits produced by the two RSC encoders, output alternately to form the Turbo code stream;
(4) As required, interleavers and RSC encoders can be added to form Turbo code data of lower code rate, and check bits can be punctured periodically to form a Turbo code stream of higher code rate.
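The four construction steps above can be sketched in software. The following is a minimal illustration of ours, not the patent's own encoder: it assumes the common 8-state RSC component with octal generators (13, 15), i.e. feedback polynomial 1 + D^2 + D^3 and feedforward polynomial 1 + D + D^3, takes a caller-supplied interleaver permutation, and omits trellis termination.

```python
def rsc_parity(bits, taps_fb=(1, 2), taps_out=(0, 2)):
    """Parity stream of an 8-state recursive systematic convolutional
    encoder; shift register indexed newest-first.  Defaults give the
    common generators g0 = 1 + D^2 + D^3 (feedback), g1 = 1 + D + D^3."""
    r = [0, 0, 0]
    parity = []
    for u in bits:
        a = u ^ r[taps_fb[0]] ^ r[taps_fb[1]]    # feedback bit entering the register
        y = a ^ r[taps_out[0]] ^ r[taps_out[1]]  # parity output bit
        parity.append(y)
        r = [a, r[0], r[1]]                      # shift the register
    return parity

def pccc_encode(bits, perm):
    """Rate-1/3 parallel concatenation: systematic bit, parity of the
    plain sequence, parity of the interleaved sequence, per input bit."""
    p1 = rsc_parity(bits)
    p2 = rsc_parity([bits[i] for i in perm])
    return [b for trio in zip(bits, p1, p2) for b in trio]
```

Step (4) of the text — puncturing to raise the code rate — would simply drop parity bits from this multiplexed stream periodically.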
The Turbo decoder structure and decoding algorithm proposed by C. Berrou et al. have the following characteristics:
(1) The receiver demodulation output is first converted into bit log-likelihood ratios that reflect the confidence of the received code sequence, and these serve as the decoder input;
(2) The received code sequences are decoded separately according to whether the information bits are interleaved; the soft-input soft-output (SISO) decoding algorithm used is a maximum-likelihood soft-decision algorithm that minimizes the a posteriori symbol (bit) error probability, and it produces soft decisions about the information bits;
(3) The output soft information minus the input soft information forms extrinsic information reflecting the change in information-bit confidence, which serves as the a priori probability of the input bits for the other component decoder (operating on the interleaved or non-interleaved information bits), superimposed on the original soft information; the decoding iterates in this way.
The traditional SISO algorithm is usually called the BCJR algorithm (after its inventors' initials) or the MAP (Maximum A Posteriori) algorithm. To reduce complexity, engineering practice generally adopts the log-MAP algorithm, a modified form of the MAP algorithm. The log-MAP algorithm can be expressed with the following recursive formulas:
Here k is the information sample index, m is the current state, and M is the number of encoder states. λ_k is the output log-likelihood ratio of the k-th input bit; α_k^{i,m} and β_k^m are the forward and backward state metrics of state m at time k; δ_k^{i,m} is the branch metric of state m at time k with input i. f(j, m) is the next state reached from the current state m with input j ∈ {0, 1}, and b(j, m) is the previous state from which the current state m can be reached with input j ∈ {0, 1}. K is a normalization constant and A = 2/σ², where σ² is the AWGN channel noise variance; z_k is the a priori probability (AprP). x_k and y_k are the k-th received information-bit sample and check-bit sample, respectively, and i, c^{i,m} are the information bit and the corresponding coded bit in state m. The operator E is defined as: a E b = −ln(e^(−a) + e^(−b)).
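As a concrete illustration of the operator E just defined — our own sketch, not part of the patent — the naive form and the numerically safer rewriting (the minimum of the operands plus a small correction term, used again later in the text) give the same value:

```python
import math

def E_naive(a, b):
    # a E b = -ln(e^-a + e^-b), exactly as defined in the text
    return -math.log(math.exp(-a) + math.exp(-b))

def E_stable(a, b):
    # equivalent form min(a, b) - ln(1 + e^-|a-b|); the exponent is
    # never positive, so large operands cannot overflow
    return min(a, b) - math.log1p(math.exp(-abs(a - b)))
```

For very large operands the naive form underflows both exponentials, while the stable form remains well defined; hardware implementations use the stable form with the logarithm term in a lookup table.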
Because of the interleaving/deinterleaving and the backward state metric calculation, a Turbo decoder based on the log-MAP algorithm must receive a complete coded block before it can perform one round of iterative computation. The interleaving delay and processing delay grow with the interleaver depth and the number of RSC code states, which affects the real-time behavior of the business data transmission and limits the maximum data rate the decoder can support. For third-generation mobile systems, the data rates proposed by ITU-2000 reach 384 kbps to 2 Mbps; with the traditional coding/decoding approach described above, it is difficult to decode in real time the Turbo codes defined in the various 3GPP proposals (such as WCDMA, CDMA2000 and TD-SCDMA) using a single field-programmable gate array or complex programmable logic device (FPGA/CPLD) or DSP chip.
(3) Summary of the invention:
The purpose of the present invention is precisely to address the above deficiencies of the prior art by proposing a parallel sliding-window log-MAP algorithm that can greatly increase the soft-input soft-output (SISO) decoding speed, and by providing a single-chip FPGA/CPLD hardware solution implementing this algorithm.
The parallel sliding-window log-MAP algorithm (PSW-log-MAP) is described below.
Applying Turbo codes in mobile communication systems requires solving two main problems: reducing decoding complexity and reducing decoding delay. In recent years a number of low-complexity suboptimal a posteriori probability algorithms have appeared, such as SOVA, log-MAP and Max-log-MAP, but all of them can begin the backward recursive metric calculation only after a whole frame of data has been received, so for long data frames they need large state storage and incur long decoding delay. S. Benedetto et al. proposed a continuous decoding algorithm with fixed decoding delay, called the sliding-window log-MAP algorithm (SW-log-MAP) [2]. It divides a frame of coded data into subframes of length l and decodes subframe by subframe; the decoding algorithm is still log-MAP, the difference being that p extra data at the tail of each subframe are processed in order to initialize the backward state metrics. With this sliding-window structure, the required state storage and the decoding delay are reduced to l/N of those of the traditional MAP algorithm. Calculation and simulation show, however, that a Turbo decoder directly adopting the sliding-window log-MAP algorithm still has considerable difficulty reaching the 384 kbps-2 Mbps decoding rates specified by 3GPP. The present invention therefore improves the sliding-window log-MAP algorithm so that it can process two or more subframes of length l simultaneously; we call this the parallel sliding-window log-MAP algorithm (PSW-log-MAP). The analysis below shows that, without sacrificing decoding performance, by suitably choosing the sliding-window parameters l and p, the parallel sliding-window log-MAP algorithm can trade a modest implementation scale and memory cost for a significant increase in decoding speed, making it particularly suitable for implementing a high-speed Turbo decoder on a single FPGA/CPLD.
For simplicity, the principle of the parallel sliding-window log-MAP algorithm is described here taking two-way parallelism as an example, as shown in Fig. 1. A data frame of length N is divided into (T+1) subframes, where T = ⌊N/l⌋ (⌊a⌋ denotes the largest integer not greater than a); the shaded area in the figure represents the decoding data used to initialize the backward state metrics of each subframe. The algorithm workflow is as follows:
(1) Set t = 1.
(2) Backward state metric calculation:
a) If 2t ≤ T, set the sliding-window length w = l + p and compute the backward state metrics of SW(2t−1) and SW(2t) simultaneously, where:
For SW(2t−1): initialize β; using equation (3), compute β_k for k from (2t−1)l+p−1 down to 2(t−1)l, storing β_k for k from (2t−1)l−1 down to 2(t−1)l.
For SW(2t): initialize β; using equation (3), compute β_k for k from 2tl+p−1 down to (2t−1)l, storing β_k for k from 2tl down to (2t−1)l.
b) If 2t > T:
If T is odd, set the sliding-window length w = l + p and compute the backward state metrics of SW(T) and SW(T+1) simultaneously, where:
For SW(T): operate as for SW(2t−1) in the case 2t ≤ T;
For SW(T+1): initialize β (assuming every frame contains the ZF tail bits); using equation (3), compute and store β_k for k from N down to (2t−1)l.
If T is even, compute only the backward state metrics of SW(T+1): initialize β (assuming every frame contains the ZF tail bits); using equation (3), compute and store β_k for k from N down to 2(t−1)l.
(3) Forward state metric α and log-likelihood ratio (LLR) calculation, using the state-metric parallel computation unit and the LLR parallel computation unit:
a) If 2t ≤ T, set the sliding-window length w = l + p and compute the forward state metrics and LLRs of SW(2t−1) and SW(2t) simultaneously, where:
For SW(2t−1): initialize α (if t = 1, use the known initial state; if t > 1, α_{2(t−1)l}^m is carried over from the previous sliding window). Using equation (3), compute α_k for k from 2(t−1)l+1 to (2t−1)l, simultaneously using equation (4) to compute λ_k for k from 2(t−1)l to (2t−1)l−1.
For SW(2t): initialize α; using equation (3), compute α_k for k from (2t−1)l−p+1 to 2tl, simultaneously using equation (4) to compute λ_k for k from (2t−1)l to 2tl−1. Then set t = t + 1 and return to step (2) to begin the calculation of the next pair of sliding windows.
b) If 2t > T:
If T is odd, compute the forward state metrics and LLRs of SW(T) and SW(T+1) simultaneously, operating as in the case 2t ≤ T, except that for SW(T+1), α_k is computed for k from (2t−1)l−p+1 to N while λ_k is computed simultaneously for k from (2t−1)l to N; this completes the calculation for the frame, and the next frame then starts from step (1).
If T is even, compute only the forward state metric α_k of SW(T+1) for k from 2(t−1)l+1 to N, simultaneously computing λ_k for k from 2(t−1)l to N, which completes the algorithm.
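The bookkeeping of the workflow above can be sketched as follows. This is a hypothetical helper of ours, not from the patent: it splits a frame of N samples into the T+1 subframes (T = ⌊N/l⌋), records the p warm-up samples each backward recursion borrows from beyond its subframe, and pairs the subframes for two-way parallel processing.

```python
def plan_windows(N, l, p):
    """Return (subframes, warmups, pairs) for a frame of N samples.

    subframes: half-open ranges [start, end) of length l (last may be shorter)
    warmups:   the extra range after each subframe used to initialize beta
    pairs:     indices of subframes processed concurrently, two at a time
    """
    T = N // l
    subframes = [(t * l, min((t + 1) * l, N)) for t in range(T + 1) if t * l < N]
    warmups = [(end, min(end + p, N)) for _, end in subframes]
    pairs = [tuple(range(i, min(i + 2, len(subframes))))
             for i in range(0, len(subframes), 2)]
    return subframes, warmups, pairs
```

For N = 22, l = 4, p = 2 this yields six subframes, the last covering samples 20-21, grouped into three concurrent pairs; each pair corresponds to one pass through steps (2) and (3) above.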
The high-speed Turbo decoder realizing the parallel sliding-window maximum a posteriori probability algorithm is characterized as follows: it consists of an algorithm realization unit and an on-chip RAM unit. The algorithm realization unit comprises the decoding I/O data-buffer read/write control, path-metric accumulation parallel computation, state-metric parallel computation and bit-LLR parallel computation modules; the on-chip RAM unit comprises the L, D, P, Q data-buffer DPRAM array, the interleaving table memory and the backward state metric memory. The data flow among the parts is as follows:
The decoding I/O data-buffer read/write control module accepts the soft information D_in and stores it in the data-buffer DPRAM array. The branch-metric parallel computation module inside the read/write control module simultaneously computes the branch metric values corresponding to the information bits covered by two or more sliding windows and passes them to the path-metric accumulation module; this module computes the forward state paths and passes them to the state-metric parallel computation module, which performs the backward recursion and writes the results to the backward state metric memory while simultaneously computing the forward state metrics. At the same time, the path-metric accumulation module feeds the LLR parallel computation module, which computes the bit log-likelihood ratios. After several iterations, the data are output through the decoding I/O data-buffer read/write control module as hard-decision decoding output. Whenever the interleaved component decoder is to be entered, the interleaving addresses are read from the interleaving table memory unit.
The characteristics of the present invention are: the soft-input soft-output (SISO) decoding computation and the multiple sliding windows are doubly parallel, which both greatly reduces the average per-bit decoding time — that is, substantially increases the real-time processing capability of the Turbo decoder — and keeps the memory requirement controlled within an expected order of magnitude, so that it does not keep expanding as the coding block length grows. "Doubly parallel" means that, on the one hand, decoding steps such as branch metric calculation, forward/backward state metric calculation and bit log-likelihood ratio calculation are completed in parallel across the maximum number of encoder states and implemented in a pipelined (pipe-line) manner; on the other hand, two or more sliding windows carry out the log-MAP computation in parallel, so that by adjusting the number of sliding windows a balance between decoding speed and memory requirement can be reached, making it convenient to realize high-speed Turbo decoding with a single-chip programmable logic device (FPGA/CPLD).
Beneficial effects of the present invention:
1. The per-bit processing time is greatly reduced, i.e. the real-time processing capability of the Turbo decoder is substantially increased.
2. The memory requirement can be controlled within an expected order of magnitude, so that it does not keep expanding as the coding block length grows.
3. It is convenient to realize high-speed Turbo decoding with single-chip programmable logic devices (such as FPGAs and CPLDs).
(4) Description of the drawings:
Fig. 1 is a schematic diagram of the PSW-log-MAP algorithm;
Fig. 2 is the architecture of the Turbo decoder implemented on a single FPGA;
Fig. 3 is a schematic diagram of the parallel log-MAP decoder architecture;
Fig. 4 is the computation flow of the parallel log-MAP algorithm;
Fig. 5 is the pipeline structure of the output log-likelihood calculation.
(5) Embodiments:
Fig. 2 shows the decoder implementation structure realized on a single FPGA; it can adopt the Virtex1000E from Xilinx, with two parallel sliding-window log-MAP algorithm engines.
(1) Parallel log-MAP decoder structure
An 8-state parallel log-MAP decoder structure is shown in Fig. 3. First comes the calculation of the backward state metric β: from the input branch metrics and the previously computed backward state metrics, the 8 current backward state metric values are computed in parallel and held temporarily in a 64-bit register for the calculation at the next information bit, while they are also stored into an on-chip state metric memory of size N_n × 64 for the later LLR calculation. This loop continues until the whole sliding window of N bits is finished. The decoder then begins the calculation of the forward state metrics and the bit log-likelihood ratios.
Since the sliding-window algorithm is adopted, the initial condition of the forward state metric is unknown, so a training data sequence of a certain length is needed to determine the initial values of the state metrics. The forward state metric α is also computed for all 8 states in parallel, and its computation is similar to that of the backward state metrics, except that the direction of traversal through the trellis is forward rather than backward, and the forward state metric results are not stored. Whenever the path-metric accumulation module computes the 8 pairs of forward path-metric accumulated values delta_i0 and delta_i1 (i = 0, ..., 7) that meet at the 8 state nodes for information bits "0" and "1", the decoder reads the 8 backward state metric values β of the corresponding instant from the on-chip state metric memory and then calls the bit-LLR module to compute the information-bit log-likelihood value (LLR) of that instant in parallel.
(2) Calculation flow and pipeline structure
Fig. 4 shows the calculation flow of the output bit log-likelihood ratio. The design exploits the symmetric trellis structure of the convolutional code to decompose the complex bit log-likelihood ratio calculation into four steps of roughly equal computational complexity. With the decomposition of Fig. 4, when the decoder is implemented in hardware, each step places roughly the same delay requirement on the device, and the decomposed steps map conveniently onto a series of stages in a hardware pipeline design.
Fig. 5 is the concrete realization of the hardware pipeline structure: a five-stage pipeline scheme. This pipeline scheme is dictated by the characteristics of the log-MAP algorithm: since log-MAP is an iterative algorithm, the recursion speed of the forward and backward state metrics determines the final decoding speed of the whole log-MAP algorithm. Accordingly, we decompose the complex calculation of the output likelihood value according to the complexity of the state metric calculation, finally splitting it into four computation steps, which determines the structure of the hardware pipeline. The five-stage pipeline scheme lets the decoder reach a higher speed, since the operating speed of the log-MAP algorithm depends on the computation speed of the state metrics. For example, on a given piece of hardware, if the forward and the backward state metric calculation of each information bit can each be completed in 20 ns, then with our pipeline design the log-MAP decoding of each information bit needs only 40 ns.
(3) Dual-port RAM array structure
Making full use of the powerful BlockRAM capability provided by the VirtexE series FPGA, all the memory space required by the Turbo decoder is implemented on-chip, and a distinctive RAM structure is adopted, thereby thoroughly solving the data read/write bottleneck of the high-speed Turbo decoder.
A. Dual-port RAM array structure:
The data buffer consists of four independent dual-port RAMs (DPRAMs), each storing one kind of I/O data: L-DPRAM dynamically stores the LLRs output by each MAP pass, D-DPRAM stores the received systematic data, P-DPRAM stores the received parity data of the first encoder, and Q-DPRAM stores the received parity data of the second encoder. This multi-DPRAM structure guarantees that the two parallel sliding windows can read all four kinds of I/O data within one clock cycle, thoroughly removing that bottleneck.
B. Double L-RAM structure:
We use two L-DPRAMs to dynamically store the LLRs output by each MAP pass. At a given moment, the two parallel sliding windows read the LLRs output by the previous MAP pass from the first L-DPRAM while depositing the LLRs just computed by the current MAP pass into the second L-DPRAM; during the next MAP pass, the second L-DPRAM is read and the first is written, and so on alternately. This structure makes it possible to finish the forward state metric and LLR calculation in one clock cycle on average.
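The alternating read/write scheme can be modeled as a ping-pong (double) buffer. The sketch below is our own illustration of the idea, with invented names: one buffer is written during the current MAP pass while the other, holding the previous pass's LLRs, is read; swap() exchanges the roles between passes.

```python
class PingPongBuffer:
    """Two equal buffers: a 'write side' for the current MAP pass and a
    'read side' holding the previous pass's results."""
    def __init__(self, size):
        self._bufs = [[0.0] * size, [0.0] * size]
        self._wr = 0            # index of the write-side buffer

    def write(self, i, value):  # store an LLR computed during this pass
        self._bufs[self._wr][i] = value

    def read(self, i):          # fetch an LLR from the previous pass
        return self._bufs[self._wr ^ 1][i]

    def swap(self):             # call once between MAP passes
        self._wr ^= 1
```

On hardware, the fact that a read and a write can land on different RAMs in the same clock cycle is exactly what removes the single-memory access bottleneck.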
(4) Parallel computation in the algorithm
Besides the two sliding windows processed in parallel in the decoder structure, parallel structures are also used extensively in the log-MAP computation itself, chiefly:
A. Parallel computation of the branch metrics:
From an analysis of the trellis of the 3GPP Turbo encoder it is easy to see that, although there are 16 transition branches between the 8 states of adjacent instants, there are only 4 distinct branch metric values. Therefore, for each group of received coded data, these 4 branch metric values can be computed simultaneously within one clock cycle and passed to the subsequent path-metric accumulation module for the state metric update.
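The observation that only four distinct branch metric values exist can be illustrated as follows. The metric form here is a common textbook simplification of ours under an assumed BPSK mapping, for illustration only — the patent's exact expression (with the constant A and the a priori term z_k of its equations) may differ in detail:

```python
def branch_metrics(x, y, A=1.0, z=0.0):
    """All four distinct branch metric values for one received pair
    (x = systematic sample, y = parity sample), indexed by the
    hypothesized (info bit i, coded bit c).  Assumed BPSK mapping:
    bit 0 -> +1, bit 1 -> -1; z is the a priori term for bit 1."""
    bm = {}
    for i in (0, 1):
        for c in (0, 1):
            u, v = 1 - 2 * i, 1 - 2 * c
            bm[(i, c)] = 0.5 * A * (x * u + y * v) + (z if i else 0.0)
    return bm
```

Each of the 16 trellis branches then merely looks up one of these four values, so one clock cycle of four parallel evaluations serves the whole state update.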
B. Parallel path-metric accumulation:
The path-metric accumulation and state-metric module adopts an 8 × 8, i.e. 64-bit, bus structure. After the 8 8-bit state metric values of the previous instant and the 4 7-bit branch metric values are latched in temporary registers, one level of selectors and one level of adders suffice to accumulate the 16 path metric values. The log-domain state metric update likewise uses a parallel structure, performing the difference, minimum and exponential-sum operations on the 8 pairs of branches of the 8 states simultaneously.
C. Parallel computation of the log-likelihood ratio (LLR)
From equation (1) it can be seen that the LLR calculation contains long chains of the operator E (defined as a E b = −ln(e^(−a) + e^(−b))), whose direct hardware realization is very complex. Here an approximate computation of E is adopted:
a E b = −ln(e^(−a) + e^(−b)) = min(a, b) − ln(1 + e^(−|a−b|))
where the second (logarithm) term can be realized with a simple table lookup. As the number of E operands grows, evaluating them two at a time in sequence still needs many clock cycles: computing one LLR would take at least 8 iterations. For this reason the parallelism of the E operator is increased, i.e. the accumulated forward and backward state metrics of the 8 states are combined with E operations in pairs:
a E b E c E d E e E f E g E h = (a E b) E (c E d) E (e E f) E (g E h)
so that only 3 iteration levels are needed to compute one LLR; a further pipeline structure guarantees continuous LLR output at one per clock cycle.
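The pairwise regrouping of the E chain can be checked numerically. The sketch below is ours, assuming a power-of-two operand count as in the 8-state case: because E is associative and commutative (every grouping computes −ln Σ e^(−x_i)), the three-level tree gives the same result as seven sequential applications.

```python
import math

def E(a, b):
    # a E b = min(a, b) - ln(1 + e^-|a-b|), equal to -ln(e^-a + e^-b)
    return min(a, b) - math.log1p(math.exp(-abs(a - b)))

def tree_E(values):
    """Reduce a list with E pairwise, level by level.

    Returns (result, number_of_levels); assumes len(values) is a power of 2."""
    levels = 0
    while len(values) > 1:
        values = [E(values[i], values[i + 1]) for i in range(0, len(values), 2)]
        levels += 1
    return values[0], levels
```

With 8 operands the tree needs 3 levels instead of 7 sequential steps, matching the claim in the text.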
Claims (2)
1. A high-speed Turbo decoder realizing the parallel sliding-window maximum a posteriori probability algorithm, characterized in that it consists of an algorithm realization unit and an on-chip RAM unit, wherein the algorithm realization unit comprises the decoding I/O data-buffer read/write control, path-metric accumulation parallel computation, state-metric parallel computation and bit-LLR parallel computation modules; the on-chip RAM unit comprises the L, D, P, Q data-buffer DPRAM array, the interleaving table memory and the backward state metric memory; and wherein the data flow among the parts is as follows:
the decoding I/O data-buffer read/write control module accepts the soft information D_in and stores it in the data-buffer DPRAM array; the branch-metric parallel computation module inside the read/write control module simultaneously computes the branch metric values corresponding to the information bits covered by two or more sliding windows and passes them to the path-metric accumulation module; this module computes the forward state paths and passes them to the state-metric parallel computation module, which performs the backward recursion and writes the results to the backward state metric memory while simultaneously computing the forward state metrics; at the same time, the path-metric accumulation module feeds the LLR parallel computation module, which computes the bit log-likelihood ratios; after several iterations, the data are output through the decoding I/O data-buffer read/write control module as hard-decision decoding output, and whenever the interleaved component decoder is to be entered, the interleaving addresses are read from the interleaving table memory unit.
2. The high-speed Turbo decoder realizing the parallel sliding-window maximum a posteriori probability algorithm according to claim 1, characterized in that:
1) the L, D, P, Q data-buffer DPRAM array adopts a dual-port RAM array structure comprising four independent DPRAMs storing four different kinds of metrics: L-DPRAM stores the extrinsic information metrics output by each MAP pass, D-DPRAM stores the information-bit input metrics, P-DPRAM stores the parity-bit input metrics of the first decoder, and Q-DPRAM stores the parity-bit input metrics of the second decoder; each dual-port RAM has two independent sets of data and address buses, allocated respectively to the two parallel sliding windows;
2) the L, D, P, Q data-buffer DPRAM array adopts a double L-DPRAM structure, i.e. two L-DPRAMs are used to dynamically store the likelihood values (LLR) output by each MAP pass.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB011201940A CN1157883C (en) | 2001-07-11 | 2001-07-11 | Maximal posterior probability algorithm of parallel slide windows and its high-speed decoder of Turbo code |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1328386A CN1328386A (en) | 2001-12-26 |
CN1157883C true CN1157883C (en) | 2004-07-14 |
Family
ID=4663971