CN1328386A - Maximal posterior probability algorithm of parallel slide windows and its high-speed decoder of Turbo code - Google Patents


Info

Publication number
CN1328386A
CN1328386A CN01120194A
Authority
CN
China
Prior art keywords
dpram
algorithm
map
parallel
llr
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN01120194A
Other languages
Chinese (zh)
Other versions
CN1157883C (en)
Inventor
徐友云
李烜
宋文涛
罗汉文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Research Institute of Telecommunications Transmission of Ministry Information Industry
Original Assignee
Shanghai Jiaotong University
Research Institute of Telecommunications Transmission of Ministry Information Industry
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University and Research Institute of Telecommunications Transmission of Ministry of Information Industry
Priority to CNB011201940A priority Critical patent/CN1157883C/en
Publication of CN1328386A publication Critical patent/CN1328386A/en
Application granted granted Critical
Publication of CN1157883C publication Critical patent/CN1157883C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Abstract

The soft-in/soft-out (SISO) decoding operation is parallelized over multiple sliding windows, which greatly decreases the average per-bit decoding time, i.e., raises the real-time processing capability of the Turbo decoder, while keeping the memory requirement properly under control. The dual parallelism means that, on one hand, the different computations are performed in parallel and pipelined over the encoder's maximum number of states, while on the other hand two or more sliding windows perform log-MAP operations in parallel. As a result, a programmable logic device chip, such as an FPGA or CPLD, may be used for high-speed Turbo decoding.

Description

Maximal posterior probability algorithm of parallel slide windows and high-speed decoder of Turbo code thereof
The present invention relates to communication systems, and specifically to a high-speed Turbo code decoding algorithm for wideband mobile communication, together with a hardware implementation method and device based on this algorithm.
To realize the wideband mobile communication targets proposed by the IMT-2000 standard, the third-generation mobile communication standardization body, the 3rd Generation Partnership Project (3GPP), has proposed the Wideband Code Division Multiple Access (WCDMA) Radio Transmission Technology (RTT) scheme. Like other 3G standard proposals, this scheme mainly adopts two forward-channel error-correction coding modes: convolutional codes and Turbo codes; Turbo codes are used for business data at higher data rates (e.g., ≥ 32 kbps) and lower error rates (Pe < 10⁻⁶).
Turbo codes are a novel channel coding technique first proposed by the French scholar C. Berrou and colleagues at the 1993 International Communication Conference (ICC '93). The unique parallel concatenated code (PCC) encoder structure and an iterative decoding mechanism whose operating principle resembles a car turbocharger left a deep impression, hence the name "Turbo code". Its distinguishing feature is that over the additive white Gaussian noise (AWGN) channel, the bit error performance after iterative decoding (BER vs. Eb/No) comes very close to the Shannon limit (a gap of only about 0.35-0.7 dB). The authors have applied for international patents on this technique.
The parallel concatenated convolutional code (PCCC) proposed by C. Berrou et al., i.e., the Turbo code, is constructed as follows:
(1) On one hand, the information bits are fed directly to a recursive systematic convolutional (RSC) encoder for convolutional encoding, producing the corresponding parity bits; the code rate is typically 1/2 and the constraint length is not large (e.g., K = 3-5);
(2) The same information bits, after being scrambled by a random interleaver, are also fed to an RSC encoder for convolutional encoding, producing new parity bits; this encoder may be identical to the one in (1) or different;
(3) Taking one interleaver depth of the information sequence as a unit, each information bit is interspersed with the parity bits produced by the two RSC encoders, output alternately, forming the Turbo code stream;
(4) As needed, interleavers and RSC encoders can be added to form lower-rate Turbo code data, or parity bits can be punctured periodically to form a higher-rate Turbo code stream.
The Turbo decoder structure and decoding algorithm proposed by C. Berrou et al. have the following characteristics:
(1) The receiver demodulator output is first converted into per-bit log-likelihood ratios that reflect the confidence of the received code sequence, and these serve as the decoder input;
(2) The received code sequences are decoded separately according to whether their information bits are interleaved; the soft-in/soft-out (SISO) decoding algorithm adopted is a maximum-likelihood soft-decision algorithm that minimizes the a posteriori symbol (bit) error probability, and it produces soft decisions about the information bits;
(3) The output soft information minus the input soft information forms extrinsic information reflecting the change in confidence of the information bits; it serves as the a priori probability of the input bits of the other decoder stage (whose information bits are interleaved or not interleaved) and is superposed on the original soft information, and decoding iterates in this way.
The traditional SISO algorithm is usually called the BCJR algorithm (after the initials of its inventors) or the MAP (maximum a posteriori) algorithm. To reduce complexity, engineering practice generally adopts the log-MAP algorithm, a modified form of the MAP algorithm. The log-MAP algorithm can be expressed by the following recursions:

λ_k = E_{m=0}^{M-1} [α_k^m + δ_k^{0,m} + β_{k+1}^{f(0,m)}] − E_{m=0}^{M-1} [α_k^m + δ_k^{1,m} + β_{k+1}^{f(1,m)}]    (1)

α_k^m = E_{j=0}^{1} [α_{k-1}^{b(j,m)} + δ_{k-1}^{j,b(j,m)}]    (2)

β_k^m = E_{j=0}^{1} [β_{k+1}^{f(j,m)} + δ_k^{j,m}]    (3)

δ_k^{i,m} = K − (z_k + A·x_k)·i − A·y_k·c(i,m)    (4)

Here k is the sample index, m the current state, and M the number of encoder states. λ_k is the output log-likelihood ratio of the k-th input bit; α_k^m and β_k^m are the forward and backward state metrics of state m at time k; δ_k^{i,m} is the branch metric of state m at time k with input i. f(j,m) is the next state reached from current state m with input j ∈ {0,1}, and b(j,m) is the previous state from which current state m is reached with input j ∈ {0,1}. K is a normalization constant and A = 2/σ², where σ² is the AWGN channel noise variance; z_k is the a priori probability (AprP); x_k and y_k are respectively the k-th received information-bit sample and parity-bit sample; i and c(i,m) are the information bit and the corresponding coded bit in state m. The operator E is defined as: a E b = −ln(e^{−a} + e^{−b}).
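As a minimal numerical sketch of how recursion (3) combines the two incoming branches of each state with the operator E, the following Python fragment evaluates one backward step on a toy trellis. The trellis table, metric values, and names (`backward_step`, `next_state`) are illustrative stand-ins, not part of the patent:

```python
import math

def E(a, b):
    # the operator E of the text: a E b = -ln(e^-a + e^-b)
    return -math.log(math.exp(-a) + math.exp(-b))

def backward_step(beta_next, delta_k, next_state):
    """One step of recursion (3): beta_k[m] = E over j in {0,1} of
    beta_{k+1}[f(j,m)] + delta_k[(j,m)], where next_state[(j,m)]
    plays the role of the successor function f(j,m)."""
    M = len(beta_next)
    return [E(beta_next[next_state[(0, m)]] + delta_k[(0, m)],
              beta_next[next_state[(1, m)]] + delta_k[(1, m)])
            for m in range(M)]
```

For a 2-state toy trellis with all-zero metrics, each resulting beta_k[m] is E(0, 0) = −ln 2, directly from the definition of E.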
Because of interleaving/deinterleaving and the backward state-metric computation, a decoder based on the log-MAP Turbo decoding algorithm must receive a complete coded block before one round of iterative computation can be carried out; the interleaving delay and processing delay grow with the interleaver depth and the number of RSC code states, which affects the real-time transmission of business data and the maximum business data rate the decoder can support. For third-generation mobile communication systems, the data rates proposed by IMT-2000 reach 384 kbps to 2 Mbps; with the traditional decoding approach above, it is difficult to implement, on a single field-programmable gate array or complex programmable logic device (FPGA/CPLD) or DSP chip, real-time decoding of the Turbo codes defined in the various 3GPP proposals (such as WCDMA, CDMA2000, and TD-SCDMA).
The purpose of the present invention is precisely to address the above deficiencies of the prior art: to propose a parallel sliding-window log-MAP algorithm that greatly raises the soft-in/soft-out (SISO) decoding processing speed, and to provide a single-chip FPGA/CPLD hardware implementation solution based on this algorithm.
The present invention is characterized by the dual parallelism of the soft-in/soft-out (SISO) decoding computation and multiple sliding windows. This both significantly reduces the average per-bit decoding time, i.e., greatly raises the real-time processing capability of the Turbo decoder, and keeps the memory requirement at a desired order of magnitude, so that it does not keep growing with the coded block length. "Dual parallelism" means, on one hand, that branch-metric computation, forward/backward state-metric computation, bit log-likelihood-ratio computation, and the other decoding steps are carried out in parallel over the encoder's maximum number of states and in a pipelined manner; on the other hand, two or more sliding windows carry out log-MAP computations in parallel. In this way, by adjusting the number of sliding windows, a balance between decoding speed and memory requirement can be struck, making it convenient to implement high-speed Turbo decoding on a single programmable logic device (such as an FPGA or CPLD).
The parallel sliding-window log-MAP algorithm proposed by the invention, and the high-speed Turbo decoder implementation scheme based on this algorithm, are as follows.
The parallel sliding-window log-MAP algorithm (PSW-log-MAP) is described below.
Applying Turbo codes to mobile communication systems requires solving two main problems: reducing decoding complexity and reducing decoding delay. In recent years, several low-complexity, suboptimal a posteriori probability algorithms have appeared in succession, such as SOVA, log-MAP, and Max-log-MAP, but all of them must receive a full frame of data before the backward recursive metric computation can begin, so long data frames require large state storage and incur long decoding delay. S. Benedetto et al. proposed a continuous decoding algorithm with fixed decoding delay, called the sliding-window log-MAP algorithm (SW-log-MAP) [2]. It divides a coded frame into subframes of length l and decodes subframe by subframe; the decoding algorithm is still log-MAP, the difference being that each subframe processes p extra decoding data at its tail in order to initialize the backward state metrics. With this sliding-window structure, the required state storage and the decoding delay can be reduced to l/N of those of the traditional MAP algorithm. However, calculation and simulation show that a Turbo decoder that directly adopts the sliding-window log-MAP algorithm still has considerable difficulty reaching the 384 kbps-2 Mbps decoding rates specified by 3GPP. The present invention therefore improves the sliding-window log-MAP algorithm so that it can process two or more subframes of length l simultaneously; we call this the parallel sliding-window log-MAP algorithm (PSW-log-MAP). The analysis below shows that, without sacrificing decoding performance, as long as the sliding-window parameters l and p are chosen appropriately, the PSW-log-MAP algorithm trades a small cost in implementation scale and memory capacity for a substantial improvement in decoding speed, making it particularly suitable for implementing a high-speed Turbo decoder on a single FPGA/CPLD.
For simplicity, the principle of the parallel sliding-window log-MAP algorithm is explained here with two-way parallelism as an example, as shown in Fig. 1. A data frame of length N is divided into (T+1) subframes, where T = ⌊N/l⌋ (⌊a⌋ denotes the largest integer not exceeding a); the shaded regions in the figure represent the decoding data at the tail of each subframe used to initialize the backward state metrics. The algorithm workflow is as follows: (1) Set t = 1. (2) Backward state metric β computation:
a) If 2t ≤ T, set the sliding-window length w = l + p and compute the backward state metrics of SW(2t-1) and SW(2t) simultaneously, where:
For SW(2t-1): initialize β: β_{(2t-1)l+p}^m = 0 for all m; using equation (3), compute β_k for k from (2t-1)l+p-1 down to 2(t-1)l, while storing β_k for k from (2t-1)l-1 down to 2(t-1)l.
For SW(2t): initialize β: β_{2tl+p}^m = 0 for all m; using equation (3), compute β_k for k from 2tl+p-1 down to (2t-1)l, while storing β_k for k from 2tl down to (2t-1)l.
b) If 2t > T:
If T is odd, set the sliding-window length w = l + p and compute the backward state metrics of SW(T+1) and SW(T) simultaneously, where:
For SW(T): operate as for SW(2t-1) in the case 2t ≤ T;
For SW(T+1): initialize β: β_N^m = 0 for all m (assuming every frame ends with tail bits that force the encoder to the zero state); using equation (3), compute and store β_k for k from N down to (2t-1)l.
If T is even, compute only the backward state metrics of SW(T+1): initialize β: β_N^m = 0 for all m (same tail-bit assumption); using equation (3), compute and store β_k for k from N down to 2(t-1)l.
(3) Forward state metric α and log-likelihood ratio LLR computation:
a) If 2t ≤ T, set the sliding-window length w = l + p and compute the forward state metrics and LLRs of SW(2t-1) and SW(2t) simultaneously, where:
For SW(2t-1): initialize α: if t = 1, α_0 takes the known initial encoder state; if t ≠ 1, α_{2(t-1)l}^m has already been computed by the previous sliding window. Using equation (2), compute α_k for k from 2(t-1)l+1 to (2t-1)l, while using equation (1) to compute λ_k for k from 2(t-1)l to (2t-1)l-1.
For SW(2t): initialize α: α_{(2t-1)l-p}^m = 0 for all m; using equation (2), compute α_k for k from (2t-1)l-p+1 to 2tl, while using equation (1) to compute λ_k for k from (2t-1)l to 2tl-1.
Set t = t + 1 and return to step (2) to begin the computation of the next sliding windows.
b) If 2t > T:
If T is odd, compute the forward state metrics and LLRs of SW(T+1) and SW(T) simultaneously, operating as in the case 2t ≤ T, except that for SW(T+1), α_k is computed for k from (2t-1)l-p+1 to N while λ_k is computed for k from (2t-1)l to N. The algorithm then terminates.
If T is even, compute only the forward state metrics α_k of SW(T+1) for k from 2(t-1)l+1 to N, while computing λ_k for k from 2(t-1)l to N. The algorithm then terminates.
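To make the subframe bookkeeping above concrete, here is a small Python sketch (not from the patent; the names and the clipping behavior at the frame end are our assumptions) that partitions a frame into the sliding windows of Fig. 1, each with its p training samples for backward-metric initialization:

```python
def psw_windows(N: int, l: int, p: int):
    """Partition a frame of N samples into subframes of length l (the last one
    may be shorter), each extended by up to p 'training' samples past its end
    (the shaded regions of Fig. 1), used only to initialize the backward state
    metrics.  Returns (start, end, train_end) triples, clipped to the frame."""
    windows = []
    start = 0
    while start < N:
        end = min(start + l, N)          # subframe proper
        train_end = min(end + p, N)      # tail used for beta initialization
        windows.append((start, end, train_end))
        start = end
    return windows
```

Two-way parallelism then processes the windows in pairs: SW(2t-1) and SW(2t) run their backward sweeps simultaneously, then their forward/LLR sweeps.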
Beneficial effect of the present invention:
1. The average per-bit processing time is significantly reduced, i.e., the real-time processing capability of the Turbo decoder is markedly raised.
2. The memory requirement can be kept at a desired order of magnitude, so that it does not keep growing with the coded block length.
3. High-speed Turbo decoding is convenient to implement on a single programmable logic device (such as an FPGA or CPLD).
Description of drawings:
Fig. 1 is a schematic diagram of the PSW-log-MAP algorithm
Fig. 2 is the structure of the Turbo decoder implemented on a single FPGA
Fig. 3 is a schematic diagram of the parallel log-MAP decoder structure
Fig. 4 is the computation flow of the parallel log-MAP algorithm
Fig. 5 is the pipeline structure of the output log-likelihood computation
An embodiment of the single-chip FPGA implementation of the high-speed Turbo decoder is described below.
Here we present a development example of a WCDMA high-speed Turbo decoder implemented with the two-way parallel sliding-window log-MAP algorithm on a single Xilinx Virtex 1000E FPGA. With the two-parallel-sliding-window structure, the average processing time per bit is only one clock cycle; driven by a 32 MHz system clock and using 3 decoding iterations, the average maximum decoding throughput reaches 5.3 Mbps. If the parallelism of the PSW-log-MAP algorithm is increased (e.g., by using more sliding windows), the per-bit processing time can be expected to drop below one clock cycle, further raising the decoding speed.
Fig. 2 shows the single-chip FPGA implementation structure of the high-speed Turbo decoder using the two-way parallel sliding-window log-MAP algorithm. It consists mainly of an algorithm realization unit and an on-chip RAM unit. The algorithm realization unit comprises modules for I/O data-buffer read/write control, parallel path-metric accumulation, parallel state-metric computation, and parallel bit-LLR computation. The on-chip RAM unit consists of the L, D, P, Q data-buffer DPRAM arrays, the interleaver table memory (INLV-DPRAM), and the backward state-metric memory (SM-SPRAM). The shaded parts of the figure represent the modules belonging to the second parallel sliding window. Since the two sliding windows are parallel and use identical processing, the figure draws the signal flow and module composition of only one of them. In the use of on-chip RAM resources, each sliding window has its own independent backward state-metric memory, while the two windows share the same data-buffer DPRAM arrays and interleaver table memory. The FPGA implementation is described in detail below in terms of the main structural features of the foregoing algorithm. (1) Parallel log-MAP decoder structure
Fig. 3 shows an 8-state parallel log-MAP decoder structure. First comes the backward state-metric β computation: from the input branch metrics and the previously computed backward state metrics, the 8 current backward state-metric values are computed in parallel and latched in a 64-bit register for the next bit's backward computation; at the same time they are stored in an N_u × 64-bit on-chip state-metric memory for the later LLR computation. This loop continues until the backward metrics of all N_u bits of the sliding window have been computed. The decoder then begins the forward state-metric and bit log-likelihood-ratio computation.
Because a sliding-window algorithm is used, the initial condition of the forward state metric is unknown, so a training data sequence of length N_t is needed to establish the initial state metrics. The forward state metric α is also computed in parallel over 8 states; its computation is similar to that of the backward state metrics, except that the direction through the trellis is forward instead of backward, and the forward state-metric results are not stored. Whenever the path-metric accumulation module has computed the 8 pairs of forward path-metric accumulations deltaI0 and deltaI1 (I = 0, ..., 7) that converge at the 8 state nodes for information bits "0" and "1", the decoder reads the 8 backward state-metric values β of the corresponding instant from the on-chip state-metric memory and then invokes the bit-LLR module to compute the information-bit log-likelihood value LLR in parallel. (2) Computation flow and pipeline structure
Fig. 4 shows the output bit log-likelihood-ratio computation flow. The flow exploits the symmetric trellis structure of the convolutional code to decompose the complex bit-LLR computation into four steps of roughly equal computational complexity. The decomposition of Fig. 4 means that, in a hardware implementation, each step places roughly the same delay requirement on the device, and these decomposed steps map naturally onto successive beats of a hardware pipeline.
Fig. 5 shows the concrete hardware pipeline structure, a five-stage pipeline scheme. This pipeline organization is determined by the inherent characteristics of the log-MAP algorithm: since log-MAP is an iterative algorithm, the recursion speed of the forward and backward state metrics determines the final decoding speed of the whole algorithm. Accordingly, we decompose the complex output-likelihood computation to match the complexity of the state-metric computation, dividing it into four steps, which fixes the structure of the hardware pipeline. The five-stage pipeline scheme lets the decoder reach its maximum speed, since the speed of the log-MAP computation then depends only on the state-metric computation speed. For example, on given hardware, if the forward and backward state-metric computations for each information bit each complete within 20 ns, then with our pipeline scheme each information bit requires only 40 ns of log-MAP decoding.
(3) dual port RAM array structure
Making full use of the abundant block RAM (BlockRAM) resources provided by the Virtex-E series FPGA, all the memory space required by the Turbo decoder is implemented on-chip; at the same time, a distinctive RAM structure solves the data read/write bottleneck of the high-speed Turbo decoder. Specifically:
A. dual port RAM array structure:
The data buffer consists of a dual-port RAM (DPRAM) array containing four independent kinds of DPRAM storing four different metrics: L-DPRAM, which stores the extrinsic-information metric (Lin) output by each MAP stage; D-DPRAM, which stores the information-bit input metric (Din); P-DPRAM, which stores the parity-bit input metric (Pin) of the first decoder; and Q-DPRAM, which stores the parity-bit input metric (Qin) of the second decoder. Each dual-port RAM has two independent sets of data and address buses, allocated to the two parallel sliding windows respectively. This structure guarantees that the two parallel sliding windows can read all four kinds of I/O metrics within one clock cycle, breaking the data read/write bottleneck of the high-speed decoder.
B. two L-DPRAM structures:
Two L-DPRAMs dynamically store the likelihood values (LLRs) output by each MAP stage. At a given moment, the two parallel sliding-window log-MAP computations read the LLRs output by the previous MAP stage from the first L-DPRAM while depositing the LLRs computed in the current MAP stage into the second L-DPRAM; during the next MAP stage, the two parallel sliding windows read from the second L-DPRAM and write to the first, and so on alternately. With this structure, the backward state-metric and LLR computations can each complete in one clock cycle.
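A software analogue of this ping-pong arrangement can be sketched as follows (a simplified illustration with invented names; the real design uses two hardware DPRAMs with independent buses):

```python
class PingPongLLRBuffer:
    """Two banks play the roles of the two L-DPRAMs: one bank is read (the
    previous MAP stage's LLRs) while the other is written (the current stage's
    LLRs); the roles swap at every MAP stage boundary."""
    def __init__(self, size: int):
        self.banks = [[0.0] * size, [0.0] * size]
        self.read_bank = 0  # bank holding the previous stage's output

    def read(self, k: int) -> float:
        return self.banks[self.read_bank][k]

    def write(self, k: int, llr: float) -> None:
        self.banks[1 - self.read_bank][k] = llr

    def swap(self) -> None:
        # called between MAP stages, exchanging read and write roles
        self.read_bank = 1 - self.read_bank
```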
(4) parallel computation of algorithm
Besides the structural parallelism of the decoder's two sliding windows, parallel structures are also used extensively in carrying out the log-MAP computation itself, mainly:
A. branch metric parallel computation
Analyzing the trellis of the WCDMA Turbo encoder specified by 3GPP, it is easy to see that although there are as many as 16 transition branches between the 8 states of adjacent time instants, there are only 4 distinct branch-metric values. Therefore, for each group of received coded data, these 4 branch-metric values can all be computed simultaneously within one clock cycle and passed to the subsequent path-metric accumulation module for the state-metric update.
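The four distinct values come directly from equation (4): dropping the constant K (which cancels in the LLR difference), the branch metric depends only on the (information bit i, coded bit c) pair, of which there are four combinations. A hedged sketch, with variable names ours and the sign convention taken from equation (4) as printed:

```python
def branch_metrics(z_k: float, x_k: float, y_k: float, A: float = 1.0):
    """The 4 distinct branch-metric values per received symbol group,
    delta(i, c) = -(z_k + A*x_k)*i - A*y_k*c, indexed by information bit i
    and coded bit c.  All 16 trellis branches select one of these 4 values."""
    return {(i, c): -(z_k + A * x_k) * i - A * y_k * c
            for i in (0, 1) for c in (0, 1)}
```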
B. the path metric parallel computation that adds up
The path-metric accumulation and state-metric modules use an 8 × 8-bit, i.e., 64-bit, bus structure. After the 8 8-bit state-metric values of the previous instant and the 4 7-bit branch-metric values are latched in temporary registers, one level of selectors and one level of adders suffice to accumulate all 16 path-metric values. The log-domain state-metric update likewise uses a parallel structure, performing the difference, minimum, and exponent-sum operations on the 8 branch pairs of the 8 states simultaneously.
C. log-likelihood ratio (LLR) parallel computation
As can be seen from equation (1), the computation of the LLR contains many chained applications of the operator E (defined as a E b = −ln(e^{−a} + e^{−b})). A direct hardware realization of the operator E is very complex, so an approximate computation method is adopted here:
a E b = −ln(e^{−a} + e^{−b}) = min(a, b) − ln(1 + e^{−|a−b|})
The second, logarithmic term can be realized with a simple lookup table. When the number of E-operator operands grows, evaluating them pairwise in sequence still needs many clock cycles; computing one LLR, for example, needs at least 8 iterations. For this reason, we raise the parallelism of the E-operator computation: the 8 forward-plus-backward state-metric accumulations are paired up and the E operation is applied in pairs, that is:
aEbEcEdEeEfEgEh=(aEb)E(cEd)E(eEf)E(gEh)
In this way, only 3 iterations are needed to compute one LLR; further adopting a pipeline structure guarantees continuous LLR output at one value per clock cycle.
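The pairwise grouping is valid because E is associative and commutative (it is a log-domain sum). The following Python sketch (illustrative; names ours) contrasts the 3-level tree reduction with the min-plus-correction form of E given above:

```python
import math

def E(a: float, b: float) -> float:
    # a E b = -ln(e^-a + e^-b) = min(a, b) - ln(1 + e^-|a-b|);
    # hardware realizes the correction term with a small lookup table
    return min(a, b) - math.log1p(math.exp(-abs(a - b)))

def E_tree(values):
    """Tree reduction (aEb)E(cEd)E(eEf)E(gEh): 8 operands collapse in
    3 levels instead of 7 sequential E steps."""
    while len(values) > 1:
        values = [E(values[i], values[i + 1])
                  for i in range(0, len(values), 2)]
    return values[0]
```

For 8 equal operands v the result is v − ln 8, since E applied to n equal values v gives v − ln n.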

Claims (3)

1. A parallel sliding-window maximum a posteriori probability algorithm, characterized in that the flow proceeds according to the following steps:
(1) set t = 1;
(2) backward state metric β computation:
a) if 2t ≤ T, set the sliding-window length w = l + p and compute the backward state metrics of SW(2t-1) and SW(2t) simultaneously;
b) if 2t > T:
if T is odd, set the sliding-window length w = l + p and compute the backward state metrics of SW(T+1) and SW(T) simultaneously;
if T is even, compute only the backward state metrics of SW(T+1);
(3) forward state metric α and log-likelihood ratio LLR computation:
a) if 2t ≤ T, set the sliding-window length w = l + p and compute the forward state metrics and LLRs of SW(2t-1) and SW(2t) simultaneously;
b) if 2t > T:
if T is odd, compute the forward state metrics and LLRs of SW(T+1) and SW(T) simultaneously;
if T is even, compute only the forward state metrics α_k of SW(T+1) for k from 2(t-1)l+1 to N, while computing λ_k for k from 2(t-1)l to N.
2. A high-speed Turbo code decoder using the parallel sliding-window maximum a posteriori probability algorithm, characterized in that it consists of an algorithm realization unit and an on-chip RAM unit, wherein the algorithm realization unit comprises modules for I/O data-buffer read/write control, parallel path-metric accumulation, parallel state-metric computation, and parallel bit-LLR computation, and the on-chip RAM unit consists of the L, D, P, Q data-buffer DPRAM arrays, the interleaver table memory (INLV-DPRAM), and the backward state-metric memory (SM-SPRAM).
3. The high-speed Turbo code decoder using the parallel sliding-window maximum a posteriori probability algorithm according to claim 2, characterized in that:
1) said L, D, P, Q data-buffer DPRAM arrays adopt a dual-port RAM array structure comprising four independent kinds of DPRAM storing four different metrics: L-DPRAM, which stores the extrinsic-information metric (Lin) output by each MAP stage; D-DPRAM, which stores the information-bit input metric (Din); P-DPRAM, which stores the parity-bit input metric (Pin) of the first decoder; and Q-DPRAM, which stores the parity-bit input metric (Qin) of the second decoder; each dual-port RAM has two independent sets of data and address buses, allocated to the two parallel sliding windows respectively;
2) said L, D, P, Q data-buffer DPRAM arrays adopt a two-L-DPRAM structure, i.e., two L-DPRAMs dynamically store the likelihood values (LLRs) output by each MAP stage; at a given moment, the two parallel sliding-window log-MAP computations read the LLRs output by the previous MAP stage from the first L-DPRAM while depositing the LLRs computed in the current MAP stage into the second L-DPRAM; during the next MAP stage, the two parallel sliding windows read from the second L-DPRAM and write to the first, and so on alternately.
CNB011201940A 2001-07-11 2001-07-11 Maximal posterior probability algorithm of parallel slide windows and its high-speed decoder of Turbo code Expired - Fee Related CN1157883C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB011201940A CN1157883C (en) 2001-07-11 2001-07-11 Maximal posterior probability algorithm of parallel slide windows and its high-speed decoder of Turbo code


Publications (2)

Publication Number Publication Date
CN1328386A true CN1328386A (en) 2001-12-26
CN1157883C CN1157883C (en) 2004-07-14

Family

ID=4663971

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB011201940A Expired - Fee Related CN1157883C (en) 2001-07-11 2001-07-11 Maximal posterior probability algorithm of parallel slide windows and its high-speed decoder of Turbo code

Country Status (1)

Country Link
CN (1) CN1157883C (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100364301C (en) * 2003-03-17 2008-01-23 西南交通大学 Parallel Turbo coding-decoding method based on block processing for error control of digital communication
CN100542050C (en) * 2004-02-06 2009-09-16 中国科学院沈阳自动化研究所 A kind of method for designing that has adaptivity and high speed turbo decoder
CN1898874B (en) * 2003-12-22 2010-05-26 皇家飞利浦电子股份有限公司 Siso decoder with sub-block processing and sub-block based stopping criterion
CN102111162A (en) * 2009-12-28 2011-06-29 重庆重邮信科通信技术有限公司 Turbo component decoding method, component decoder, branch calculator and Turbo decoder
CN102396158A (en) * 2009-06-18 2012-03-28 中兴通讯股份有限公司 Method and apparatus for parallel turbo decoding in long term evolution system (lte)
CN101026439B (en) * 2007-02-07 2012-08-29 重庆重邮信科通信技术有限公司 Decoding method for increasing Turbo code decoding rate
CN103595424A (en) * 2012-08-15 2014-02-19 重庆重邮信科通信技术有限公司 Component decoding method, decoder, Turbo decoding method and Turbo decoding device
CN103916141A (en) * 2012-12-31 2014-07-09 华为技术有限公司 Turbo code decoding method and device
CN107798328A (en) * 2016-08-30 2018-03-13 合肥君正科技有限公司 A kind of destination object searching method and device
CN111211792A (en) * 2018-11-22 2020-05-29 北京松果电子有限公司 Turbo decoding method, device and system
CN112968709A (en) * 2016-05-31 2021-06-15 展讯通信(上海)有限公司 Turbo code decoding method and Turbo code decoder

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100364301C (en) * 2003-03-17 2008-01-23 西南交通大学 Parallel Turbo coding-decoding method based on block processing for error control of digital communication
CN1898874B (en) * 2003-12-22 2010-05-26 皇家飞利浦电子股份有限公司 Siso decoder with sub-block processing and sub-block based stopping criterion
CN100542050C (en) * 2004-02-06 2009-09-16 中国科学院沈阳自动化研究所 A kind of method for designing that has adaptivity and high speed turbo decoder
CN101026439B (en) * 2007-02-07 2012-08-29 重庆重邮信科通信技术有限公司 Decoding method for increasing Turbo code decoding rate
CN102396158A (en) * 2009-06-18 2012-03-28 中兴通讯股份有限公司 Method and apparatus for parallel turbo decoding in long term evolution system (lte)
CN102111162B (en) * 2009-12-28 2015-02-04 重庆重邮信科通信技术有限公司 Turbo component decoding method, component decoder, branch calculator and Turbo decoder
CN102111162A (en) * 2009-12-28 2011-06-29 重庆重邮信科通信技术有限公司 Turbo component decoding method, component decoder, branch calculator and Turbo decoder
CN103595424A (en) * 2012-08-15 2014-02-19 重庆重邮信科通信技术有限公司 Component decoding method, decoder, Turbo decoding method and Turbo decoding device
CN103595424B (en) * 2012-08-15 2017-02-08 重庆重邮信科通信技术有限公司 Component decoding method, decoder, Turbo decoding method and Turbo decoding device
CN103916141A (en) * 2012-12-31 2014-07-09 华为技术有限公司 Turbo code decoding method and device
CN103916141B (en) * 2012-12-31 2017-04-05 华为技术有限公司 Turbo code interpretation method and device
CN112968709A (en) * 2016-05-31 2021-06-15 展讯通信(上海)有限公司 Turbo code decoding method and Turbo code decoder
CN112968709B (en) * 2016-05-31 2022-08-19 展讯通信(上海)有限公司 Turbo code decoding method and Turbo code decoder
CN107798328A (en) * 2016-08-30 2018-03-13 合肥君正科技有限公司 A kind of destination object searching method and device
CN111211792A (en) * 2018-11-22 2020-05-29 北京松果电子有限公司 Turbo decoding method, device and system
CN111211792B (en) * 2018-11-22 2023-05-30 北京小米松果电子有限公司 Turbo decoding method, device and system

Also Published As

Publication number Publication date
CN1157883C (en) 2004-07-14

Similar Documents

Publication Publication Date Title
CN1178399C (en) Highly parallel MAP decoder
CN1168221C (en) Partitioned deinterleaver memory for MAP decoder
US7020827B2 (en) Cascade map decoder and method
CN1366739A (en) Method and apparatus for decoding turbo-encoded code sequence
CN1157883C (en) Maximal posterior probability algorithm of parallel slide windows and its high-speed decoder of Turbo code
CN102111162A (en) Turbo component decoding method, component decoder, branch calculator and Turbo decoder
CN101162908A (en) Dual-binary Turbo code encoding method and encoder based on DVB-RCS standard
CN1254121C (en) Method for decoding Tebo code
Lee et al. Design space exploration of the turbo decoding algorithm on GPUs
CN1140148C (en) Method for executing Tebo decoding in mobile communication system
CN101034951A (en) Implementation method for in-Turbo code interweaver
CN1211931C (en) Memory architecture for MAP decoder
CN103812510A (en) Decoding method and device
Bougard et al. A class of power efficient VLSI architectures for high speed turbo-decoding
CN1157854C (en) High-speed Turbo code decoder
CN115664429A (en) Dual-mode decoder suitable for LDPC and Turbo
CN1738229A (en) Woven convolutional code error detection and correction coder, encoder in TD-SCDMA system
Huang et al. A high speed turbo decoder implementation for CPU-based SDR system
CN1148006C (en) Method and decoder for decoding turbo code
CN1145266C (en) Turbo code decoding method and decoder
CN2506034Y (en) Turbo decoder
CN1133276C (en) Decoding method and decoder for high-speed parallel cascade codes
CN109831217A (en) A kind of Turbo code decoder, the component decoder for Turbo code and component interpretation method
CN103701475A (en) Decoding method for Turbo codes with word length of eight bits in mobile communication system
CN1434594A (en) Shortened viterbi decoding method and decoder thereof

Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C14 Grant of patent or utility model
GR01 Patent grant
C19 Lapse of patent right due to non-payment of the annual fee
CF01 Termination of patent right due to non-payment of annual fee