CN101777924A - Method and device for decoding Turbo codes - Google Patents

Method and device for decoding Turbo codes Download PDF

Info

Publication number
CN101777924A
Authority
CN
China
Prior art keywords
decoder
decoding unit
code segment
sub-decoder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201010003408A
Other languages
Chinese (zh)
Other versions
CN101777924B (en)
Inventor
赵训威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Haiyun Technology Co. Ltd.
Original Assignee
New Postcom Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New Postcom Equipment Co Ltd filed Critical New Postcom Equipment Co Ltd
Priority to CN201010003408.6A priority Critical patent/CN101777924B/en
Publication of CN101777924A publication Critical patent/CN101777924A/en
Priority to PCT/CN2010/001528 priority patent/WO2011082509A1/en
Application granted granted Critical
Publication of CN101777924B publication Critical patent/CN101777924B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65 Purpose and implementation aspects
    • H03M13/6522 Intended application, e.g. transmission or communication standard
    • H03M13/6525 3GPP LTE including E-UTRA
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/27 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes using interleaving techniques
    • H03M13/2739 Permutation polynomial interleaver, e.g. quadratic permutation polynomial [QPP] interleaver and quadratic congruence interleaver
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/27 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes using interleaving techniques
    • H03M13/2771 Internal interleaver for turbo codes
    • H03M13/2775 Contention or collision free turbo code internal interleaver
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2957 Turbo codes and decoding
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37 Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39 Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3972 Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using sliding window techniques or parallel windows

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Error Detection And Correction (AREA)

Abstract

The invention discloses a method and a device for decoding Turbo codes. The method comprises the following steps: for each code block in an input sequence, dividing the code block into M code segments, where M is a natural number greater than 1; inputting the M code segments into M decoding units respectively; decoding the respective input code segments in parallel in the M decoding units according to the Log-MAP algorithm; and outputting the decoded code segments. During decoding, the decoding units corresponding to adjacent code segments exchange the forward path metric parameters and backward path metric parameters at the boundaries of their respective code segments. The technical scheme of the invention can increase the Turbo decoding speed.

Description

Turbo code decoding method and device
Technical field
The present invention relates to communication system technology, and in particular to a Turbo code decoding method and device.
Background art
Because Turbo codes provide outstanding error-correcting capability close to the Shannon limit, the Long Term Evolution (LTE, Long Term Evolution) system adopts Turbo codes as the channel coding scheme for high-speed data services.
Fig. 1 is a schematic diagram of the Turbo encoder in the existing LTE system. As shown in Fig. 1, LTE adopts the conventional Turbo encoder composed of two parallel component encoders and an interleaver. The two component encoders are component encoder 1 and component encoder 2. Each component encoder has the same structure as in the WCDMA system, containing three registers, so the number of states is 8. The interleaver is a quadratic permutation polynomial (QPP, Quadratic Permutation Polynomial) interleaver. Suppose the bit stream c_k input to the interleaver has length K, i.e. the bit stream is c_0, c_1, ..., c_{K-1}, and the bit stream output after interleaving is c'_0, c'_1, ..., c'_{K-1}; they satisfy the relation c'_i = c_{∏(i)}, where the mapping between element indices before and after interleaving satisfies the quadratic polynomial ∏(i) = (f_1·i + f_2·i²) mod K, i = 0, 1, ..., K-1. The existing standard gives, in tabular form, the values of the quadratic polynomial parameters f_1 and f_2 for the various interleaver lengths. The code rate of the encoder shown in Fig. 1 is 1/3, and it outputs three components (x_k, z_k, z'_k), where x_k is the systematic data fed to the channel and z_k and z'_k are parity sequences; because the Turbo code appends a total of 12 tail bits, the length of each component stream is D = K + 4.
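As an illustration of the QPP interleaving rule above, the short Python sketch below computes the index mapping ∏(i) = (f_1·i + f_2·i²) mod K and applies it to a stream. The parameters shown (K = 40, f_1 = 3, f_2 = 10) are used purely as an example; in practice the values for each block size are taken from the table in the LTE standard.

```python
def qpp_interleave_indices(K, f1, f2):
    """Return the QPP permutation Pi(i) = (f1*i + f2*i^2) mod K for i = 0..K-1."""
    return [(f1 * i + f2 * i * i) % K for i in range(K)]

def interleave(stream, f1, f2):
    """Apply the QPP interleaver: output c'[i] = c[Pi(i)]."""
    K = len(stream)
    pi = qpp_interleave_indices(K, f1, f2)
    return [stream[pi[i]] for i in range(K)]

def deinterleave(stream_interleaved, f1, f2):
    """Invert the permutation: recover c from c'."""
    K = len(stream_interleaved)
    pi = qpp_interleave_indices(K, f1, f2)
    out = [0] * K
    for i in range(K):
        out[pi[i]] = stream_interleaved[i]
    return out

if __name__ == "__main__":
    K, f1, f2 = 40, 3, 10          # example parameter set, for illustration only
    c = list(range(K))             # dummy stream whose values equal their indices
    c_prime = interleave(c, f1, f2)
    assert deinterleave(c_prime, f1, f2) == c
    print(c_prime[:8])             # first few interleaved positions: Pi(0), Pi(1), ...
```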
Turbo code decoding uses the soft-input soft-output (SISO) maximum a posteriori probability (MAP, Maximum A Posteriori) algorithm. Given the channel observation sequence, this algorithm computes the a posteriori probability of each state transition, message bit and coded symbol of the Markov process; once all the possible a posteriori probabilities of these quantities have been computed, the value with the maximum a posteriori probability can be obtained by hard decision as the estimate. The MAP algorithm is the optimal algorithm for realizing Turbo iterative decoding.
The log-domain maximum a posteriori probability (Log-MAP) algorithm is the log-domain realization of the MAP algorithm. The calculation steps of the Log-MAP algorithm are as follows:
(a) Starting from k = 0, compute the branch metric D_k^{i,m} according to formula (1):

$$D_k^{i,m} = \ln \gamma_k^{i,m} = \ln p(d_k = i) + \frac{2}{\sigma^2} x_k\, i + \frac{2}{\sigma^2} y_k\, p^{i,m} \qquad (1)$$

where γ is called the branch metric parameter, k is the time index, m is the state index, σ is a constant, x_k is the channel observation sequence, y_k is the parity sequence, and p is the a priori information of D_k^{i,m}; the value of p is taken as 0 at initialization and thereafter as the extrinsic information obtained in the previous iteration.
(b) At k = 0, initialize the forward path metric A; then, using D_k^{i,m}, compute and store the forward path metric A_k^m from k = 0 to k = N-1 according to formula (2):

$$A_k^m = \ln \alpha_k^m = \ln\left(\sum_{j=0}^{1} \alpha_{k-1}^{\,b(j,m)} \cdot \gamma_k^{\,j,b(j,m)}\right) = \max_j{}^{*}\left(A_{k-1}^{\,b(j,m)} + D_k^{\,j,b(j,m)}\right) \qquad (2)$$

Here, α is called the forward path metric parameter.
(c) At k = N-1, initialize the backward path metric B; then, using D_k^{i,m}, compute and store the backward path metric B_k^m from k = N-2 down to k = 0 according to formula (3):

$$B_k^m = \ln \beta_k^m = \ln\left(\sum_{j=0}^{1} \beta_{k+1}^{\,f(j,m)} \cdot \gamma_{k+1}^{\,j,m}\right) = \max_j{}^{*}\left(B_{k+1}^{\,f(j,m)} + D_{k+1}^{\,j,m}\right) \qquad (3)$$

Here, β is called the backward path metric parameter.
(d) Compute the log-likelihood ratio LLR of the information bits from k = 0 to k = N-1 according to formula (4):

$$L(d_k \mid Y_1^N) = \ln\left(\frac{\sum_m \alpha_{k-1}^{m}\, \gamma_k^{1,m}\, \beta_k^{\,f(1,m)}}{\sum_m \alpha_{k-1}^{m}\, \gamma_k^{0,m}\, \beta_k^{\,f(0,m)}}\right) = \max_m{}^{*}\left(A_{k-1}^{m} + D_k^{1,m} + B_k^{\,f(1,m)}\right) - \max_m{}^{*}\left(A_{k-1}^{m} + D_k^{0,m} + B_k^{\,f(0,m)}\right) \qquad (4)$$
Compute the extrinsic information L_e from the LLR:

$$L_e(d_k) = L(d_k \mid Y_1^N) - \left[L_a(d_k) + L_c\, x_k\right] \qquad (5)$$
(e) Use the extrinsic information as the a priori information of D_k^{i,m} in the next iteration, and repeat the above process until the maximum number of iterations It is reached; then make the corresponding hard decisions on the LLR of the last iteration and output the result.
Here, max*(x, y) = ln(e^x + e^y) = max(x, y) + ln(1 + e^{-|x-y|}), which comprises a maximization operation and a correction function f(x) = ln(1 + e^{-x}); the function f(x) can be implemented with a lookup table.
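As a concrete illustration of the max* operation described above, the following Python sketch implements the Jacobian logarithm with a small lookup table for the correction term f(x) = ln(1 + e^{-x}); the table size and quantization step are illustrative assumptions, not values taken from the patent.

```python
import math

# Illustrative lookup table for the correction function f(x) = ln(1 + e^(-x)).
# 8 entries with a step of 0.5 is a common hardware-style choice; the exact
# table size and step are assumptions of this sketch.
_STEP = 0.5
_TABLE = [math.log(1.0 + math.exp(-i * _STEP)) for i in range(8)]

def correction(delta):
    """Approximate f(delta) = ln(1 + e^(-delta)) for delta >= 0 via the table."""
    idx = int(delta / _STEP)
    return _TABLE[idx] if idx < len(_TABLE) else 0.0

def max_star(x, y):
    """max*(x, y) = max(x, y) + ln(1 + e^(-|x - y|)), the core Log-MAP operation."""
    return max(x, y) + correction(abs(x - y))

if __name__ == "__main__":
    exact = math.log(math.exp(1.2) + math.exp(0.7))
    approx = max_star(1.2, 0.7)
    print(exact, approx)   # the two values agree to within the table resolution
```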
The basic structure of a Turbo decoder realized on the basis of the above Log-MAP algorithm is shown in Fig. 2.
Fig. 2 is the basic block diagram of an existing Turbo decoder. As shown in Fig. 2, the soft-input soft-output (SISO) decoder realizing the Turbo decoder is formed by cascading two component decoders, component decoder 1 and component decoder 2; the interleaver is the same QPP interleaver used in the Turbo encoder of Fig. 1. The inputs of component decoder 1 are: the log-likelihood ratio y_k^s of the channel observation sequence, the log-likelihood ratio y_k^{1p} of the parity sequence output by component encoder 1 in Fig. 1, and the a priori information L_{a1}(d_k) extracted from the output of component decoder 2. The output of component decoder 1 is the log-likelihood ratio L_{e1}(x). Subtracting from L_{e1}(x) the channel log-likelihood ratio y_k^s and the a priori information L_{a1}(d_k) input to component decoder 1 gives the extrinsic information L_{1e}(d_k) output by component decoder 1. Interleaving L_{1e}(d_k) gives L_{a2}(d_k). The inputs of component decoder 2 are: the interleaved log-likelihood ratio of the channel observation sequence, the log-likelihood ratio y_k^{2p} of the parity sequence output by component encoder 2 in Fig. 1, and the a priori information L_{a2}(d_k) extracted from the output of component decoder 1. The output of component decoder 2 is the log-likelihood ratio L_{e2}(x). Subtracting from L_{e2}(x) the interleaved channel log-likelihood ratio and the a priori information L_{a2}(d_k) input to component decoder 2 gives the extrinsic information L_{2e}(d_k) output by component decoder 2. Deinterleaving L_{2e}(d_k) gives the a priori information fed to component decoder 1 in the next iteration. In this way, after several iterations, the extrinsic information produced by component decoder 1 and component decoder 2 stabilizes, and the a posteriori probability ratio gradually approaches the maximum-likelihood decoding of the whole code.
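To make the iterative exchange of Fig. 2 concrete, the Python sketch below shows the control flow of one set of Turbo iterations between two component decoders: each SISO stage consumes a priori information, produces an LLR, and passes extrinsic information on through the (de)interleaver. The function component_decode stands in for the Log-MAP computation of formulas (1)-(5) and is an assumption of this sketch, not code from the patent.

```python
def turbo_iterations(ys, y1p, y2p, pi, component_decode, num_iters):
    """Skeleton of the Fig. 2 iteration loop.

    ys, y1p, y2p : channel LLRs of the systematic and the two parity streams
    pi           : interleaver index mapping (c'[i] = c[pi[i]])
    component_decode(sys_llr, par_llr, la) -> llr : SISO Log-MAP stage (placeholder)
    """
    K = len(ys)
    la1 = [0.0] * K                                  # a priori for decoder 1, 0 on first pass
    ys_int = [ys[pi[i]] for i in range(K)]           # interleaved systematic LLRs
    for _ in range(num_iters):
        # --- component decoder 1 (natural order) ---
        llr1 = component_decode(ys, y1p, la1)
        le1 = [llr1[k] - ys[k] - la1[k] for k in range(K)]      # extrinsic of decoder 1
        la2 = [le1[pi[i]] for i in range(K)]                    # interleave -> a priori for decoder 2
        # --- component decoder 2 (interleaved order) ---
        llr2 = component_decode(ys_int, y2p, la2)
        le2 = [llr2[i] - ys_int[i] - la2[i] for i in range(K)]  # extrinsic of decoder 2
        la1 = [0.0] * K
        for i in range(K):                                      # deinterleave -> a priori for decoder 1
            la1[pi[i]] = le2[i]
    # hard decision on the final LLRs (deinterleaved back to natural order)
    llr_final = [0.0] * K
    for i in range(K):
        llr_final[pi[i]] = llr2[i]
    return [1 if l >= 0 else 0 for l in llr_final]
```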
Among existing communication systems, the Universal Mobile Telecommunications System (UMTS, Universal Mobile Telecommunication System) adopts Turbo codes as the channel coding scheme for high-speed data services, and the required data rate is about 2 Mbit/s; that is, the decoding speed of existing Turbo decoders is about 2 Mbit/s.
However, the design targets of the LTE system require peak rates of 50 Mbit/s in the uplink and 100 Mbit/s in the downlink, which means that the Turbo decoder in the LTE system must provide a decoding output rate greater than 100 Mbit/s.
Therefore, the decoding speed of existing Turbo decoders needs to be improved considerably.
Summary of the invention
The present invention provides a Turbo code decoding method that can increase the Turbo code decoding speed.
The present invention also provides a Turbo code decoding device that increases the Turbo code decoding speed.
To achieve the above objects, the technical scheme of the present invention is realized as follows:
The present invention discloses a Turbo code decoding method, comprising:
for each code block in an input sequence, dividing the code block into M code segments, M being a natural number greater than 1; inputting the M code segments into M decoding units respectively; decoding the respective input code segments in parallel in the M decoding units according to the log-domain maximum a posteriori probability (Log-MAP) algorithm; and outputting the decoded code segments;
wherein, denoting the two decoding units corresponding to any two adjacent code segments as a first decoding unit and a second decoding unit, during decoding the first decoding unit transfers the forward path metric parameter of the ending boundary point of its own code segment to the second decoding unit, so that the second decoding unit can compute the forward path metric parameter of the starting boundary point of its own code segment; and the second decoding unit transfers the backward path metric parameter of the starting boundary point of its own code segment to the first decoding unit, so that the first decoding unit can compute the backward path metric parameter of the ending boundary point of its own code segment.
The present invention also discloses a Turbo code decoding device, comprising M decoding units, M being a natural number greater than 1;
each code block in the input sequence is divided into M code segments which are input to the M decoding units respectively;
each decoding unit is configured to receive an input code segment, decode the input code segment according to the log-domain maximum a posteriori probability (Log-MAP) algorithm, and output the decoded code segment;
wherein, denoting the two decoding units corresponding to any two adjacent code segments as a first decoding unit and a second decoding unit, during decoding the first decoding unit transfers the forward path metric parameter of the ending boundary point of its own code segment to the second decoding unit, so that the second decoding unit can compute the forward path metric parameter of the starting boundary point of its own code segment; and the second decoding unit transfers the backward path metric parameter of the starting boundary point of its own code segment to the first decoding unit, so that the first decoding unit can compute the backward path metric parameter of the ending boundary point of its own code segment.
As can be seen from the above technical scheme, in the present invention each code block of the input sequence is divided into a plurality of code segments that are input to a plurality of decoding units respectively, and the decoding units decode their respective input code segments in parallel according to the Log-MAP algorithm, while during decoding the decoding units corresponding to adjacent code segments exchange the forward path metric parameters and backward path metric parameters at the boundaries of their corresponding code segments. Because a plurality of decoding units decode in parallel, the decoding speed is greatly increased, and the more decoding units there are, the faster the decoding.
Brief description of the drawings
Fig. 1 is a schematic diagram of the Turbo encoder in the existing LTE system;
Fig. 2 is the basic block diagram of an existing Turbo decoder;
Fig. 3 is a structural block diagram of the Turbo code decoding device in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the transfer of boundary condition values between adjacent decoding units of the Turbo decoder in an embodiment of the present invention;
Fig. 5 is a structural diagram of a decoding device comprising multi-stage pipelined sub-decoders in an embodiment of the present invention;
Fig. 6 is a partial internal structure diagram of the SISO decoder in an embodiment of the present invention;
Fig. 7 is the overall timing diagram of the Turbo decoding device in an embodiment of the present invention.
Detailed description of the embodiments
Because the existing Turbo code decoding device contains only one decoding unit with the basic structure shown in Fig. 2, and this single decoding unit decodes every code block of the input sequence (i.e. the sequence to be decoded), the decoding speed is limited by the decoding efficiency of this single unit; the decoding speed is therefore low and cannot meet the high-speed data service requirements of systems such as LTE.
Based on this, the core idea of the present invention is: a plurality of decoding units apply the Log-MAP algorithm to decode different code segments of each code block of the input sequence in parallel, and, as required by the Log-MAP algorithm, adjacent decoding units transfer during decoding the forward path metric parameters and backward path metric parameters at the boundaries of their respective code segments.
This parallel decoding scheme with a plurality of decoding units multiplies the Turbo decoding speed: the more decoding units operate in parallel, the faster the decoding. In practice, the number of decoding units can therefore be set according to the actual decoding speed requirement.
Fig. 3 is a structural block diagram of the Turbo code decoding device in an embodiment of the present invention. As shown in Fig. 3, the Turbo code decoding device in this embodiment comprises: an input buffer, an output buffer, an interleaving/deinterleaving memory and M parallel decoding units, where M is any natural number greater than 1, for example 4, 8 or 16.
In Fig. 3, the input buffer and output buffer perform serial-to-parallel and parallel-to-serial conversion, and the input buffer also implements a ping-pong operation to increase throughput. The ping-pong operation here means that the input buffer selectively delivers the input data to different decoding units: for example, the first code segment of a code block is sent to decoding unit 1, the second code segment of the code block is sent to decoding unit 2, and so on. The interleaving/deinterleaving memory stores the data of the interleaving or deinterleaving operations performed by each decoding unit during Turbo decoding. The M decoding units decode the input code segments (an input code segment comprises the channel observation sequence and the parity sequences) according to the Log-MAP algorithm, and output the decoded code segments to the output buffer. Each decoding unit here performs the complete decoding function of the basic structure shown in Fig. 2, i.e. each decoding unit contains two cascaded component decoders and the corresponding interleaver and deinterleaver.
From formulas (2) and (3) of the Log-MAP algorithm introduced in the background, the calculation of the forward path metric parameter α at a given point depends on the forward path metric parameter of the preceding point, and the calculation of the backward path metric parameter β at a given point depends on the backward path metric parameter of the following point. Therefore, as required by the Log-MAP algorithm, in the decoding device shown in Fig. 3 the decoding units corresponding to adjacent code segments transfer during decoding the forward path metric parameters and backward path metric parameters at the boundaries of their respective code segments. Specifically, denoting the two decoding units corresponding to any two adjacent code segments as a first decoding unit and a second decoding unit, during decoding the first decoding unit transfers the forward path metric parameter of the ending boundary point of its own code segment to the second decoding unit, so that the second decoding unit can compute the forward path metric parameter of the starting boundary point of its own code segment; and the second decoding unit transfers the backward path metric parameter of the starting boundary point of its own code segment to the first decoding unit, so that the first decoding unit can compute the backward path metric parameter of the ending boundary point of its own code segment.
For example, when an input sequence of 80000 points needs Turbo decoding, the 80000 points are first divided into a plurality of code blocks (the way the input sequence is divided into code blocks is the same as in the prior art and is not restricted in the present invention; for example, the division can be made according to the size of the input buffer). Suppose it is divided into 10 code blocks, i.e. 8000 points per code block. Each code block is then divided into M code segments; suppose M equals 8, preferably 8 code segments of equal length, i.e. 1000 points per segment. The 8 code segments are input to 8 decoding units for parallel decoding: decoding unit 1 decodes points 1 to 1000 of the code block, decoding unit 2 decodes points 1001 to 2000, ..., and decoding unit 8 decodes points 7001 to 8000. As mentioned above, from formulas (2) and (3) of the Log-MAP algorithm, the forward path metric parameter α of a point depends on the forward path metric parameter of the preceding point, and the backward path metric parameter β of a point depends on the backward path metric parameter of the following point. Decoding unit 1 therefore needs to pass the forward path metric parameter of point 1000 to decoding unit 2 so that decoding unit 2 can compute the forward path metric parameter of point 1001; correspondingly, decoding unit 2 needs to pass the backward path metric parameter of point 1001 to decoding unit 1 so that decoding unit 1 can compute the backward path metric parameter of point 1000. Likewise, the other adjacent decoding units, such as decoding unit 2 and decoding unit 3, or decoding unit 3 and decoding unit 4, also transfer during decoding the forward path metric parameters and backward path metric parameters at the boundaries of their corresponding code segments.
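The Python sketch below illustrates the segmentation and boundary exchange just described for one pass: a code block is split into M equal code segments, each "decoding unit" runs its forward and backward recursions over its own segment, and the α value at each segment's ending boundary (respectively the β value at each starting boundary) is handed to the neighbouring unit for use in the next pass. The recursion kernels forward_step and backward_step are placeholders standing in for formulas (2) and (3) and are assumptions of this sketch.

```python
def split_into_segments(block, M):
    """Divide a code block into M code segments of equal length."""
    seg_len = len(block) // M
    return [block[m * seg_len:(m + 1) * seg_len] for m in range(M)]

def parallel_pass(segments, alpha_in, beta_in, forward_step, backward_step):
    """One parallel pass over all segments.

    alpha_in[m] : forward metrics at the starting boundary of segment m
                  (received from unit m-1's previous pass)
    beta_in[m]  : backward metrics at the ending boundary of segment m
                  (received from unit m+1's previous pass)
    Returns the boundary metrics to be passed to the neighbours for the next pass.
    """
    M = len(segments)
    alpha_out, beta_out = [None] * M, [None] * M
    for m in range(M):                      # conceptually executed in parallel by M units
        # forward recursion over segment m, starting from the received boundary value
        a = alpha_in[m]
        for sym in segments[m]:
            a = forward_step(a, sym)        # formula (2), placeholder
        alpha_out[m] = a                    # ending-boundary alpha -> passed to unit m+1
        # backward recursion over segment m, starting from the received boundary value
        b = beta_in[m]
        for sym in reversed(segments[m]):
            b = backward_step(b, sym)       # formula (3), placeholder
        beta_out[m] = b                     # starting-boundary beta -> passed to unit m-1
    # exchange: unit m+1 receives unit m's alpha, unit m-1 receives unit m's beta
    next_alpha_in = [alpha_in[0]] + alpha_out[:-1]
    next_beta_in = beta_out[1:] + [beta_in[-1]]
    return next_alpha_in, next_beta_in
```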
Fig. 4 is a schematic diagram of the transfer of boundary condition values between adjacent decoding units of the Turbo decoder in an embodiment of the present invention. The boundary condition values here are the forward path metric parameter α values or backward path metric parameter β values at the boundary points of the code segment corresponding to each decoding unit. As shown in Fig. 4, the corresponding component decoders of adjacent decoding units exchange the α or β values of the boundary points of their corresponding code segments: for example, for adjacent decoding units 1 and 2, their respective component decoders 1 exchange the α or β values of the boundary points of their corresponding code segments, and their respective component decoders 2 do likewise.
In the present invention, the number M of decoding units in the Turbo code decoding device can be chosen according to the actual situation; for example, in the LTE system it is determined by the actual LTE decoding rate requirement. Dividing each code block into a plurality of code segments and decoding them in parallel with a plurality of decoding units therefore greatly increases the throughput of the Turbo code decoding device and reduces the decoding delay. According to the Turbo code interleaver design criterion in the LTE system, it can be guaranteed that no memory access conflicts occur between the decoding units during interleaving and deinterleaving, which ensures the reliability of the above parallel structure design.
To further increase the decoding speed of the Turbo code decoding device, in one embodiment of the present invention each decoding unit of the device is designed as a structure of multi-stage pipelined sub-decoders, i.e. each decoding unit comprises a plurality of sub-decoders cascaded in order. These sub-decoders jointly complete the It iterations (It is the total number of iterations required to decode an input code segment according to the Log-MAP algorithm), and each sub-decoder completes It/R of these iterations, where It and It/R are natural numbers. For example, with a total of 12 iterations, if each decoding unit comprises a two-stage sub-decoder, the first-stage sub-decoder performs the first 6 iterations of a code segment and the second-stage sub-decoder performs the last 6 iterations of that segment. Once the first-stage sub-decoder has finished the first 6 iterations of the corresponding code segment of one code block, the second-stage sub-decoder proceeds with the 7th iteration of that segment while the first-stage sub-decoder can already start the 1st iteration of the corresponding segment of the next code block; the decoding rate is thus effectively doubled. As another example, still with 12 total iterations, if each decoding unit comprises three stages of sub-decoders, the first stage performs the first 4 iterations of a segment, the second stage the middle 4 iterations, and the third stage the last 4 iterations; in this case the decoding rate is effectively tripled. By analogy, the more cascaded sub-decoders (i.e. pipeline stages) a decoding unit contains, the faster the decoding.
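The toy Python sketch below illustrates how an R-stage pipeline spreads the It iterations of successive code blocks over time: stage r performs its It/R iterations of block n while stage r-1 already works on block n+1, so in steady state one block completes every It/R iteration periods instead of every It. The schedule printout is purely illustrative.

```python
def pipeline_schedule(num_blocks, It, R):
    """Print which (block, iteration range) each pipeline stage handles in each time slot.

    One time slot = the time a stage needs for its It/R iterations of one block.
    """
    per_stage = It // R
    for t in range(num_blocks + R - 1):             # slots until the last block drains
        busy = []
        for r in range(R):                          # stage r works on block t - r in slot t
            n = t - r
            if 0 <= n < num_blocks:
                first = r * per_stage + 1
                busy.append(f"stage {r + 1}: block {n + 1}, iterations {first}-{first + per_stage - 1}")
        print(f"slot {t + 1}: " + "; ".join(busy))

if __name__ == "__main__":
    pipeline_schedule(num_blocks=3, It=12, R=2)     # the two-stage example from the text
```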
Fig. 5 is a structural diagram of a decoding device comprising multi-stage pipelined sub-decoders in an embodiment of the present invention. As shown in Fig. 5, this embodiment is described with M decoding units and R pipeline stages of sub-decoders, where R is any natural number greater than 1: all the first-stage sub-decoders of the M decoding units form the first pipeline stage, all the second-stage sub-decoders of the M decoding units form the second pipeline stage, ..., and all the R-th-stage sub-decoders of the M decoding units form the R-th pipeline stage.
Fig. 5 shows the internal structure of the first and second pipeline stages; the structure of each subsequent pipeline stage (from the third to the R-th, when R is greater than or equal to 3) is identical to that of the second pipeline stage and is therefore omitted.
In Fig. 5, MUX denotes a data selector, SISO denotes a soft-input soft-output SISO decoder, RAM denotes an extrinsic information memory, and Le denotes extrinsic information. MUX11, SISO11, RAM11, MUX21, SISO21, RAM21 and the corresponding MUX, SISO and RAM devices of the subsequent pipeline stages constitute decoding unit 1, where MUX11, SISO11 and RAM11 constitute the first-stage sub-decoder of decoding unit 1, MUX21, SISO21 and RAM21 constitute the second-stage sub-decoder of decoding unit 1, and so on. Similarly, MUX12, SISO12, RAM12, MUX22, SISO22, RAM22 and the corresponding MUX, SISO and RAM devices of the subsequent pipeline stages constitute decoding unit 2, where MUX12, SISO12 and RAM12 constitute the first-stage sub-decoder of decoding unit 2, MUX22, SISO22 and RAM22 constitute the second-stage sub-decoder of decoding unit 2, and so on. The remaining decoding units are formed in the same way. Within each sub-decoder, the SISO decoder reads and writes the corresponding RAM through the data switching bus.
In Fig. 5, each data selector MUX delivers the initial extrinsic information to the corresponding SISO decoder for the first iteration, and delivers the extrinsic information obtained from the corresponding RAM to the SISO decoder for the subsequent iterations. The MUX of a first-stage sub-decoder uses 0 as the initial extrinsic information, while the MUX of every other sub-decoder uses, as its initial extrinsic information, the extrinsic information obtained in the last iteration of the preceding-stage sub-decoder. Each SISO decoder performs It/R iterations using the data output by its MUX and the input code segment, and sends the extrinsic information obtained in each iteration to the corresponding RAM. Each RAM stores the extrinsic information produced by each iteration of the SISO decoder and provides it to the corresponding MUX, so that the MUX can send it to the SISO decoder for the next iteration.
In Fig. 5, denoting the two decoding units corresponding to any two adjacent code segments as a first decoding unit and a second decoding unit, during decoding: each sub-decoder of the first decoding unit transfers the forward path metric parameter of the ending boundary point of its own code segment to the corresponding sub-decoder of the second decoding unit, so that the corresponding sub-decoder of the second decoding unit can compute the forward path metric parameter of the starting boundary point of its own code segment; and each sub-decoder of the second decoding unit transfers the backward path metric parameter of the starting boundary point of its own code segment to the corresponding sub-decoder of the first decoding unit, so that the corresponding sub-decoder of the first decoding unit can compute the backward path metric parameter of the ending boundary point of its own code segment. For example, SISO11 in sub-decoder 1 of decoding unit 1 and SISO12 in sub-decoder 1 of decoding unit 2, SISO12 in sub-decoder 1 of decoding unit 2 and SISO13 in sub-decoder 1 of decoding unit 3, SISO21 in sub-decoder 2 of decoding unit 1 and SISO22 in sub-decoder 2 of decoding unit 2, etc., all need to exchange the forward path metric parameters and backward path metric parameters at the boundaries of their respective input code segments.
In Fig. 5, the input buffer stores the input sequence, which comprises the channel information and the parity sequences; according to demand, the RAM chip-select logic feeds different code blocks into different pipelines, and the code block fed into a pipeline is divided into a plurality of code segments that are input to the respective decoding units of that pipeline. Each decoding unit outputs its decoded code segment to the hard decision unit; the hard decision unit receives the decoded code segments output by the M decoding units, performs hard decisions on them and outputs the result to the output buffer. The hard decision here is the same as in the prior art: for each piece of information, if the value is greater than or equal to 0, the corresponding bit is decided as 1; if the value is less than 0, the corresponding bit is decided as 0.
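The hard decision rule described above is a simple sign test on the decoded soft values, as in the following Python sketch (the function name is illustrative):

```python
def hard_decision(soft_values):
    """Map each decoded soft value (LLR) to a bit: >= 0 -> 1, < 0 -> 0."""
    return [1 if value >= 0 else 0 for value in soft_values]

# Example: soft outputs from the decoding units after the last iteration
print(hard_decision([3.2, -0.7, 0.0, -5.1]))   # -> [1, 0, 1, 0]
```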
In the structure of Fig. 5, a key issue is how the extrinsic information produced by a pipeline stage is passed to the next pipeline stage as its a priori information. This is explained below using R = 2, i.e. a decoding device with two pipeline stages, and a total of 12 iterations as an example. While the first pipeline stage is performing the component decoder 2 computation of the 6th iteration of code block n-1, it still reads and writes the RAM according to the interleaved addresses; after that, the second pipeline stage starts the 7th iteration of code block n-1, and the first pipeline stage starts the 1st iteration of the next code block, i.e. code block n. When the second pipeline stage starts the 7th iteration of code block n-1, it first performs the component decoder 1 computation and must read the extrinsic information Le from the RAM of the first pipeline stage at sequentially increasing addresses; at the same time, the first pipeline stage is performing the component decoder 1 computation of the 1st iteration of code block n and must read and write Le at those RAM addresses in sequential order. Since a write is always later than the corresponding read, the sub-decoders of the second pipeline stage have enough time to obtain the Le information from the RAM of the first pipeline stage (in fact Le is read synchronously with the first pipeline stage). Because the preceding and following sub-decoder stages both read Le at sequentially increasing addresses, and the Le written back by the first-stage sub-decoder always lags the read, it is guaranteed that the Le information of the first pipeline stage is passed correctly to the second pipeline stage. The Le shown in front of the data selectors MUX of the second pipeline stage in Fig. 5 denotes the extrinsic information transferred from the first pipeline stage.
Each SISO decoder in Fig. 5 contains the basic structure shown in Fig. 2, i.e. each SISO decoder comprises two component decoders and the corresponding interleaver and deinterleaver. Denoting the two component decoders as component decoder 1 and component decoder 2, they jointly complete one iteration, and the algorithmic functions they perform are the same as those of the two component decoders in Fig. 2. In addition, each SISO decoder contains a corresponding address calculation module. During the operation of component decoder 1, the address calculation module reads and writes the corresponding RAM through the data switching bus at sequentially increasing addresses; during the operation of component decoder 2, the address calculation module reads and writes the corresponding RAM through the data switching bus at the interleaved addresses.
Fig. 6 is a partial internal structure diagram of the SISO decoder in an embodiment of the present invention. As shown in Fig. 6, the structure of one component decoder and of the address calculation module of the SISO decoder is illustrated; because the two component decoders of the SISO decoder have identical internal structures, only one of them is shown.
As shown in Fig. 6, a component decoder of the SISO decoder comprises: a branch metric calculation module, a state update module, a memory module, a control module and an extrinsic information calculation module;
the branch metric calculation module computes the branch metric parameters from the input information and sends them to the state update module and the extrinsic information calculation module; the input information here comprises the channel information (the channel observation sequence and the parity sequence) and the a priori information;
the state update module computes the forward path metric parameters from the received branch metric parameters and sends them to the memory module for storage, and computes the backward path metric parameters from the received branch metric parameters and sends them to the extrinsic information calculation module;
the memory module stores the forward path metric parameters from the state update module;
the extrinsic information calculation module computes and outputs the extrinsic information from the input information, the branch metric parameters from the branch metric calculation module, the backward path metric parameters from the state update module and the forward path metric parameters from the memory module;
the control module performs timing control over the branch metric calculation module, the state update module, the memory module and the extrinsic information calculation module.
As shown in Fig. 6, the address calculation module comprises a data selector MUX and an interleaving address calculation unit. The first input of the MUX is the sequentially increasing original address, and the second input is the interleaved address obtained by passing the original address through the interleaving address calculation unit. When component decoder 1 of the SISO decoder is operating, the address calculation module selects the first input as the output address; when component decoder 2 is operating, the address calculation module selects the second input as the output address.
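A minimal Python sketch of this address selection logic, assuming a QPP rule as the interleaving address calculation unit (the class and method names are illustrative):

```python
class AddressCalculator:
    """Selects sequential addresses for component decoder 1 and interleaved
    addresses for component decoder 2, mirroring the MUX of Fig. 6."""

    def __init__(self, K, f1, f2):
        self.K, self.f1, self.f2 = K, f1, f2

    def interleaved(self, i):
        """Interleaving address calculation unit: Pi(i) = (f1*i + f2*i^2) mod K."""
        return (self.f1 * i + self.f2 * i * i) % self.K

    def address(self, i, component):
        """MUX: component 1 -> sequential address, component 2 -> interleaved address."""
        return i if component == 1 else self.interleaved(i)

addr = AddressCalculator(K=40, f1=3, f2=10)          # example parameters only
print(addr.address(5, component=1), addr.address(5, component=2))   # 5 and Pi(5)
```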
Each component decoder in a SISO unit computes, with the Log-MAP algorithm, the state quantities γ, α and β as well as the information bit log-likelihood ratio LLR and the extrinsic information Le. The computation of γ, α, β, LLR and Le is done in two successive steps: a forward recursion and a backward recursion. The forward recursion takes L/M cycles (L is the code block length and M is the number of decoding units) to compute γ and α and to store the α values; the backward recursion then takes another L/M cycles to compute γ, β, LLR and Le. The forward and backward recursions must run one after the other, because the recursions of α and β both need the γ values, and in one pass the γ values can only be computed in either the forward or the backward direction. To further explain the technical scheme of the present invention, the overall timing diagram of the Turbo decoding device in an embodiment of the present invention is given below.
Fig. 7 is the overall timing diagram of the Turbo decoding device in an embodiment of the present invention. As shown in Fig. 7, taking the R-stage pipelined sub-decoder structure as an example (R is any natural number), i.e. each decoding unit being formed by a first-stage sub-decoder, a second-stage sub-decoder, ..., and an R-th-stage sub-decoder, all the first-stage sub-decoders forming the first pipeline stage perform the first It/R iterations of each code block, all the second-stage sub-decoders forming the second pipeline stage perform the second It/R iterations of each code block, ..., and all the R-th-stage sub-decoders forming the R-th pipeline stage perform the last It/R iterations of each code block, where It is the total number of iterations. Each iteration comprises a component decoder 1 computation and a component decoder 2 computation, and each component decoder computation comprises a forward recursion of γ and α followed by a backward recursion of γ, β, LLR and Le.
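To tie the per-segment timing together, the Python sketch below shows one component-decoder computation over a code segment of length L/M: a forward pass that computes and stores the α values, followed by a backward pass that recomputes γ, updates β and emits the LLR and extrinsic values. The kernels gamma, alpha_update, beta_update and llr_from are placeholders for formulas (1)-(5) and are assumptions of this sketch, not code taken from the patent.

```python
def component_decoder_pass(segment, alpha0, betaN, gamma, alpha_update, beta_update, llr_from):
    """One component-decoder computation on a code segment (length L/M).

    segment : list of per-symbol inputs (channel LLRs plus a priori info)
    alpha0  : forward boundary metrics received from the preceding decoding unit
    betaN   : backward boundary metrics received from the following decoding unit
    Returns (llr list, extrinsic list, ending-boundary alpha, starting-boundary beta).
    """
    n = len(segment)
    # --- forward recursion: L/M cycles, alpha values are stored ---
    alphas = [alpha0]
    for k in range(n):
        g = gamma(segment[k])                       # formula (1)
        alphas.append(alpha_update(alphas[-1], g))  # formula (2)
    # --- backward recursion: another L/M cycles, gamma recomputed on the fly ---
    llr, extrinsic = [None] * n, [None] * n
    beta = betaN
    for k in range(n - 1, -1, -1):
        g = gamma(segment[k])                       # formula (1) again
        llr[k], extrinsic[k] = llr_from(alphas[k], g, beta, segment[k])   # formulas (4), (5)
        beta = beta_update(beta, g)                 # formula (3)
    return llr, extrinsic, alphas[-1], beta
```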
In summary, in the present invention the input sequence is divided into a plurality of code blocks, each code block is divided into a plurality of code segments of equal length that are input to a plurality of decoding units respectively, the decoding units decode their respective input code segments in parallel according to the Log-MAP algorithm, and during decoding the adjacent decoding units exchange the forward path metric parameters and backward path metric parameters at the boundaries of their corresponding code segments. Because a plurality of decoding units decode in parallel, the decoding speed is greatly increased, and the more decoding units there are, the faster the decoding. In addition, the present invention further designs each decoding unit as a multi-stage pipeline, which increases the decoding speed further; the more pipeline stages, the faster the decoding.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A Turbo code decoding method, characterized in that the method comprises:
for each code block in an input sequence, dividing the code block into M code segments, inputting the M code segments into M decoding units respectively, decoding the respective input code segments in parallel in the M decoding units according to the log-domain maximum a posteriori probability (Log-MAP) algorithm, and outputting the decoded code segments;
wherein, denoting the two decoding units corresponding to any two adjacent code segments as a first decoding unit and a second decoding unit, during decoding the first decoding unit transfers the forward path metric parameter of the ending boundary point of its own code segment to the second decoding unit, so that the second decoding unit computes the forward path metric parameter of the starting boundary point of its own code segment; and the second decoding unit transfers the backward path metric parameter of the starting boundary point of its own code segment to the first decoding unit, so that the first decoding unit computes the backward path metric parameter of the ending boundary point of its own code segment.
2. The method according to claim 1, characterized in that each decoding unit comprises R sub-decoders, R being a natural number;
the M decoding units decoding the respective input code segments in parallel according to the Log-MAP algorithm comprises: for each decoding unit, the R sub-decoders of the decoding unit jointly completing, according to the Log-MAP algorithm, the It iterations required to decode the input code segment, each sub-decoder completing It/R of the iterations, where It and It/R are natural numbers;
the first decoding unit transferring the forward path metric parameter of the ending boundary point of its own code segment to the second decoding unit, so that the second decoding unit computes the forward path metric parameter of the starting boundary point of its own code segment, comprises: each sub-decoder of the first decoding unit transferring the forward path metric parameter of the ending boundary point of its own code segment to the corresponding sub-decoder of the second decoding unit, so that the corresponding sub-decoder of the second decoding unit computes the forward path metric parameter of the starting boundary point of its own code segment;
the second decoding unit transferring the backward path metric parameter of the starting boundary point of its own code segment to the first decoding unit, so that the first decoding unit computes the backward path metric parameter of the ending boundary point of its own code segment, comprises: each sub-decoder of the second decoding unit transferring the backward path metric parameter of the starting boundary point of its own code segment to the corresponding sub-decoder of the first decoding unit, so that the corresponding sub-decoder of the first decoding unit computes the backward path metric parameter of the ending boundary point of its own code segment.
3. The method according to claim 1 or 2, characterized in that the M code segments obtained by the division are of equal length;
and/or,
the method further comprises: performing hard decisions on the decoded code segments.
4. A Turbo code decoding device, characterized in that the device comprises M decoding units, M being a natural number greater than 1;
each code block in an input sequence is divided into M code segments which are input to the M decoding units respectively;
each decoding unit is configured to receive an input code segment, decode the input code segment according to the Log-MAP algorithm, and output the decoded code segment;
wherein, denoting the two decoding units corresponding to any two adjacent code segments as a first decoding unit and a second decoding unit, during decoding: the first decoding unit transfers the forward path metric parameter of the ending boundary point of its own code segment to the second decoding unit, so that the second decoding unit computes the forward path metric parameter of the starting boundary point of its own code segment; and the second decoding unit transfers the backward path metric parameter of the starting boundary point of its own code segment to the first decoding unit, so that the first decoding unit computes the backward path metric parameter of the ending boundary point of its own code segment.
5. The device according to claim 4, characterized in that each decoding unit comprises R sub-decoders cascaded in order, namely a first-stage sub-decoder, a second-stage sub-decoder, ..., an R-th-stage sub-decoder, where R is a natural number;
the R sub-decoders jointly complete, according to the Log-MAP algorithm, the It iterations required to decode the input code segment, and each sub-decoder completes It/R of the iterations, It and It/R being natural numbers;
wherein, during decoding: each sub-decoder of the first decoding unit transfers the forward path metric parameter of the ending boundary point of its own code segment to the corresponding sub-decoder of the second decoding unit, so that the corresponding sub-decoder of the second decoding unit computes the forward path metric parameter of the starting boundary point of its own code segment; and each sub-decoder of the second decoding unit transfers the backward path metric parameter of the starting boundary point of its own code segment to the corresponding sub-decoder of the first decoding unit, so that the corresponding sub-decoder of the first decoding unit computes the backward path metric parameter of the ending boundary point of its own code segment.
6. The device according to claim 4 or 5, characterized in that the device further comprises a hard decision unit;
each decoding unit is configured to output its decoded code segment to the hard decision unit;
the hard decision unit is configured to receive the decoded code segments output by the M decoding units, perform hard decisions on the decoded code segments, and output the result.
7. The device according to claim 5, characterized in that each sub-decoder comprises: a data selector, a soft-input soft-output (SISO) decoder and an extrinsic information memory;
the data selector delivers the initial extrinsic information to the SISO decoder for the first iteration, and delivers the extrinsic information obtained from the extrinsic information memory to the SISO decoder for the subsequent iterations; the data selector of a first-stage sub-decoder uses 0 as the initial extrinsic information, and the data selector of every other sub-decoder uses, as its initial extrinsic information, the extrinsic information obtained in the last iteration of the preceding-stage sub-decoder;
the SISO decoder performs It/R iterations using the data output by the data selector and the input code segment, and sends the extrinsic information obtained in each iteration to the extrinsic information memory;
the extrinsic information memory stores the extrinsic information produced by each iteration of the SISO decoder and provides it to the data selector, so that the data selector can send it to the SISO decoder for the next iteration;
wherein, during decoding: the SISO decoder of each sub-decoder of the first decoding unit transfers the forward path metric parameter of the ending boundary point of its own code segment to the SISO decoder of the corresponding sub-decoder of the second decoding unit, so that the SISO decoder of the corresponding sub-decoder of the second decoding unit computes the forward path metric parameter of the starting boundary point of its own code segment; and the SISO decoder of each sub-decoder of the second decoding unit transfers the backward path metric parameter of the starting boundary point of its own code segment to the SISO decoder of the corresponding sub-decoder of the first decoding unit, so that the SISO decoder of the corresponding sub-decoder of the first decoding unit computes the backward path metric parameter of the ending boundary point of its own code segment.
8. The device according to claim 7, characterized in that each sub-decoder further comprises a data switching bus;
in each sub-decoder, the SISO decoder reads and writes the extrinsic information memory through the data switching bus.
9. The device according to claim 7, characterized in that each SISO decoder comprises an address calculation module and two cascaded component decoders, the two cascaded component decoders being a first component decoder and a second component decoder;
the first component decoder and the second component decoder jointly complete one iteration;
during the operation of the first component decoder, the address calculation module reads and writes the corresponding extrinsic information memory at sequentially increasing addresses;
during the operation of the second component decoder, the address calculation module reads and writes the corresponding extrinsic information memory at the interleaved addresses.
10. The device according to claim 9, characterized in that each component decoder comprises: a branch metric calculation module, a state update module, a memory module, a control module and an extrinsic information calculation module;
the branch metric calculation module computes the branch metric parameters from the input information and sends them to the state update module and the extrinsic information calculation module;
the state update module computes the forward path metric parameters from the received branch metric parameters and sends them to the memory module for storage, and computes the backward path metric parameters from the received branch metric parameters and sends them to the extrinsic information calculation module;
the memory module stores the forward path metric parameters from the state update module;
the extrinsic information calculation module computes and outputs the extrinsic information from the input information, the branch metric parameters from the branch metric calculation module, the backward path metric parameters from the state update module and the forward path metric parameters from the memory module;
the control module performs timing control over the branch metric calculation module, the state update module, the memory module and the extrinsic information calculation module.
CN201010003408.6A 2010-01-11 2010-01-11 Method and device for decoding Turbo codes Active CN101777924B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201010003408.6A CN101777924B (en) 2010-01-11 2010-01-11 Method and device for decoding Turbo codes
PCT/CN2010/001528 WO2011082509A1 (en) 2010-01-11 2010-09-29 Method and device for decoding turbo code

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010003408.6A CN101777924B (en) 2010-01-11 2010-01-11 Method and device for decoding Turbo codes

Publications (2)

Publication Number Publication Date
CN101777924A true CN101777924A (en) 2010-07-14
CN101777924B CN101777924B (en) 2014-02-19

Family

ID=42514273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010003408.6A Active CN101777924B (en) 2010-01-11 2010-01-11 Method and device for decoding Turbo codes

Country Status (2)

Country Link
CN (1) CN101777924B (en)
WO (1) WO2011082509A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2559616A (en) 2017-02-13 2018-08-15 Accelercomm Ltd Detection circuit, receiver, communications device and method of detecting
CN110896309B (en) 2018-09-12 2022-11-15 中兴通讯股份有限公司 Decoding method, device, decoder and computer storage medium for Turbo product code

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1133276C (en) * 1999-11-12 2003-12-31 深圳市中兴通讯股份有限公司 Decoding method and decoder for high-speed parallel cascade codes
CN100508405C (en) * 2005-11-11 2009-07-01 清华大学 Parallel decoding method and device for raising Turbo decoding speed
CN101373978B (en) * 2007-08-20 2011-06-15 华为技术有限公司 Method and apparatus for decoding Turbo code
CN101777924B (en) * 2010-01-11 2014-02-19 新邮通信设备有限公司 Method and device for decoding Turbo codes

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020136282A1 (en) * 2001-03-26 2002-09-26 Quang Nguyen Optimum UMTS modem
CN1645751A (en) * 2004-01-21 2005-07-27 日本电气株式会社 Turbo decoder and turbo decoding method

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011082509A1 (en) * 2010-01-11 2011-07-14 新邮通信设备有限公司 Method and device for decoding turbo code
CN102064838B (en) * 2010-12-07 2014-01-15 西安电子科技大学 Novel conflict-free interleaver-based low delay parallel Turbo decoding method
CN102064838A (en) * 2010-12-07 2011-05-18 西安电子科技大学 Novel conflict-free interleaver-based low delay parallel Turbo decoding method
EP2642678B1 (en) * 2012-03-21 2019-10-09 Huawei Technologies Co., Ltd. Data decoding method and apparatus
CN102710366A (en) * 2012-03-21 2012-10-03 华为技术有限公司 Method and device for data decoding
CN102611464A (en) * 2012-03-30 2012-07-25 电子科技大学 Turbo decoder based on external information parallel update
CN102611464B (en) * 2012-03-30 2015-01-28 电子科技大学 Turbo decoder based on external information parallel update
CN103513961B (en) * 2012-06-18 2017-07-11 中兴通讯股份有限公司 On-chip buffering method and device
CN102723958A (en) * 2012-06-28 2012-10-10 电子科技大学 Turbo parallel decoding method based on multi-core digital signal processor (DSP)
CN102723958B (en) * 2012-06-28 2015-02-25 电子科技大学 Turbo parallel decoding method based on multi-core digital signal processor (DSP)
CN103916142A (en) * 2013-01-04 2014-07-09 联想(北京)有限公司 Channel decoder and decoding method
CN104038234A (en) * 2013-03-07 2014-09-10 华为技术有限公司 Decoding method of polar code and decoder
US10270470B2 (en) 2013-03-07 2019-04-23 Huawei Technologies Co., Ltd. Polar code decoding method and decoder
CN104038234B (en) * 2013-03-07 2017-09-29 华为技术有限公司 The interpretation method and decoder of polar code
CN103546167B (en) * 2013-07-25 2016-12-28 上海数字电视国家工程研究中心有限公司 Code translator and the method that parsing data are decoded
CN103546167A (en) * 2013-07-25 2014-01-29 上海数字电视国家工程研究中心有限公司 Decoding device and method for decoding analysis data
US9762352B2 (en) 2013-12-24 2017-09-12 Huawei Technologies Co., Ltd. Decoding method and receiving apparatus in wireless communication system
CN105306076A (en) * 2014-06-30 2016-02-03 深圳市中兴微电子技术有限公司 MAP algorithm based Turbo decoding method and device
US9866240B2 (en) 2014-06-30 2018-01-09 Sanechips Technology Co., Ltd. Map algorithm-based turbo decoding method and apparatus, and computer storage medium
WO2016000321A1 (en) * 2014-06-30 2016-01-07 深圳市中兴微电子技术有限公司 Map algorithm-based turbo coding method and apparatus, and computer storage medium
CN105915235A (en) * 2016-04-08 2016-08-31 东南大学 Intel CPU-based parallel Turbo decoding method
CN105915235B (en) * 2016-04-08 2019-03-05 东南大学 A kind of parallel Turbo decoding method based on Intel CPU
CN105933090B (en) * 2016-04-14 2019-07-16 电子科技大学 A kind of multi-core parallel concurrent SCMA decoding system
CN105933090A (en) * 2016-04-14 2016-09-07 电子科技大学 Multi-core parallel SCMA decoding system
CN105790775A (en) * 2016-05-19 2016-07-20 电子科技大学 Probability calculation unit based on probability Turbo encoder
CN105790775B (en) * 2016-05-19 2019-01-29 电子科技大学 A kind of probability calculation unit based on probability Turbo decoder
CN107565983A (en) * 2017-09-08 2018-01-09 广东工业大学 The interpretation method and device of a kind of Turbo code
CN107565983B (en) * 2017-09-08 2020-08-11 广东工业大学 Turbo code decoding method and device
WO2019218130A1 (en) * 2018-05-15 2019-11-21 深圳市大疆创新科技有限公司 Turbo encoding method, turbo encoder and unmanned aerial vehicle
CN110710112A (en) * 2018-05-15 2020-01-17 深圳市大疆创新科技有限公司 Turbo coding method, Turbo encoder and unmanned aerial vehicle
CN108880569A (en) * 2018-07-24 2018-11-23 暨南大学 A kind of rate-compatible coding method based on feedback packet Markov supercomposed coding
CN108880569B (en) * 2018-07-24 2021-11-09 暨南大学 Rate compatible coding method based on feedback grouping Markov superposition coding
CN111641417A (en) * 2020-06-09 2020-09-08 电子科技大学 FPGA-based device for finishing matrix array permutation interleaving
CN111641417B (en) * 2020-06-09 2023-03-31 电子科技大学 FPGA-based device for finishing matrix array permutation interleaving
CN113258940A (en) * 2021-06-15 2021-08-13 成都星联芯通科技有限公司 turbo decoding method, turbo decoding device, turbo decoding apparatus, and storage medium

Also Published As

Publication number Publication date
CN101777924B (en) 2014-02-19
WO2011082509A1 (en) 2011-07-14

Similar Documents

Publication Publication Date Title
CN101777924B (en) Method and device for decoding Turbo codes
CN101867379B (en) Cyclic redundancy check-assisted convolutional code decoding method
CN100536359C (en) Interleaving / deinterleaving device and method for communication system
CN100425000C (en) Double-turbine structure low-density odd-even check code decoder
CN101079641B (en) 2-dimensional interleaving apparatus and method
CN105634508B (en) A kind of implementation method of the Turbo decoder of the nearly performance limit of low complex degree
GB2365727A (en) Turbo-code decoding unit and turbo-code encoding/decoding unit
CN104092470B (en) A kind of Turbo code code translator and method
CA3069482A1 (en) Blockwise parallel frozen bit generation for polar codes
CN100546207C (en) A kind of dual-binary Turbo code encoding method based on the DVB-RCS standard
CN101267212A (en) Group bit interleaver and its method
CN101373978B (en) Method and apparatus for decoding Turbo code
CN100508405C (en) Parallel decoding method and device for raising Turbo decoding speed
CN102130696A (en) Interleaving/de-interleaving method, soft-in/soft-out decoding method and error correction code encoder and decoder utilizing the same
CN102355331B (en) Universal multi-mode decoding device
CN101090274A (en) Viterbi decoder and its backtrack decoding method and device
CN100546206C (en) A kind of circuit and method of decoding of realizing
CN102130747A (en) Dynamic allocation method for decoding iteration of transmission block of topological code of long term evolution (LTE) system
CN107196666A (en) A kind of general Turbo coders fast verification method
CN111900999B (en) High-performance polarization coding method and coder for satellite discontinuous communication
CN1142629C (en) Decoding method and decoder for Tebo code
CN101667839B (en) Interleaving method
CN102751994B (en) Short code length block code decoder device based on two finite group symbols
CN106209117A (en) The multiparameter of a kind of low consumption of resources can configure Viterbi decoder
CN101373977A (en) Apparatus and method for removing interleave of parallel maximum posteriori probability decoding interleave

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20170901

Address after: 100070, No. 188, building 25, No. eighteen, South Fourth Ring Road, Fengtai District, Beijing, 1, 101

Patentee after: Beijing Haiyun Technology Co. Ltd.

Address before: 510663, No. 3, color road, Science City, Guangzhou Development Zone, Guangdong

Patentee before: New Postcom Equipment Co., Ltd.