CN101777924B - Method and device for decoding Turbo codes - Google Patents

Method and device for decoding Turbo codes

Info

Publication number
CN101777924B
CN101777924B CN201010003408.6A CN201010003408A
Authority
CN
China
Prior art keywords
decoding unit
decoder
code section
decoding
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201010003408.6A
Other languages
Chinese (zh)
Other versions
CN101777924A (en)
Inventor
赵训威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Haiyun Technology Co. Ltd.
Original Assignee
New Postcom Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New Postcom Equipment Co Ltd
Priority to CN201010003408.6A
Publication of CN101777924A
Priority to PCT/CN2010/001528
Application granted
Publication of CN101777924B
Active legal status
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65 Purpose and implementation aspects
    • H03M13/6522 Intended application, e.g. transmission or communication standard
    • H03M13/6525 3GPP LTE including E-UTRA
    • H03M13/27 Coding, decoding or code conversion using interleaving techniques
    • H03M13/2739 Permutation polynomial interleaver, e.g. quadratic permutation polynomial [QPP] interleaver and quadratic congruence interleaver
    • H03M13/2771 Internal interleaver for turbo codes
    • H03M13/2775 Contention or collision free turbo code internal interleaver
    • H03M13/29 Coding, decoding or code conversion combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2957 Turbo codes and decoding
    • H03M13/37 Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39 Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3972 Sequence estimation using sliding window techniques or parallel windows

Abstract

The invention discloses a method and a device for decoding Turbo codes. The method comprises the following steps: for each code block in an input sequence, dividing the code block into M code segments, where M is a natural number greater than 1; inputting the M code segments into M decoding units respectively; decoding the input code segments in parallel in the M decoding units according to the Log-MAP algorithm; and outputting the decoded code segments. During decoding, the decoding units corresponding to adjacent code segments exchange the forward and backward path metric parameters at the boundary of their respective code segments. The technical scheme of the invention improves the Turbo decoding speed.

Description

Method and device for decoding Turbo codes
Technical field
The present invention relates to communication system technology, and in particular to a method and device for decoding Turbo codes.
Background art
Owing to the outstanding near-Shannon-limit error-correcting capability of Turbo codes, the Long Term Evolution (LTE) system selects Turbo codes as the channel coding scheme for high-speed data services.
Fig. 1 is a schematic diagram of the Turbo encoder in an existing LTE system. As shown in Fig. 1, LTE adopts a conventional Turbo encoder formed by two parallel component encoders and one interleaver. The two component encoders, component encoder 1 and component encoder 2, have the same structure as in the WCDMA system: each comprises three registers and has 8 states. The interleaver is a quadratic permutation polynomial (QPP) interleaver. Suppose the bit stream input to the interleaver has length K and is c_0, c_1, ..., c_{K-1}, and the interleaved output bit stream is c'_0, c'_1, ..., c'_{K-1}. They satisfy the relation c'_i = c_{Π(i)}, where the mapping of element indices before and after interleaving follows the quadratic polynomial Π(i) = (f_1·i + f_2·i²) mod K, i = 0, 1, ..., K-1. The existing standard tabulates the values of the polynomial parameters f_1 and f_2 for each supported interleaver length. The code rate of the encoder shown in Fig. 1 is 1/3; it outputs three components (x_k, z_k, z'_k), where x_k is the data input to the channel and z_k and z'_k are parity sequences. Because the Turbo code is affected by a total of 12 tail bits, the length of each component code is D = K + 4.
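The QPP permutation is simple to implement. The Python sketch below (the function name is illustrative) applies c'_i = c_{Π(i)}; for a real block length K the parameters f_1 and f_2 must be taken from the standard's table, and the demo values used afterwards are the ones the table is commonly cited to give for K = 40.

```python
def qpp_interleave(bits, f1, f2):
    """Apply the QPP permutation c'_i = c_{Pi(i)}, Pi(i) = (f1*i + f2*i^2) mod K,
    where K is the block length."""
    K = len(bits)
    return [bits[(f1 * i + f2 * i * i) % K] for i in range(K)]
```

A quick sanity check is that Π is a bijection: interleaving the index sequence 0..K-1 must yield every index exactly once.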
Turbo decoding adopts the soft-input soft-output (SISO) maximum a posteriori probability (MAP) algorithm. Given the channel observation sequence, this algorithm computes the posterior probability of each state transition, message bit and coded symbol of the Markov process; once all these posterior probabilities are available, the value with the maximum a posteriori probability is taken by hard decision as the estimate. The MAP algorithm is the optimal algorithm for realizing Turbo iterative decoding.
The log-domain maximum a posteriori probability (Log-MAP) algorithm is the log-domain realization of the MAP algorithm. Its calculation procedure is as follows:
(a) Starting from k = 0, compute the branch metric D_k^{i,m} according to formula (1):

D_k^{i,m} = ln γ_k^{i,m} = ln p(d_k = i) + (2/σ²)·x_k·i + (2/σ²)·y_k·p_k^{i,m}    formula (1)

where γ is called the branch metric parameter, k is the time index, m is the state index, σ is a constant, x_k is the channel observation sequence, y_k is the parity sequence, and p(d_k = i) is the a priori information; initially p can be taken as 0, and afterwards it takes the extrinsic information from the previous iteration.
(b) At k = 0, initialize the forward path metric A, then compute and store the forward path metric A_k^m from k = 0 to k = N-1 according to formula (2):

A_k^m = ln α_k^m = ln( Σ_{j=0}^{1} α_{k-1}^{b(j,m)} · γ_k^{j,b(j,m)} ) = max*_j ( A_{k-1}^{b(j,m)} + D_k^{j,b(j,m)} )    formula (2)

Here α is called the forward path metric parameter, and b(j, m) denotes the predecessor state of state m for input bit j.
(c) At k = N-1, initialize the backward path metric B, then compute and store the backward path metric B_k^m from k = N-2 down to k = 0 according to formula (3):

B_k^m = ln β_k^m = ln( Σ_{j=0}^{1} β_{k+1}^{f(j,m)} · γ_k^{j,m} ) = max*_j ( B_{k+1}^{f(j,m)} + D_k^{j,m} )    formula (3)

Here β is called the backward path metric parameter, and f(j, m) denotes the successor state of state m for input bit j.
(d) Compute the information-bit log-likelihood ratio (LLR) from k = 0 to k = N-1 according to formula (4):

L(d_k | Y_1^N) = ln( [Σ_m α_{k-1}^m · γ_k^{1,m} · β_k^{f(1,m)}] / [Σ_m α_{k-1}^m · γ_k^{0,m} · β_k^{f(0,m)}] )
             = max*_m ( A_{k-1}^m + D_k^{1,m} + B_k^{f(1,m)} ) - max*_m ( A_{k-1}^m + D_k^{0,m} + B_k^{f(0,m)} )    formula (4)

From the LLR, compute the extrinsic information L_e:

L_e(d_k) = L(d_k | Y_1^N) - [ L_a(d_k) + L_c·x_k ]    formula (5)
(e) Use the extrinsic information as the a priori information of the next iteration, and loop over the above procedure until the maximum number of iterations It is reached; then make the corresponding decision output according to the LLR of the last iteration.

Here max*(x, y) = ln(e^x + e^y) = max(x, y) + ln(1 + e^{-|x-y|}), which comprises a maximization operation and the correction function f(x) = ln(1 + e^{-x}); the function f(x) can be implemented with a lookup table.
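The max* operation and one step of the forward recursion of formula (2) can be sketched in Python as follows. The state count and the transition function b(j, m) are placeholders: the actual LTE trellis has 8 states, and b must encode its real predecessor relation.

```python
import math

def max_star(x, y):
    """Jacobian logarithm: max*(x, y) = ln(e^x + e^y) = max(x, y) + ln(1 + e^{-|x-y|})."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

def forward_step(A_prev, D, b, n_states=8):
    """One step of formula (2): A_k[m] = max*_j(A_{k-1}[b(j,m)] + D_k[j][b(j,m)]),
    with j ranging over the two possible input bits."""
    return [max_star(A_prev[b(0, m)] + D[0][b(0, m)],
                     A_prev[b(1, m)] + D[1][b(1, m)])
            for m in range(n_states)]
```

In a hardware implementation the correction term ln(1 + e^{-|x-y|}) is the part replaced by the lookup table mentioned above.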
The basic structure of a Turbo decoder implementing the above Log-MAP algorithm is shown in Fig. 2.
Fig. 2 is the basic block diagram of an existing Turbo decoder. As shown in Fig. 2, the soft-input soft-output (SISO) decoder realizing the Turbo decoder is formed by cascading two component decoders, component decoder 1 and component decoder 2; the interleaver is identical to the one used in the Turbo encoder of Fig. 1, i.e. a QPP interleaver. The inputs of component decoder 1 are: the log-likelihood ratio of the channel observation sequence, the log-likelihood ratio of the parity sequence output by component encoder 1 in Fig. 1, and the a priori information L_{a1}(d_k) extracted from the output of component decoder 2. The output of component decoder 1 is the log-likelihood ratio L_{e1}(x). Subtracting from L_{e1}(x) the log-likelihood ratio of the channel observation sequence and the a priori information L_{a1}(d_k) fed into component decoder 1 yields the extrinsic information L_{1e}(d_k) output by component decoder 1. Interleaving L_{1e}(d_k) gives L_{a2}(d_k). The inputs of component decoder 2 are: the interleaved log-likelihood ratio of the channel observation sequence, the log-likelihood ratio of the parity sequence output by component encoder 2 in Fig. 1, and the a priori information L_{a2}(d_k) extracted from the output of component decoder 1. The output of component decoder 2 is the log-likelihood ratio L_{e2}(x). Subtracting from L_{e2}(x) the interleaved channel-observation log-likelihood ratio and the a priori information L_{a2}(d_k) fed into component decoder 2 yields the extrinsic information L_{2e}(d_k) output by component decoder 2. L_{2e}(d_k) is deinterleaved to obtain the a priori information input to component decoder 1 in the next iteration. In this way, after several iterations the extrinsic information produced by component decoder 1 and component decoder 2 stabilizes, and the posterior probability ratio gradually approaches the maximum-likelihood decoding of the whole code.
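The extrinsic-information exchange of Fig. 2 can be summarized in a short Python skeleton. Here siso1, siso2, interleave and deinterleave are placeholder callables standing in for the two component decoders (formulas (1) to (5)) and the QPP (de)interleaver; the skeleton only shows the data flow of one decoding pass.

```python
def turbo_iterations(siso1, siso2, interleave, deinterleave, block_len, n_iter):
    """Skeleton of the Fig. 2 loop: each component decoder's extrinsic output,
    (de)interleaved, becomes the other decoder's a priori input."""
    la1 = [0.0] * block_len          # a priori info starts at 0, as in step (a)
    for _ in range(n_iter):
        le1 = siso1(la1)             # extrinsic info from component decoder 1
        la2 = interleave(le1)        # becomes a priori info for decoder 2
        le2 = siso2(la2)             # extrinsic info from component decoder 2
        la1 = deinterleave(le2)      # a priori info for the next iteration
    return la1
```

After n_iter passes, the final LLRs of the last iteration would be hard-decided to produce the decoded bits.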
Existing communication systems such as the Universal Mobile Telecommunications System (UMTS) also select Turbo codes as the channel coding scheme for high-speed data services; the data rate they require is about 2 Mbit/s, i.e. the decoding speed of existing Turbo decoders is around 2 Mbit/s.
However, the design targets of the LTE system require a peak rate of 50 Mbit/s uplink and 100 Mbit/s downlink. This means that the decoding output speed of the Turbo decoder in an LTE system must exceed 100 Mbit/s.
Therefore, the decoding speed of existing Turbo decoders needs to be improved.
Summary of the invention
The invention provides a method for decoding Turbo codes that can improve the Turbo decoding speed.
The invention also provides a device for decoding Turbo codes with improved decoding speed.
To achieve the above objectives, the technical scheme of the present invention is realized as follows:
The invention discloses a method for decoding Turbo codes, comprising:
For each code block in the input sequence, dividing the code block into M code segments, where M is a natural number greater than 1; inputting the M code segments into M decoding units respectively; decoding the input code segments in parallel in the M decoding units according to the log-domain maximum a posteriori probability (Log-MAP) algorithm; and outputting the decoded code segments.
Of any two adjacent code segments, the decoding unit corresponding to the earlier code segment is called the first decoding unit, and the decoding unit corresponding to the later code segment is called the second decoding unit. During decoding, the first decoding unit transmits the forward path metric parameter at the end boundary point of its own code segment to the second decoding unit, so that the second decoding unit can calculate the forward path metric parameter at the start boundary point of its own code segment; and the second decoding unit transmits the backward path metric parameter at the start boundary point of its own code segment to the first decoding unit, so that the first decoding unit can calculate the backward path metric parameter at the end boundary point of its own code segment.
Each decoding unit comprises R sequentially cascaded sub-decoders, R being a natural number.
Decoding the input code segments in parallel in the M decoding units according to the Log-MAP algorithm comprises: for each decoding unit, the R sub-decoders jointly complete the It iterations required to decode the input code segment according to the Log-MAP algorithm, each sub-decoder completing It/R of those iterations, where It and It/R are natural numbers.
The invention also discloses a device for decoding Turbo codes, comprising M decoding units, M being a natural number greater than 1.
Each code block in the input sequence is divided into M code segments that are input to the M decoding units respectively.
Each decoding unit receives its input code segment, decodes it according to the log-domain maximum a posteriori probability (Log-MAP) algorithm, and outputs the decoded code segment.
Of any two adjacent code segments, the decoding unit corresponding to the earlier code segment is called the first decoding unit, and the decoding unit corresponding to the later code segment is called the second decoding unit. During decoding, the first decoding unit transmits the forward path metric parameter at the end boundary point of its own code segment to the second decoding unit, so that the second decoding unit can calculate the forward path metric parameter at the start boundary point of its own code segment; and the second decoding unit transmits the backward path metric parameter at the start boundary point of its own code segment to the first decoding unit, so that the first decoding unit can calculate the backward path metric parameter at the end boundary point of its own code segment.
Each decoding unit comprises R sequentially cascaded sub-decoders: a first-stage sub-decoder, a second-stage sub-decoder, ..., an R-th-stage sub-decoder, R being a natural number.
The R sub-decoders jointly complete the It iterations required to decode the input code segment according to the Log-MAP algorithm, each sub-decoder completing It/R of those iterations; It and It/R are natural numbers.
As seen from the above technical scheme, the present invention divides each code block of the input sequence into multiple code segments that are input to multiple decoding units, which decode their respective input code segments in parallel according to the Log-MAP algorithm; during decoding, the decoding units corresponding to adjacent code segments exchange the forward and backward path metric parameters at the boundaries of their respective code segments. Because multiple decoding units decode in parallel, the decoding speed is greatly improved, and the more decoding units there are, the faster the decoding.
Brief description of the drawings
Fig. 1 is a schematic diagram of the Turbo encoder in an existing LTE system;
Fig. 2 is the basic block diagram of an existing Turbo decoder;
Fig. 3 is a structural block diagram of the Turbo code decoding device in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the transfer of boundary condition values between adjacent decoding units in the Turbo decoder in an embodiment of the present invention;
Fig. 5 is a structural diagram of a decoding device comprising multi-stage pipelined sub-decoders in an embodiment of the present invention;
Fig. 6 is a partial internal structural diagram of the SISO decoder in an embodiment of the present invention;
Fig. 7 is an overall timing diagram of the Turbo decoding device in an embodiment of the present invention.
Detailed description of the embodiments
Because an existing Turbo code decoding device includes only one decoding unit with the basic structure shown in Fig. 2, that single decoding unit decodes every code block of the input sequence (i.e. the sequence to be decoded). The decoding speed is limited by the efficiency of this single unit and is therefore low, and cannot meet the high-speed data service demands of systems such as LTE.
In view of this, the core idea of the present invention is: multiple decoding units decode the different code segments of each code block of the input sequence in parallel using the Log-MAP algorithm, and, as the Log-MAP algorithm requires, adjacent decoding units transfer to each other during decoding the forward and backward path metric parameters at the boundaries of their respective code segments.
This scheme of parallel decoding by multiple decoding units raises the Turbo decoding speed in proportion to the number of units: the more decoding units decode in parallel, the faster the decoding. The number of decoding units can therefore be set in practice according to the actually required decoding speed.
Fig. 3 is a structural block diagram of the Turbo code decoding device in an embodiment of the present invention. As shown in Fig. 3, the Turbo code decoding device in this embodiment comprises: an input buffer, an output buffer, an interleaving/deinterleaving memory, and M parallel decoding units, M being any natural number greater than 1, such as 4, 8 or 16.
In Fig. 3, the input buffer and output buffer perform serial-to-parallel and parallel-to-serial conversion, and the input buffer also implements ping-pong operation to improve throughput. Ping-pong operation here means that the input buffer selectively exports the input data to different decoding units; for example, the first code segment of a code block is sent to decoding unit 1, the second code segment of that block to decoding unit 2, and so on. The interleaving/deinterleaving memory stores the data of the interleaving or deinterleaving operations performed by each decoding unit during Turbo decoding. The M decoding units decode the input code segments (each comprising a channel observation sequence and parity sequences) according to the Log-MAP algorithm and output the decoded code segments to the output buffer. Each decoding unit here performs the decoding function of the basic structure shown in Fig. 2, and comprises two cascaded component decoders with the corresponding interleaver and deinterleaver.
As formulas (2) and (3) of the Log-MAP algorithm introduced in the background show, the calculation of the forward path metric parameter α at a point depends on the forward path metric parameter of the preceding point, and the calculation of the backward path metric parameter β at a point depends on the backward path metric parameter of the following point. Therefore, as the Log-MAP calculation requires, in the decoding device of Fig. 3 the decoding units corresponding to adjacent code segments transfer to each other during decoding the forward and backward path metric parameters at the boundaries of their respective code segments. Specifically, of any two adjacent code segments the corresponding decoding units are called the first decoding unit and the second decoding unit; during decoding, the first decoding unit transmits the forward path metric parameter at the end boundary point of its own code segment to the second decoding unit, so that the second decoding unit can calculate the forward path metric parameter at the start boundary point of its own code segment; and the second decoding unit transmits the backward path metric parameter at the start boundary point of its own code segment to the first decoding unit, so that the first decoding unit can calculate the backward path metric parameter at the end boundary point of its own code segment.
For example, when an input sequence of 80000 points is to be Turbo decoded, the 80000 points are first divided into several code blocks (the way the input sequence is divided into code blocks is the same as in the prior art and is not limited by the present invention, e.g. it may follow the size of the input buffer); suppose here it is divided into 10 code blocks of 8000 points each. Each code block is then divided into M code segments; let M = 8, preferably 8 segments of equal length, i.e. 1000 points per segment. These 8 segments are input to 8 decoding units for parallel decoding: decoding unit 1 decodes points 1 to 1000 of the block, decoding unit 2 decodes points 1001 to 2000, ..., and decoding unit 8 decodes points 7001 to 8000. As noted above, by formulas (2) and (3) the forward path metric parameter α at a point depends on the preceding point, and the backward path metric parameter β depends on the following point. Therefore decoding unit 1 must pass the forward path metric parameter of point 1000 to decoding unit 2, so that decoding unit 2 can calculate the forward path metric parameter of point 1001; correspondingly, decoding unit 2 must pass the backward path metric parameter of point 1001 to decoding unit 1, so that decoding unit 1 can calculate the backward path metric parameter of point 1000. Likewise, all other adjacent decoding units, e.g. decoding units 2 and 3, decoding units 3 and 4, and so on, transfer during decoding the forward and backward path metric parameters at the boundaries of their respective code segments.
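Under the assumptions of this example (8000-point blocks, M = 8 equal segments), the segment boundaries and the neighbor-to-neighbor metric hand-offs can be tabulated directly; the function names below are illustrative.

```python
def segment_ranges(block_len, M):
    """Split a block of block_len points into M equal segments; returns
    (start, end) point indices, 1-based as in the example above."""
    assert block_len % M == 0, "this example assumes equal-length segments"
    n = block_len // M
    return [(i * n + 1, (i + 1) * n) for i in range(M)]

def boundary_handoffs(ranges):
    """For each adjacent pair of units: unit i passes alpha at its end point
    forward, and receives beta at unit i+1's start point in return."""
    return [(end, start) for (_, end), (start, _) in zip(ranges, ranges[1:])]
```

For block_len = 8000 and M = 8 this reproduces the hand-off between points 1000 and 1001 described above, and the seven analogous hand-offs for the remaining adjacent unit pairs.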
Fig. 4 is a schematic diagram of the transfer of boundary condition values between adjacent decoding units in the Turbo decoder in an embodiment of the present invention. A boundary condition value here is the forward path metric parameter α or the backward path metric parameter β at a boundary point of the code segment corresponding to a decoding unit. Referring to Fig. 4, the corresponding component decoders in adjacent decoding units transfer to each other the α or β values of the boundary points of their respective code segments; for example, for adjacent decoding units 1 and 2, the α or β values of the boundary points are transferred between their respective component decoders 1, and likewise between their respective component decoders 2.
In the present invention, the number M of decoding units in the Turbo code decoding device can be determined according to actual conditions; for example, in an LTE system the number of decoding units is set according to the actually required LTE decoding rate. Dividing a code block into multiple code segments and decoding them in parallel in multiple decoding units greatly improves the throughput of the Turbo code decoding device and reduces decoding latency. According to the design criteria of the Turbo code interleaver in the LTE system, it can be guaranteed that no memory access conflicts occur between the decoding units during interleaving and deinterleaving, thereby guaranteeing the reliability of the above parallel structural design.
To further improve the decoding speed of the Turbo code decoding device, in one embodiment of the present invention each decoding unit is designed as a structure of multi-stage pipelined sub-decoders: each decoding unit comprises several sequentially cascaded sub-decoders that jointly complete the It iterations (It being the total number of iterations required to decode an input code segment according to the Log-MAP algorithm), each sub-decoder completing It/R of those iterations, where It and It/R are natural numbers. For example, with a total of 12 iterations, if each decoding unit comprises two stages of sub-decoders, the first-stage sub-decoder performs the first 6 iterations on a code segment and the second-stage sub-decoder the last 6. Then, once the first-stage sub-decoder has completed the first 6 iterations for the code segment of one code block, the second-stage sub-decoder continues with the 7th iteration for that segment, while the first-stage sub-decoder can already start the 1st iteration for the corresponding segment of the next code block, so the decoding rate is effectively doubled. Similarly, with a total of 12 iterations, if each decoding unit comprises three stages of sub-decoders, the first stage performs the first 4 iterations of a segment, the second stage the middle 4, and the third stage the last 4; in this case the decoding rate is effectively tripled. By analogy, the more cascaded sub-decoders a decoding unit has, the more pipeline stages there are and the faster the decoding.
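The division of the It iterations across R cascaded sub-decoder stages can be written down directly; stage numbering below is 1-based as in the examples.

```python
def stage_iterations(It, R):
    """Iteration range (first, last) handled by each of R cascaded sub-decoders;
    each stage performs It/R consecutive iterations of the Log-MAP decoding."""
    assert It % R == 0, "It/R must be a natural number"
    per = It // R
    return [(s * per + 1, (s + 1) * per) for s in range(R)]
```

With It = 12 this gives [(1, 6), (7, 12)] for R = 2 and [(1, 4), (5, 8), (9, 12)] for R = 3, matching the two worked examples; with all R stages kept busy on the segments of successive code blocks, throughput scales with R.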
Fig. 5 is a structural diagram of a decoding device comprising multi-stage pipelined sub-decoders in an embodiment of the present invention. As shown in Fig. 5, this embodiment is described taking M decoding units and R pipeline stages of sub-decoders as an example, R being any natural number greater than 1. All first-stage sub-decoders of the M decoding units form the first pipeline stage, all second-stage sub-decoders form the second pipeline stage, ..., and all R-th-stage sub-decoders form the R-th pipeline stage.
Fig. 5 shows the internal structure of the first and second pipeline stages; the structure of each subsequent stage (from the third to the R-th, when R ≥ 3) is identical to that of the second stage and is therefore omitted.
In Fig. 5, MUX denotes a data selector, SISO denotes a soft-input soft-output decoder, RAM denotes an extrinsic information memory, and Le denotes extrinsic information. MUX11, SISO11 and RAM11, MUX21, SISO21 and RAM21, and the corresponding MUX, SISO and RAM devices in the subsequent pipeline stages form decoding unit 1; MUX11, SISO11 and RAM11 form the first-stage sub-decoder of decoding unit 1, and MUX21, SISO21 and RAM21 form its second-stage sub-decoder, and so on. Likewise, MUX12, SISO12 and RAM12, MUX22, SISO22 and RAM22, and the corresponding devices in the subsequent pipeline stages form decoding unit 2; MUX12, SISO12 and RAM12 form the first-stage sub-decoder of decoding unit 2, and MUX22, SISO22 and RAM22 form its second-stage sub-decoder, and so on. Subsequent decoding units are formed by analogy. Within each sub-decoder, the SISO decoder reads and writes the corresponding RAM through a data switching bus.
In Fig. 5, each data selector MUX exports the initial extrinsic information to the corresponding SISO decoder in the first iteration, and in subsequent iterations exports the extrinsic information obtained from the corresponding RAM. The MUX of a first-stage sub-decoder uses 0 as the initial extrinsic information; the MUX of every other sub-decoder uses as initial extrinsic information the extrinsic information obtained in the last iteration of the preceding sub-decoder stage. Each SISO decoder performs It/R iterations using the data output by the MUX and the input code segment, and sends the extrinsic information obtained in each iteration to the corresponding RAM. Each RAM stores the extrinsic information produced in each iteration of the SISO decoder and provides it to the corresponding MUX, so that the MUX can send it to the SISO decoder for the next iteration.
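The MUX/SISO/RAM loop of one pipeline stage reduces to a small skeleton; here siso_iter is a placeholder callable standing in for one full SISO iteration of formulas (1) to (5).

```python
def sub_decoder_stage(initial_le, siso_iter, iters_per_stage):
    """One Fig. 5 pipeline stage: the MUX feeds the SISO decoder the initial
    extrinsic info on the first iteration, then the RAM contents; the SISO
    result is written back to RAM after every iteration (It/R in total)."""
    ram = initial_le            # first-stage MUX would pass a 0-vector here
    for _ in range(iters_per_stage):
        ram = siso_iter(ram)    # SISO reads the MUX output, writes the RAM
    return ram                  # handed to the next stage's MUX as its initial value
```

Chaining R such calls, each stage's return value becoming the next stage's initial_le, reproduces the It = R × (It/R) iterations performed by one decoding unit.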
In Fig. 5, the two decoding units corresponding to any two adjacent code sections are referred to as the first decoding unit and the second decoding unit. During decoding, each sub-decoder in the first decoding unit transmits the forward path metric of the ending boundary point of its own code section to the corresponding sub-decoder in the second decoding unit, so that the latter can compute the forward path metric of the initial boundary point of its own code section; and each sub-decoder in the second decoding unit transmits the backward path metric of the initial boundary point of its own code section to the corresponding sub-decoder in the first decoding unit, so that the latter can compute the backward path metric of the ending boundary point of its own code section. For example, the forward and backward path metrics at the boundaries of the respective input code sections must be exchanged between SISO11 in sub-decoder 1 of decoding unit 1 and SISO12 in sub-decoder 1 of decoding unit 2, between SISO12 in sub-decoder 1 of decoding unit 2 and SISO13 in sub-decoder 1 of decoding unit 3, between SISO21 in sub-decoder 2 of decoding unit 1 and SISO22 in sub-decoder 2 of decoding unit 2, and so on.
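A minimal sketch of this boundary-metric exchange between adjacent decoding units, assuming each unit exposes the metrics at its two boundary points (the dictionary keys are invented for illustration):

```python
def exchange_boundary_metrics(units):
    """units: list ordered by code-section position; each element is a dict
    holding 'alpha_end' (forward metrics at the section's ending boundary)
    and 'beta_start' (backward metrics at the section's initial boundary)."""
    for i in range(len(units) - 1):
        first, second = units[i], units[i + 1]
        # forward metric of the earlier section's ending boundary initializes
        # the later section's forward recursion
        second['alpha_init'] = first['alpha_end']
        # backward metric of the later section's initial boundary initializes
        # the earlier section's backward recursion
        first['beta_init'] = second['beta_start']
```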
In Fig. 5, the input buffer stores the input sequence, which comprises the channel information and the check sequence. RAM chip-select logic feeds different code blocks into different pipelines as required, and each code block entering a pipeline is divided into multiple code sections that are input to the multiple decoding units of that pipeline. Each decoding unit outputs its decoded code section to the hard-decision unit. The hard-decision unit receives the decoded code sections output by the M decoding units, performs a hard decision on them, and writes the result to the output buffer. The hard-decision rule here is the same as in the prior art: for each value, if it is greater than or equal to 0, the corresponding bit is decided as 1; if it is less than 0, the corresponding bit is decided as 0.
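The hard-decision rule just stated is a one-line threshold on the decoder's soft outputs; a sketch:

```python
def hard_decision(llrs):
    # Rule stated above: value >= 0 decides bit 1, value < 0 decides bit 0.
    return [1 if v >= 0 else 0 for v in llrs]
```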
In the structure shown in Fig. 5, the key issue is how the extrinsic information produced by one pipeline stage is passed to the next stage as that stage's a-priori information. This is explained below using a decoder with R=2, i.e. a two-stage pipeline, and a total of 12 iterations. While the first-stage pipeline performs the component-decoder-2 computation of the 6th iteration of code block n-1, it still reads and writes its RAM at the interleaved addresses. Thereafter, the second-stage pipeline starts the 7th iteration of code block n-1, while the first-stage pipeline starts the next code block, i.e. the 1st iteration of code block n. When the second-stage pipeline begins the 7th iteration of code block n-1, it first performs the component-decoder-1 computation, which reads the extrinsic information Le from the first-stage pipeline's RAM at sequentially incrementing addresses. At the same moment, the first-stage pipeline is performing the component-decoder-1 computation of the 1st iteration of code block n, which reads and writes Le at those same sequentially incrementing RAM addresses. Because a write always lags behind the corresponding read, the sub-decoders of the second-stage pipeline have enough time to fetch the Le values from the first-stage pipeline's RAM (in effect reading Le in synchronism with the first-stage pipeline). Since both stages read Le at sequentially incrementing addresses, and the Le values written back by the first-stage sub-decoder always lag the reads, the Le values in the first-stage pipeline are guaranteed to be passed correctly to the second-stage pipeline. The Le shown in Fig. 5 ahead of the data selector MUX of the second-stage pipeline denotes the extrinsic information transferred from the first-stage pipeline.
Each SISO decoder in Fig. 5 has the basic structure shown in Fig. 2: it comprises two component decoders together with the corresponding interleaver and deinterleaver. The two component decoders are denoted component decoder 1 and component decoder 2; together they complete one iteration, and each performs the same algorithmic function as the corresponding component decoder in Fig. 2. In addition, each SISO decoder contains an address calculation module. While component decoder 1 is running, the address calculation module reads and writes the corresponding RAM through the data-access bus switch at sequentially incrementing addresses; while component decoder 2 is running, it reads and writes the corresponding RAM through the data-access bus switch at the interleaved addresses.
Fig. 6 is a schematic diagram of part of the internal structure of a SISO decoder in an embodiment of the present invention. It shows one component decoder of the SISO decoder together with the address calculation module. Because the two component decoders inside a SISO decoder have identical internal structures, only one of them is illustrated.
As shown in Fig. 6, each component decoder of the SISO decoder comprises: a branch metric calculation module, a state update module, a storage module, a control module and an extrinsic information calculation module;
The branch metric calculation module computes the branch metrics from the input information and sends them to the state update module and the extrinsic information calculation module; the input information here comprises the channel information and the a-priori information, the channel information comprising the channel observation sequence and the check sequence;
The state update module computes the forward path metrics from the received branch metrics and sends them to the storage module for saving, and computes the backward path metrics from the received branch metrics and sends them to the extrinsic information calculation module;
The storage module stores the forward path metrics received from the state update module;
The extrinsic information calculation module computes and outputs the extrinsic information from the input information, the branch metrics from the branch metric calculation module, the backward path metrics from the state update module, and the forward path metrics from the storage module;
The control module performs timing control over the branch metric calculation module, the state update module, the storage module and the extrinsic information calculation module.
As shown in Fig. 6, the address calculation module comprises a data selector MUX and an interleaving-address calculation unit. The first input of the MUX is the original, sequentially incrementing address; the second input is that address after it has been interleaved by the interleaving-address calculation unit. While component decoder 1 of the SISO decoder is running, the address calculation module selects the first input as the output address; while component decoder 2 is running, it selects the second input as the output address.
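The address selection above can be sketched as follows, with the interleaver represented as a lookup table; the names are illustrative, not from the patent:

```python
def output_address(k, active_component, interleaver):
    """Address output by the address calculation module (illustrative).

    k                -- sequentially incrementing original address
    active_component -- 1 while component decoder 1 runs, 2 while decoder 2 runs
    interleaver      -- table mapping an original address to its interleaved one
    """
    return k if active_component == 1 else interleaver[k]
```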
In the SISO decoder, each component decoder uses the Log-MAP algorithm to compute the state quantities γ, α and β, the information-bit log-likelihood ratio LLR, and the extrinsic information Le. The computation of γ, α, β, LLR and Le is split into two successive steps: a forward recursion and a backward recursion. The forward recursion consumes L/M cycles (where L is the code-block length and M is the number of decoding units) to compute γ and α, and stores the α values. The backward recursion then consumes another L/M cycles to compute γ, β, LLR and Le. The forward and backward recursions must be performed one after the other, because the recursions of both α and β require the γ values, and a single pass over γ can serve either the forward or the backward computation, but not both. To further illustrate the technical scheme of the present invention, the overall operation timing diagram of the Turbo decoding device of the embodiment is given below.
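The Log-MAP recursions rest on the Jacobian logarithm max*(a, b) = max(a, b) + ln(1 + e^(-|a-b|)), which computes ln(e^a + e^b) without overflow. A minimal sketch of this operator and of one step of the α (forward) recursion over a generic trellis; the data shapes are assumptions for illustration, not the patent's hardware realization:

```python
import math

def max_star(a, b):
    # Jacobian logarithm: ln(e^a + e^b), numerically stable
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def forward_step(alpha_prev, gamma, prev_states):
    """One step of the alpha recursion (illustrative shapes).

    alpha_prev  -- alpha values of the previous trellis stage, keyed by state
    gamma       -- gamma[(s_prev, s)] branch metric of each valid transition
    prev_states -- prev_states[s] = list of predecessor states of state s
    """
    alpha = {}
    for s, preds in prev_states.items():
        acc = None
        for sp in preds:
            m = alpha_prev[sp] + gamma[(sp, s)]
            acc = m if acc is None else max_star(acc, m)
        alpha[s] = acc
    return alpha
```

The β (backward) recursion has the same shape with successor states in place of predecessors, which is why a second pass over γ is needed.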
Fig. 7 is the overall operation timing diagram of the Turbo decoding device in an embodiment of the present invention. As shown in Fig. 7, take an R-stage pipelined sub-decoder structure as an example, where R is an arbitrary natural number; that is, each decoding unit is composed of a first-stage sub-decoder, a second-stage sub-decoder, ..., and an R-th-stage sub-decoder. All first-stage sub-decoders, which form the first pipeline stage, complete the first It/R iterations of each code block; all second-stage sub-decoders, which form the second pipeline stage, complete the second It/R iterations of each code block; ...; and all R-th-stage sub-decoders, which form the R-th pipeline stage, complete the last It/R iterations of each code block. Here, It is the total number of iterations. Each iteration comprises a component-decoder-1 computation and a component-decoder-2 computation, and each component-decoder computation in turn comprises the forward recursion of γ and α and the backward recursion of γ, β, LLR and Le.
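The split of the It iterations across the R pipeline stages described above can be sketched as a simple schedule; the text assumes It is divisible by R:

```python
def stage_iterations(stage_idx, total_iters, num_stages):
    """Return the iteration indices handled by pipeline stage stage_idx
    (0-based), assuming the total iteration count It divides evenly by the
    number of pipeline stages R, so each stage runs It/R iterations."""
    per_stage = total_iters // num_stages
    start = stage_idx * per_stage
    return list(range(start, start + per_stage))
```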
In summary, the present invention divides the input sequence into multiple code blocks, divides each code block into multiple code sections of equal length, and inputs them respectively to multiple decoding units, which decode their input code sections in parallel according to the Log-MAP algorithm; during decoding, the forward and backward path metrics at the boundaries of the respective code sections are exchanged between adjacent decoding units. Because multiple decoding units decode in parallel, the decoding speed is greatly improved, and the more decoding units there are, the faster the decoding. In addition, the present invention further designs each decoding unit as a multi-stage pipeline, which improves the decoding speed still further: the more pipeline stages, the faster the decoding.
The foregoing describes only preferred embodiments of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A Turbo code decoding method, characterized in that the method comprises:
For each code block in an input sequence, dividing the code block into M code sections and inputting the M code sections to M decoding units respectively; the M decoding units decoding their respective input code sections in parallel according to the log-domain maximum a-posteriori probability (Log-MAP) algorithm and outputting the decoded code sections, M being a natural number greater than 1;
wherein the two decoding units corresponding to any two adjacent code sections are referred to as a first decoding unit and a second decoding unit, the decoding unit corresponding to the earlier of the two adjacent code sections being the first decoding unit and the decoding unit corresponding to the later one being the second decoding unit; during decoding, the first decoding unit transmits the forward path metric of the ending boundary point of its own code section to the second decoding unit, so that the second decoding unit computes the forward path metric of the initial boundary point of its own code section; and the second decoding unit transmits the backward path metric of the initial boundary point of its own code section to the first decoding unit, so that the first decoding unit computes the backward path metric of the ending boundary point of its own code section;
each decoding unit comprises R sub-decoders cascaded in order, R being a natural number;
the M decoding units decoding their respective input code sections in parallel according to the Log-MAP algorithm comprises: for each decoding unit, the R sub-decoders of that decoding unit jointly completing the It iterations required for decoding the input code section according to the Log-MAP algorithm, with each sub-decoder completing It/R of those iterations; wherein It and It/R are natural numbers.
2. The method according to claim 1, characterized in that
the first decoding unit transmitting the forward path metric of the ending boundary point of its own code section to the second decoding unit, so that the second decoding unit computes the forward path metric of the initial boundary point of its own code section, comprises: each sub-decoder in the first decoding unit transmitting the forward path metric of the ending boundary point of its own code section to the corresponding sub-decoder in the second decoding unit, so that the corresponding sub-decoder in the second decoding unit computes the forward path metric of the initial boundary point of its own code section;
the second decoding unit transmitting the backward path metric of the initial boundary point of its own code section to the first decoding unit, so that the first decoding unit computes the backward path metric of the ending boundary point of its own code section, comprises: each sub-decoder in the second decoding unit transmitting the backward path metric of the initial boundary point of its own code section to the corresponding sub-decoder in the first decoding unit, so that the corresponding sub-decoder in the first decoding unit computes the backward path metric of the ending boundary point of its own code section.
3. The method according to claim 1 or 2, characterized in that the M code sections are of equal length;
and/or,
the method further comprises: performing a hard decision on the decoded code sections.
4. A Turbo code decoding device, characterized in that the device comprises M decoding units, M being a natural number greater than 1;
each code block in an input sequence is divided into M code sections and input to the M decoding units respectively;
each decoding unit receives an input code section, decodes it according to the Log-MAP algorithm, and outputs the decoded code section;
wherein the two decoding units corresponding to any two adjacent code sections are referred to as a first decoding unit and a second decoding unit; during decoding: the first decoding unit transmits the forward path metric of the ending boundary point of its own code section to the second decoding unit, so that the second decoding unit computes the forward path metric of the initial boundary point of its own code section; and the second decoding unit transmits the backward path metric of the initial boundary point of its own code section to the first decoding unit, so that the first decoding unit computes the backward path metric of the ending boundary point of its own code section; the decoding unit corresponding to the earlier of the two adjacent code sections is the first decoding unit, and the decoding unit corresponding to the later one is the second decoding unit;
each decoding unit comprises R sub-decoders cascaded in order, namely: a first-stage sub-decoder, a second-stage sub-decoder, ..., an R-th-stage sub-decoder, R being a natural number;
the R sub-decoders jointly complete the It iterations required for decoding the input code section according to the Log-MAP algorithm, with each sub-decoder completing It/R of those iterations; It and It/R are natural numbers.
5. The device according to claim 4, characterized in that
each sub-decoder in the first decoding unit transmits the forward path metric of the ending boundary point of its own code section to the corresponding sub-decoder in the second decoding unit, so that the corresponding sub-decoder in the second decoding unit computes the forward path metric of the initial boundary point of its own code section;
each sub-decoder in the second decoding unit transmits the backward path metric of the initial boundary point of its own code section to the corresponding sub-decoder in the first decoding unit, so that the corresponding sub-decoder in the first decoding unit computes the backward path metric of the ending boundary point of its own code section.
6. The device according to claim 4 or 5, characterized in that the device further comprises a hard-decision unit;
each decoding unit outputs its decoded code section to the hard-decision unit;
the hard-decision unit receives the decoded code sections output by the M decoding units, performs a hard decision on them, and outputs the result.
7. The device according to claim 5, characterized in that each sub-decoder comprises: a data selector, a soft-input soft-output (SISO) decoder and an extrinsic information memory;
the data selector outputs the initial extrinsic information to the SISO decoder for the first iteration, and for each subsequent iteration outputs the extrinsic information obtained from the extrinsic information memory to the SISO decoder; wherein the data selector in the first-stage sub-decoder uses 0 as the initial extrinsic information, and the data selector in every other sub-decoder uses the extrinsic information obtained in the last iteration of the preceding-stage sub-decoder as the initial extrinsic information;
the SISO decoder performs It/R iterations using the data output by the data selector together with the input code section, and sends the extrinsic information obtained in each iteration to the extrinsic information memory;
the extrinsic information memory stores the extrinsic information produced by each iteration of the SISO decoder and provides it to the data selector, so that the data selector can send it to the SISO decoder for the next iteration;
wherein, during decoding: the SISO decoder in each sub-decoder of the first decoding unit transmits the forward path metric of the ending boundary point of its own code section to the SISO decoder in the corresponding sub-decoder of the second decoding unit, so that the latter computes the forward path metric of the initial boundary point of its own code section; and the SISO decoder in each sub-decoder of the second decoding unit transmits the backward path metric of the initial boundary point of its own code section to the SISO decoder in the corresponding sub-decoder of the first decoding unit, so that the latter computes the backward path metric of the ending boundary point of its own code section.
8. The device according to claim 7, characterized in that each sub-decoder further comprises a data-access bus switch;
in each sub-decoder, the SISO decoder reads and writes the extrinsic information memory through the data-access bus switch.
9. The device according to claim 7, characterized in that each SISO decoder comprises an address calculation module and two cascaded component decoders, the two cascaded component decoders being a first component decoder and a second component decoder;
the first component decoder and the second component decoder jointly complete one iteration;
while the first component decoder is running, the address calculation module reads and writes the corresponding extrinsic information memory at sequentially incrementing addresses;
while the second component decoder is running, the address calculation module reads and writes the corresponding extrinsic information memory at the interleaved addresses.
10. The device according to claim 9, characterized in that each component decoder comprises: a branch metric calculation module, a state update module, a storage module, a control module and an extrinsic information calculation module;
the branch metric calculation module computes the branch metrics from the input information and sends them to the state update module and the extrinsic information calculation module;
the state update module computes the forward path metrics from the received branch metrics and sends them to the storage module for saving, and computes the backward path metrics from the received branch metrics and sends them to the extrinsic information calculation module;
the storage module stores the forward path metrics received from the state update module;
the extrinsic information calculation module computes and outputs the extrinsic information from the input information, the branch metrics from the branch metric calculation module, the backward path metrics from the state update module, and the forward path metrics from the storage module;
the control module performs timing control over the branch metric calculation module, the state update module, the storage module and the extrinsic information calculation module.
Publications (2)

Publication Number Publication Date
CN101777924A CN101777924A (en) 2010-07-14
CN101777924B true CN101777924B (en) 2014-02-19




Also Published As

Publication number Publication date
WO2011082509A1 (en) 2011-07-14
CN101777924A (en) 2010-07-14


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20170901

Address after: 100070, No. 188, building 25, No. eighteen, South Fourth Ring Road, Fengtai District, Beijing, 1, 101

Patentee after: Beijing Haiyun Technology Co. Ltd.

Address before: 510663, No. 3, color road, Science City, Guangzhou Development Zone, Guangdong

Patentee before: New Post Communication Equipment Co., Ltd.

TR01 Transfer of patent right