CN105915235A - Intel CPU-based parallel Turbo decoding method - Google Patents

Intel CPU-based parallel Turbo decoding method

Info

Publication number
CN105915235A
Authority
CN
China
Prior art keywords
code block
sequential
decoder
likelihood ratio
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610218721.9A
Other languages
Chinese (zh)
Other versions
CN105915235B (en)
Inventor
王捷
毕明勇
范鹏博
李磊
粟勇
王东明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201610218721.9A priority Critical patent/CN105915235B/en
Publication of CN105915235A publication Critical patent/CN105915235A/en
Application granted granted Critical
Publication of CN105915235B publication Critical patent/CN105915235B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00: Coding, decoding or code conversion, for error detection or error correction; coding theory basic assumptions; coding bounds; error probability evaluation methods; channel models; simulation or testing of codes
    • H03M13/29: combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2957: Turbo codes and decoding

Abstract

The invention discloses an Intel CPU-based parallel Turbo decoding method comprising the following steps: (1) Turbo decoding is accelerated with single-instruction multiple-data (SIMD) instructions; within a SIMD register, 128 bits are allocated to each code block, the number of parallel code blocks is determined by the instruction bit width the CPU supports, and the code is written so that the operations inside each code block are identical to single-code-block decoding. (2) Within the SIMD instructions, the forward state metric α and the backward state metric β are computed simultaneously in log-likelihood-ratio form, producing two intermediate vectors, one for the positive state and one for the negative state. At time k, the index of α is k and the index of β is N-1-k, where N is the code length. Once k reaches or exceeds half of N, the α and β of time N-1-k are loaded into a vector with their positions swapped; this vector is combined with the two intermediate vectors of time k, yielding the output log-likelihood-ratio information.

Description

A parallel Turbo decoding method based on Intel CPU
Technical field
The invention belongs to the field of data transmission and relates to a Turbo decoding method for the physical layer of a communication system.
Background technology
In communication systems, Turbo coding and decoding are widely used to bring the channel capacity close to the Shannon limit. Turbo encoding is relatively simple, while decoding is considerably more complex. To meet the high data rates that communication systems require, the throughput of Turbo decoding must be raised as far as possible.
In the past, Turbo decoding for mobile communication was usually performed on dedicated DSP platforms. With the development of the single-instruction multiple-data (SIMD) instruction sets of Intel CPUs, their computing power has grown steadily, and Intel CPU-based Turbo decoders have appeared and kept improving. However, previous Intel CPU-based Turbo decoders did not achieve parallel decoding of multiple code blocks under SIMD instructions.
Summary of the invention
Technical problem: the present invention provides a method that uses single-instruction multiple-data (SIMD) instructions to decode multiple Turbo code blocks in parallel.
Technical scheme: the Intel CPU-based Turbo decoding method of the present invention comprises the following steps:
(1) Within the SIMD instructions, 128 bits are allocated to each code block, and the number of parallel code blocks is set to the instruction bit width supported by the CPU divided by 128; the iteration counter is initialized to index = 0.
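The block-count rule in step (1) is simple enough to state as code. A minimal sketch, assuming the helper name `parallel_block_count` (not from the patent):

```python
def parallel_block_count(simd_width_bits: int) -> int:
    """Number of code blocks decoded in parallel (step 1): the SIMD
    instruction bit width supported by the CPU divided by the 128 bits
    reserved per code block."""
    if simd_width_bits % 128 != 0:
        raise ValueError("instruction width must be a multiple of 128 bits")
    return simd_width_bits // 128
```

So a 256-bit AVX2 register carries two code blocks and a 512-bit AVX512 register carries four, matching the dependent claims below.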
(2) Each code block is de-punctured separately; each code block yields two streams of systematic information, L_sys0 and L_sys1, and two streams of parity information, L_p0 and L_p1. The systematic information L_sys1 is obtained by interleaving L_sys0; L_sys0 and L_p0 are processed by component decoder 0, and L_sys1 and L_p1 by component decoder 1.
(3) For each time step of each code block, the branch metrics of component decoder 0 are computed in log-likelihood-ratio form: γ11 = 0.5(L_sys + L_p + L_a) and γ10 = 0.5(L_sys - L_p + L_a), where L_a is the a-priori information, L_sys stands for either L_sys0 or L_sys1, and L_p stands for either L_p0 or L_p1.
(4) In component decoder 0, at time k the vector θ_{k+1} is built as follows: according to the convolutional-code generator matrix of each code block, the ordering of the branch metrics γ11^k and γ10^k is listed, and the vector γ_k is constructed from γ11^k and γ10^k in that order.
SIMD instructions then perform the vector operations θ_{k+} = θ_k + γ_k and θ_{k-} = θ_k - γ_k; the internal data of θ_{k+} and θ_{k-} are rearranged according to the state-position changes given by the generator matrix; the element-wise maximum of θ_{k+} and θ_{k-} is taken as the vector θ_{k+1} of the next time step, which is then normalized.
Here θ_k is produced by the previous time step. θ_k contains the eight forward state metrics α_j^k and the eight backward state metrics β_j^m of a parallel code block, where j is the state index and N is the code length. At k = 0 the contents of θ_0 are α_0^0 = -127, β_0^{N-1} = -127, α_{j≠0}^0 = 0, β_{j≠0}^{N-1} = 0. θ_{k+} and θ_{k-} are the intermediate vectors for the positive and negative state respectively, and m = N - k - 1.
(5) In component decoder 0, while the time index k is less than half of N, set k = k + 1 and return to step (4); otherwise load the vector θ_m, swap the positions of its internal α_j^m and β_j^k, and compute L_{k+} and L_{k-} according to:
L_{k+} = θ_m + θ_{k+},  L_{k-} = θ_m + θ_{k-},
where L_{k+} and L_{k-} are the intermediate output log-likelihood-ratio vectors for the positive and negative state; inside each vector they hold, for the parallel code blocks, the eight intermediate output-LLR values of time k and the eight intermediate output-LLR values of time m.
From L_{k+}, take the maxima l_{k+} and l_{m+} of the intermediate output-LLR values of times k and m; from L_{k-}, take the corresponding maxima l_{k-} and l_{m-}. Then l_{k+} - l_{k-} is the output log-likelihood-ratio information of time k, and l_{m+} - l_{m-} that of time m. If k = N - 1, go to step (6); otherwise set k = k + 1 and return to step (4).
(6) Component decoding is carried out in component decoder 1 following the same procedure as steps (3) to (5) for component decoder 0, yielding the output log-likelihood-ratio information; the extrinsic information is then obtained as L_e = L_o - L_sys - L_a. When the current decoder is decoder 0, L_e0 is interleaved to give the a-priori information L_a1 of decoder 1; when the current decoder is decoder 1, L_e1 is deinterleaved to give the a-priori information L_a0 of decoder 0. Here L_e0 and L_a0 are the extrinsic and a-priori information of decoder 0, L_e1 and L_a1 those of decoder 1, L_e stands for either L_e0 or L_e1, and L_a for either L_a0 or L_a1.
(7) The iteration counter index is incremented by one; if index < 6, return to step (3); when index = 6, decoding ends.
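The control flow of steps (6) and (7) — six iterations alternating the two component decoders, exchanging extrinsic information through the interleaver — can be outlined in scalar form. All names are hypothetical and the component decoders are passed in as stubs; this is a sketch of the loop, not the patent's SIMD implementation:

```python
def turbo_iterations(decode_component, interleave, deinterleave, n_iter=6):
    """Outline of steps (6)-(7): each iteration runs component decoder 0,
    interleaves its extrinsic output Le = Lo - Lsys - La into the a-priori
    input of decoder 1, runs decoder 1, and deinterleaves back."""
    la0 = la1 = None                       # a-priori information, initially empty
    for index in range(n_iter):            # step (7): index counts up to 6
        le0 = decode_component(0, la0)     # component decoder 0
        la1 = interleave(le0)              # interleaved extrinsic -> decoder 1 a-priori
        le1 = decode_component(1, la1)     # component decoder 1
        la0 = deinterleave(le1)            # deinterleaved extrinsic -> decoder 0 a-priori
    return la0

# Placeholder components (not the patent's decoders) just to exercise the flow:
stub = lambda which, la: [0.0] if la is None else [x + 1.0 for x in la]
identity = lambda seq: seq
result = turbo_iterations(stub, identity, identity)
```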
Further, in the method of the invention, in step (1) the CPU instruction-set bit width is 256, the number of parallel code blocks is set to 2, and the operations in steps (3) to (6) are carried out with the AVX2 instruction set.
Further, in the method of the invention, in step (1) the CPU instruction-set bit width is 512, the number of parallel code blocks is set to 4, and the operations in steps (3) to (6) are carried out with the AVX512 instruction set.
Further, in the method of the invention, steps (4) and (5) use SIMD instructions to compute the forward and backward state metrics in parallel.
Further, in the method of the invention, the normalization in step (4) is: the eight states α^{k+1} and β^{m-1} inside θ_{k+1} each subtract the state-0 value of their own metric set, α_0^{k+1} and β_0^{m-1} respectively.
Compared with existing single-code-block decoding methods, the present method processes the data of several code blocks within the same instruction cycle, improving efficiency by a factor equal to the number of parallel code blocks. Compared with methods that compute α forward and β backward in separate passes, the present method completes both the forward and the backward computation within the same instruction cycle, doubling efficiency.
Beneficial effects: compared with the prior art, the present invention has the following advantages.
Among current Turbo decoder implementations, the srsLTE project on GitHub is relatively mature and performs well. That scheme uses the AVX instruction set, 128-bit advanced vector registers, 16-bit fixed point, single-code-block decoding, and sequential computation of α and β. Compared with that scheme, the present invention has the following characteristics:
(1) When the CPU supports AVX2, AVX512, or a newer SIMD instruction set, the invention increases the number of parallel code blocks accordingly and uses the corresponding instructions during decoding, fully exploiting the bit width the instruction set supports. srsLTE has no method for AVX2 or newer instruction sets and cannot fully exploit the processor. Since the instruction cycles of AVX, AVX2, and AVX512 are the same while the amount of data processed doubles at each step, parallel decoding with AVX2 or AVX512 takes the same time as single-code-block decoding with AVX. When the instruction set is AVX2, the vector-operation throughput of this method is twice that of srsLTE; when it is AVX512, four times.
(2) The parallel computation of α and β in the present invention represents fixed-point numbers with 8 bits, so eight α states and eight β states fit in 128 bits, whereas srsLTE represents fixed-point numbers with 16 bits and can place only eight 16-bit values in 128 bits. Within the same instruction cycle, this method therefore computes α and β in parallel, while the method in srsLTE can only compute them serially. Moreover, when computing the output log-likelihood ratios, the parallel arrangement of α and β yields two output-LLR values per computation, where srsLTE yields only one. Using 8-bit instead of 16-bit fixed point costs less than 0.3 dB in bit-error-rate performance.
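The packing argument in (2) — eight 8-bit α values and eight 8-bit β values sharing one 128-bit lane — can be illustrated with the standard library's `struct` module as a byte-level model of the register layout (placeholder values, not actual SIMD intrinsics):

```python
import struct

# One 128-bit lane modeled as 16 signed bytes of 8-bit fixed point: eight
# backward metrics beta and eight forward metrics alpha, as in the theta
# vector described in the embodiment. Values are arbitrary placeholders.
beta = [17, 16, 15, 14, 13, 12, 11, 10]   # beta_7 .. beta_0 at time m
alpha = [7, 6, 5, 4, 3, 2, 1, 0]          # alpha_7 .. alpha_0 at time k
lane = struct.pack("16b", *(beta + alpha))
assert len(lane) == 16                    # 16 bytes = one 128-bit SIMD lane
# With 16-bit fixed point (as in srsLTE) only eight values would fit in the
# lane, so the alpha and beta recursions could not share one register.
```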
Brief description of the drawings
Fig. 1 is the parallel Turbo decoding structure diagram.
Fig. 2 shows the state-position changes corresponding to the generator matrix [1 0 1 1; 1 1 0 1].
Fig. 3 is the flow chart of the α, β vector computation.
Fig. 4 is the flow chart of the output log-likelihood-ratio vector computation.
Detailed description of the invention
The present invention is further illustrated below in conjunction with the embodiments and the accompanying drawings.
Embodiment 1: a parallel Turbo decoding method based on Intel CPU. In this embodiment the hardware platform is an Intel Core i7-4790 CPU, which supports the AVX2 instruction set, has sixteen 256-bit advanced vector registers, and runs at 3.6 GHz. The Turbo code generator matrix is [1 0 1 1; 1 1 0 1], and the data are represented in 8-bit fixed point. During two-block parallel decoding, code block 1 occupies the low 128 bits and code block 2 the high 128 bits, with no interaction between the blocks, as shown in Fig. 1. The information carried by the Turbo code is the systematic information L_sys0 and the parity information L_p0 and L_p1; L_sys1 is obtained by interleaving L_sys0. The decoding process of one code block is described below:
(1) The code block is de-punctured, yielding two streams of systematic information, L_sys0 and L_sys1, and two streams of parity information, L_p0 and L_p1. The systematic information L_sys1 is obtained by interleaving L_sys0; L_sys0 and L_p0 are processed by component decoder 0, and L_sys1 and L_p1 by component decoder 1.
(3) For each time step of each code block, the branch metrics γ11 and γ10 of component decoder 0 are computed in log-likelihood-ratio form.
According to the branch-metric formula:
γ(s_i^k, s_j^{k+1}) = 0.5(L_sys^k + L_a^k)·u_k + 0.5·L_p^k·p_k
where L_sys, L_p, L_a, and L_e are the systematic, parity, a-priori, and extrinsic information respectively; u_k and p_k are the systematic and parity symbols; and s_i^k denotes state i at step k.
This gives γ11 = 0.5(L_sys + L_p + L_a) and γ10 = 0.5(L_sys - L_p + L_a), where L_a is the a-priori information, L_sys stands for either L_sys0 or L_sys1, and L_p for either L_p0 or L_p1.
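Substituting (u_k, p_k) = (+1, +1) and (+1, -1) into the general branch-metric formula reproduces γ11 and γ10. A quick numeric check in scalar form (function name and values are illustrative only):

```python
def gamma(l_sys, l_p, l_a, u, p):
    """General branch metric: 0.5*(l_sys + l_a)*u + 0.5*l_p*p."""
    return 0.5 * (l_sys + l_a) * u + 0.5 * l_p * p

l_sys, l_p, l_a = 1.2, -0.4, 0.6
gamma11 = gamma(l_sys, l_p, l_a, +1, +1)   # reduces to 0.5*(l_sys + l_p + l_a)
gamma10 = gamma(l_sys, l_p, l_a, +1, -1)   # reduces to 0.5*(l_sys - l_p + l_a)
assert abs(gamma11 - 0.5 * (l_sys + l_p + l_a)) < 1e-12
assert abs(gamma10 - 0.5 * (l_sys - l_p + l_a)) < 1e-12
```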
(4) In component decoder 0, at time k the vector θ_{k+1} is built: according to the convolutional-code generator matrix of the code block, the ordering of the branch metrics γ11^k and γ10^k is listed, as shown in Fig. 2, and the vector γ_k is constructed from γ11^k and γ10^k in that order. The fixed-point arrangement inside the vector θ_k is β_7^m, β_6^m, β_5^m, β_4^m, β_3^m, β_2^m, β_1^m, β_0^m, α_7^k, α_6^k, α_5^k, α_4^k, α_3^k, α_2^k, α_1^k, α_0^k; accordingly, in this example γ is arranged from high to low position as γ11^m, γ10^m, γ10^m, γ11^m, γ11^m, γ10^m, γ10^m, γ11^m, γ11^k, γ11^k, γ10^k, γ10^k, γ10^k, γ10^k, γ11^k, γ11^k. Here m is the reverse time index of k, m = N - k - 1, and N is the code length.
The forward state metric α is computed according to:
α_j^{k+1} = max*_{i∈F} { α_i^k + γ(s_i^k, s_j^{k+1}) }
where the set F contains the two states at step k connected to s_j^{k+1}, and α^0 = {0, -∞, -∞, -∞, -∞, -∞, -∞, -∞};
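The forward recursion above, in scalar max-log form (a sketch; the trellis connectivity is passed in as a list of transitions rather than derived from the generator matrix, and all names are hypothetical):

```python
NEG_INF = float("-inf")

def forward_step(alpha_k, transitions):
    """One max-log forward update: alpha_{k+1}[j] is the maximum of
    alpha_k[i] + gamma over all incoming transitions (i -> j, metric g)."""
    alpha_next = [NEG_INF] * len(alpha_k)
    for i, j, g in transitions:            # (from-state, to-state, branch metric)
        alpha_next[j] = max(alpha_next[j], alpha_k[i] + g)
    return alpha_next
```

Starting from α^0 = {0, -∞, ..., -∞} as stated in the text, repeated calls walk the trellis forward one time step at a time.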
The backward state metric β is computed according to:
β_j^{k-1} = max*_{i∈B} { β_i^k + γ(s_j^{k-1}, s_i^k) }
where the set B contains the states at step k connected to s_j^{k-1}, and β^{N-1} = {0, -∞, -∞, -∞, -∞, -∞, -∞, -∞};
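The backward recursion is the mirror image of the forward one; a scalar max-log sketch under the same assumptions (explicit transition list, hypothetical names):

```python
NEG_INF = float("-inf")

def backward_step(beta_k, transitions):
    """One max-log backward update: beta_{k-1}[j] is the maximum of
    beta_k[i] + gamma over all outgoing transitions (j -> i, metric g)."""
    beta_prev = [NEG_INF] * len(beta_k)
    for j, i, g in transitions:            # (state at k-1, state at k, branch metric)
        beta_prev[j] = max(beta_prev[j], beta_k[i] + g)
    return beta_prev
```

Starting from β^{N-1} = {0, -∞, ..., -∞}, repeated calls walk the trellis backward; the invention's point is that this and the forward step run in the same SIMD instruction.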
SIMD instructions perform the vector operations, as shown in Fig. 3: θ_{k+} = θ_k + γ_k and θ_{k-} = θ_k - γ_k; the internal data of θ_{k+} and θ_{k-} are then rearranged according to the state-position changes given by the generator matrix, and the element-wise maximum of θ_{k+} and θ_{k-} is taken as the vector θ_{k+1} of the next time step. Because of the limits of 8-bit fixed point, θ_{k+1} must be normalized after it is obtained: the eight internal states α^{k+1} and β^{m-1} each subtract the state-0 value of their own metric set, α_0^{k+1} and β_0^{m-1}. After normalization the internal arrangement is β_7^{m-1}-β_0^{m-1}, β_6^{m-1}-β_0^{m-1}, β_5^{m-1}-β_0^{m-1}, β_4^{m-1}-β_0^{m-1}, β_3^{m-1}-β_0^{m-1}, β_2^{m-1}-β_0^{m-1}, β_1^{m-1}-β_0^{m-1}, 0, α_7^{k+1}-α_0^{k+1}, α_6^{k+1}-α_0^{k+1}, α_5^{k+1}-α_0^{k+1}, α_4^{k+1}-α_0^{k+1}, α_3^{k+1}-α_0^{k+1}, α_2^{k+1}-α_0^{k+1}, α_1^{k+1}-α_0^{k+1}, 0.
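The normalization described here — subtracting each metric set's state-0 value so state 0 becomes 0 and the 8-bit lane never saturates — in scalar form (a sketch, name hypothetical):

```python
def normalize(metrics):
    """Step-(4) normalization for 8-bit fixed point: subtract the state-0
    metric from every state; state 0 becomes 0, the rest stay relative."""
    ref = metrics[0]
    return [m - ref for m in metrics]

# Example: only the differences between state metrics matter to max-log-MAP,
# so shifting all of them by the same constant does not change the decision.
assert normalize([5, 7, 3]) == [0, 2, -2]
```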
Here, at k = 0, the contents of θ_0 are α_0^0 = -127, β_0^{N-1} = -127, α_{j≠0}^0 = 0, β_{j≠0}^{N-1} = 0. θ_{k+} and θ_{k-} are the intermediate vectors for the positive and negative state; θ_k is produced by the previous time step and contains the eight forward state metrics α_j^k and the eight backward state metrics β_j^m of a parallel code block, with j the state index.
In the vector operations of this step, each code block occupies 128 bits inside the vector; the low 64 bits compute the forward state metric α and the high 64 bits the backward state metric β, with no data exchanged between the two halves.
(5) In component decoder 0, while the time index k is less than half of N, set k = k + 1 and return to step (4); otherwise compute the output log-likelihood ratio L_o according to:
L_o^k = max*_{(s^k, s^{k+1})∈U_1} { α_i^k + β_j^{k+1} + γ(s_i^k, s_j^{k+1}) } - max*_{(s^k, s^{k+1})∈U_{-1}} { α_i^k + β_j^{k+1} + γ(s_i^k, s_j^{k+1}) }
where the sets U_1 and U_{-1} contain the state transitions for which u_k is 1 and -1 respectively.
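The output-LLR formula above is a difference of two max-log terms, one over the u = +1 transitions and one over the u = -1 transitions. A scalar sketch under the same assumptions as before (explicit transition list, hypothetical names):

```python
def output_llr(alpha_k, beta_k1, transitions):
    """Max-log output LLR: max of alpha + gamma + beta over transitions
    with u = +1, minus the same max over transitions with u = -1.
    Each transition is (from-state i, to-state j, branch metric g, u)."""
    best = {+1: float("-inf"), -1: float("-inf")}
    for i, j, g, u in transitions:
        best[u] = max(best[u], alpha_k[i] + g + beta_k1[j])
    return best[+1] - best[-1]
```

A positive result favors the bit decision u = +1, a negative one u = -1; the magnitude is the soft reliability passed on as L_o.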
As shown in Fig. 4, the vector θ_m is loaded, the positions of its internal α_j^m and β_j^k are swapped, and L_{k+} and L_{k-} are computed according to:
L_{k+} = θ_m + θ_{k+},  L_{k-} = θ_m + θ_{k-},  where L_{k+} and L_{k-} are the intermediate output log-likelihood-ratio vectors for the positive and negative state; inside each vector they hold, for the parallel code blocks, the eight intermediate output-LLR values of time k and the eight intermediate output-LLR values of time m.
From L_{k+}, take the maxima l_{k+} and l_{m+} of the intermediate output-LLR values of times k and m; from L_{k-}, take the corresponding maxima l_{k-} and l_{m-}. Then l_{k+} - l_{k-} is the output log-likelihood-ratio information of time k, and l_{m+} - l_{m-} that of time m. For clarity, the figure writes l_{k+} and l_{k-} as l, and l_{m+} and l_{m-} as x, with the states labeled on the right of the block diagram. If k = N - 1, proceed to the next step; otherwise set k = k + 1 and return to step (4).
In the vector operations of this step, each code block occupies 128 bits inside the vector; the low 64 bits compute the output LLR of the current time k and the high 64 bits that of time N - k - 1. Apart from the exchange of the high and low 64-bit halves when the positions of α_j^{N-k-1} and β_j^k inside the vector θ_{N-k-1} are swapped, no data are exchanged between the two halves in the remaining vector operations.
(6) Component decoding is carried out in component decoder 1 following the same procedure as steps (3) to (5) for component decoder 0, yielding the output log-likelihood-ratio information; the extrinsic information is then obtained as L_e = L_o - L_sys - L_a. When the current decoder is decoder 0, L_e0 is interleaved to give the a-priori information L_a1 of decoder 1; when the current decoder is decoder 1, L_e1 is deinterleaved to give the a-priori information L_a0 of decoder 0. Here L_e0 and L_a0 are the extrinsic and a-priori information of decoder 0, L_e1 and L_a1 those of decoder 1, L_e stands for either L_e0 or L_e1, and L_a for either L_a0 or L_a1.
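The extrinsic-information update L_e = L_o - L_sys - L_a of step (6) is element-wise over the block; a minimal scalar sketch (name hypothetical):

```python
def extrinsic(l_out, l_sys, l_a):
    """Step-(6) extrinsic information passed between component decoders:
    Le = Lo - Lsys - La, element-wise over the code block."""
    return [o - s - a for o, s, a in zip(l_out, l_sys, l_a)]
```

Subtracting L_sys and L_a ensures only the information newly generated by this component decoder is handed to the other one, which is what makes the iterations converge rather than reinforce themselves.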
(7) The iteration counter index is incremented by one; if index < 6, return to step (3); when index = 6, decoding ends.
The main advantages of this Intel CPU-based parallel Turbo decoding method are high throughput with limited loss in bit-error-rate performance. The results of embodiment 1 are shown in Table 1; the control group is the single-code-block Turbo decoding scheme in srsLTE that computes α and β sequentially. The reference for the bit-error-rate loss is Turbo decoding with double-precision floating point in MATLAB.
Table 1. Performance comparison of single-code-block and two-code-block parallel Turbo decoding
The above embodiment is only a preferred embodiment of the present invention. It should be pointed out that those of ordinary skill in the art can make several improvements and equivalent substitutions without departing from the principles of the invention, and the technical schemes obtained by such improvements and equivalent substitutions of the claims of the present invention all fall within its scope of protection.

Claims (5)

1. An Intel CPU-based parallel Turbo decoding method, characterized in that the method comprises the following steps:
(1) Within the single-instruction multiple-data (SIMD) instructions, 128 bits are allocated to each code block, and the number of parallel code blocks is set to the instruction bit width supported by the CPU divided by 128; the iteration counter is initialized to index = 0.
(2) Each code block is de-punctured separately; each code block yields two streams of systematic information, L_sys0 and L_sys1, and two streams of parity information, L_p0 and L_p1; the systematic information L_sys1 is obtained by interleaving L_sys0; L_sys0 and L_p0 are processed by component decoder 0, and L_sys1 and L_p1 by component decoder 1.
(3) For each time step of each code block, the branch metrics γ11 and γ10 of component decoder 0 are computed in log-likelihood-ratio form: γ11 = 0.5(L_sys + L_p + L_a), γ10 = 0.5(L_sys - L_p + L_a), where L_a is the a-priori information, L_sys stands for either L_sys0 or L_sys1, and L_p for either L_p0 or L_p1.
(4) In component decoder 0, at time k the vector θ_{k+1} is built: according to the convolutional-code generator matrix of each code block, the ordering of the branch metrics γ11^k and γ10^k is listed, and the vector γ_k is constructed from γ11^k and γ10^k in that order.
SIMD instructions perform the vector operations θ_{k+} = θ_k + γ_k and θ_{k-} = θ_k - γ_k; the internal data of θ_{k+} and θ_{k-} are then rearranged according to the state-position changes given by the generator matrix; the element-wise maximum of θ_{k+} and θ_{k-} is taken as the vector θ_{k+1} of the next time step, which is then normalized.
Here θ_k is produced by the previous time step and contains the eight forward state metrics α_j^k and the eight backward state metrics β_j^m of a parallel code block, where j is the state index and N is the code length; at k = 0 the contents of θ_0 are α_0^0 = -127, β_0^{N-1} = -127, α_{j≠0}^0 = 0, β_{j≠0}^{N-1} = 0; θ_{k+} and θ_{k-} are the intermediate vectors for the positive and negative state, and m = N - k - 1.
(5) In component decoder 0, while the time index k is less than half of N, set k = k + 1 and return to step (4); otherwise load the vector θ_m, swap the positions of its internal α_j^m and β_j^k, and compute L_{k+} and L_{k-} according to:
L_{k+} = θ_m + θ_{k+},  L_{k-} = θ_m + θ_{k-},
where L_{k+} and L_{k-} are the intermediate output log-likelihood-ratio vectors for the positive and negative state; inside each vector they hold, for the parallel code blocks, the eight intermediate output-LLR values of time k and the eight intermediate output-LLR values of time m.
From L_{k+}, take the maxima l_{k+} and l_{m+} of the intermediate output-LLR values of times k and m; from L_{k-}, take the corresponding maxima l_{k-} and l_{m-}; then l_{k+} - l_{k-} is the output log-likelihood-ratio information of time k, and l_{m+} - l_{m-} that of time m. If k = N - 1, go to step (6); otherwise set k = k + 1 and return to step (4).
(6) Component decoding is carried out in component decoder 1 following the procedure of steps (3) to (5) for component decoder 0, yielding the output log-likelihood-ratio information; the extrinsic information is then obtained as L_e = L_o - L_sys - L_a. When the current decoder is decoder 0, L_e0 is interleaved to give the a-priori information L_a1 of decoder 1; when the current decoder is decoder 1, L_e1 is deinterleaved to give the a-priori information L_a0 of decoder 0, where L_e0 and L_a0 are the extrinsic and a-priori information of decoder 0, L_e1 and L_a1 those of decoder 1, L_e stands for either L_e0 or L_e1, and L_a for either L_a0 or L_a1.
(7) The iteration counter index is incremented by one; if index < 6, return to step (3); when index = 6, decoding ends.
2. The Intel CPU-based parallel Turbo decoding method according to claim 1, characterized in that in step (1) the CPU instruction-set bit width is 256, the number of parallel code blocks is set to 2, and the operations in steps (3) to (6) are carried out with the AVX2 instruction set.
3. The Intel CPU-based parallel Turbo decoding method according to claim 1, characterized in that in step (1) the CPU instruction-set bit width is 512, the number of parallel code blocks is set to 4, and the operations in steps (3) to (6) are carried out with the AVX512 instruction set.
4. The Intel CPU-based parallel Turbo decoding method according to claim 1, 2, or 3, characterized in that steps (4) and (5) use single-instruction multiple-data instructions to compute the forward and backward state metrics in parallel.
5. The Intel CPU-based parallel Turbo decoding method according to claim 1, 2, or 3, characterized in that the normalization in step (4) is: the eight states α^{k+1} and β^{m-1} inside θ_{k+1} each subtract the state-0 value of their own metric set, α_0^{k+1} and β_0^{m-1}.
CN201610218721.9A 2016-04-08 2016-04-08 A kind of parallel Turbo decoding method based on Intel CPU Active CN105915235B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610218721.9A CN105915235B (en) 2016-04-08 2016-04-08 A kind of parallel Turbo decoding method based on Intel CPU


Publications (2)

Publication Number Publication Date
CN105915235A true CN105915235A (en) 2016-08-31
CN105915235B CN105915235B (en) 2019-03-05

Family

ID=56745755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610218721.9A Active CN105915235B (en) 2016-04-08 2016-04-08 A kind of parallel Turbo decoding method based on Intel CPU

Country Status (1)

Country Link
CN (1) CN105915235B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090089556A1 (en) * 2002-12-18 2009-04-02 Texas Instruments Incorporated High-Speed Add-Compare-Select (ACS) Circuit
US20090172502A1 (en) * 2007-12-31 2009-07-02 Industrial Technology Research Institute Method and apparatus for turbo code decoding
CN101777924A (en) * 2010-01-11 2010-07-14 新邮通信设备有限公司 Method and device for decoding Turbo codes
CN102064838A (en) * 2010-12-07 2011-05-18 西安电子科技大学 Novel conflict-free interleaver-based low delay parallel Turbo decoding method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHRISTOPH STUDER et al.: "Design and Implementation of a Parallel Turbo-Decoder ASIC for 3GPP-LTE", IEEE Journal of Solid-State Circuits *
Shen Wei et al.: "A low-complexity Turbo code decoder for cdma2000", Wireless Communication Technology *

Also Published As

Publication number Publication date
CN105915235B (en) 2019-03-05

Similar Documents

Publication Publication Date Title
CN101777924B (en) Method and device for decoding Turbo codes
CN1328851C (en) Error correction code decoding method and programme and device thereof
CN101388674B (en) Decoding method, decoder and Turbo code decoder
CN104092470B (en) A kind of Turbo code code translator and method
CN103905067B (en) More weighted current D/A decoder implementation methods and device
CN110999095A (en) Block-wise parallel frozen bit generation for polar codes
CN109428607A (en) Interpretation method, decoder and the decoding equipment of polarization code
CN103986557B (en) The parallel block-wise decoding method of LTE Turbo codes in low path delay
CN103152057B (en) A kind of ldpc decoder and interpretation method based on double normalization modifying factors
CN106027200A (en) Convolutional code high-speed parallel decoding method and decoder based on GPU
CN103957016B (en) Turbo code encoder with low storage capacity and design method of Turbo code encoder
Halim et al. Software-based turbo decoder implementation on low power multi-processor system-on-chip for Internet of Things
CN105915235B (en) A kind of parallel Turbo decoding method based on Intel CPU
CN101662293A (en) Method and device for decoding
CN106301394A (en) A kind of parallel Turbo decoding method based on Intel CPU
CN103595424A (en) Component decoding method, decoder, Turbo decoding method and Turbo decoding device
US8775914B2 (en) Radix-4 viterbi forward error correction decoding
CN103475380A (en) Parallel Turbo decoding method for image processor
CN101882933B (en) Method for Turbo decoding in LTE (Long Term Evolution) and Turbo decoder
JP2010130271A (en) Decoder and decoding method
CN108809485A (en) A kind of method and apparatus of coding
CN102832951B (en) Realizing method for LDPC (Low Density Parity Check) coding formula based on probability calculation
WO2019137231A1 (en) Decoding method and device
CN103905066B (en) Turbo code code translator and method
Mandwale et al. Implementation of High Speed Viterbi Decoder using FPGA

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant