CN106487392A - Down-sampled decoding method and device - Google Patents

Down-sampled decoding method and device

Info

Publication number
CN106487392A
Authority
CN
China
Prior art keywords
path
sampled
contended
maximum likelihood
information
Prior art date
Legal status
Granted
Application number
CN201510523515.4A
Other languages
Chinese (zh)
Other versions
CN106487392B (en)
Inventor
肖强
黄勤
王祖林
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201510523515.4A priority Critical patent/CN106487392B/en
Priority to PCT/CN2016/095699 priority patent/WO2017032255A1/en
Publication of CN106487392A publication Critical patent/CN106487392A/en
Application granted granted Critical
Publication of CN106487392B publication Critical patent/CN106487392B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37 Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39 Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/41 Sequence estimation using the Viterbi algorithm or Viterbi processors
    • H03M13/4107 Viterbi decoding implementing add, compare, select [ACS] operations
    • H03M13/4138 Soft-output Viterbi algorithm based decoding, i.e. Viterbi decoding with weighted decisions
    • H03M13/4146 Soft-output Viterbi decoding according to Battail and Hagenauer in which the soft-output is determined using path metric differences along the maximum-likelihood path, i.e. "SOVA" decoding
    • H03M13/65 Purpose and implementation aspects
    • H03M13/6577 Representation or format of variables, register sizes or word-lengths and quantization
    • H03M13/658 Scaling by multiplication or division


Abstract

The present invention relates to a down-sampled decoding method, comprising step S1: inputting a received sequence and/or a priori information, generating a trellis, searching the trellis for the maximum likelihood path, obtaining the decision values of the maximum likelihood path, marking the competitor paths of the maximum likelihood path, and computing the metric differences between the maximum likelihood path and its competitor paths; and step S2: from all competitor paths of the maximum likelihood path, selecting a subset of competitor paths, computing the log-likelihood ratios, and obtaining the decoding result. The down-sampled decoding method provided by the present invention reduces computational complexity, provides high-quality LLRs, and improves decoding efficiency.

Description

Down-sampled decoding method and device
Technical field
The present invention relates to the field of communication technology, and in particular to a down-sampled decoding method and device.
Background technology
The Viterbi algorithm (VA) is an optimal decoding method for convolutional codes. It uses dynamic programming to search for the maximum likelihood path on the trellis of the convolutional code. At each time step, VA compares the metrics of the paths converging at each state node and selects the path with the largest path metric as the survivor path. Each survivor path carries the corresponding bit decision information, called the path decision bits. The final survivor path is the maximum likelihood (ML) path, and VA outputs the path decision bits of the ML path. The computational complexity of VA is therefore O(S·L), where S = 2^m is the number of states of the convolutional code, m is the register length, and L is the code length. In fact, the decoding complexity of a convolutional code need not grow exponentially with the register length, especially at high signal-to-noise ratio. Before VA, sequential decoding was widely used for convolutional codes. Sequential decoders first search the branches most likely to belong to the maximum likelihood path, but most of them cannot guarantee that the path found is the ML path.
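As background, the add-compare-select recursion of VA over S = 2^m states can be sketched as follows. This is a minimal hard-decision decoder for the common rate-1/2 (7,5) convolutional code; the code choice and all helper names are illustrative assumptions, not taken from the text.

```python
# Minimal hard-decision Viterbi decoder for the rate-1/2 (7,5) convolutional
# code (memory m = 2, so S = 2^m = 4 states).  Illustrative sketch only.

G = [0b111, 0b101]  # generator polynomials 7 and 5 (octal)
M = 2               # register length (memory)

def encode(bits):
    state = 0
    out = []
    for b in bits:
        reg = (b << M) | state  # current input bit plus m state bits
        out.extend((bin(reg & g).count("1") & 1) for g in G)
        state = reg >> 1
    return out

def viterbi(received, n_bits):
    S = 1 << M
    INF = float("inf")
    metric = [0] + [INF] * (S - 1)      # start in the all-zero state
    paths = [[] for _ in range(S)]
    for t in range(n_bits):
        r = received[2 * t:2 * t + 2]
        new_metric = [INF] * S
        new_paths = [None] * S
        for s in range(S):               # add-compare-select per state
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << M) | s
                branch = [(bin(reg & g).count("1") & 1) for g in G]
                cost = metric[s] + sum(x != y for x, y in zip(branch, r))
                ns = reg >> 1
                if cost < new_metric[ns]:   # keep the survivor path
                    new_metric[ns] = cost
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(S), key=lambda s: metric[s])
    return paths[best]                   # path decision bits of the ML path
```

A noiseless round trip (encode then decode) recovers the information bits, illustrating that the survivor with the best metric is the ML path.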
Feldman et al. improved sequential decoding by normalizing the branch metrics and introducing a priority queue (PQ). They proposed the lazy Viterbi algorithm (Lazy VA), which guarantees that the search result is the ML path. The steps of the method are as follows:
1. Using the channel received sequence and the trellis of the encoder, compute all possible branch metrics at every time step, and normalize the branch metrics of each time step to non-positive numbers.
2. Insert the start node of the trellis into the PQ.
3. Pop the head node of the PQ as the current node.
4. If a node with the same time index and the same state value as the current node has already been popped, ignore the current node and return to step 3.
5. Compute the path metrics of the successor nodes of the current node and insert these successors into the PQ.
6. If the head node of the PQ is not an end node of the trellis, return to step 3.
7. Trace back the ML path and output the corresponding path decision bits.
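The steps above can be sketched with a priority queue as follows; the trellis interface and all names are our own illustration, and branch metrics are assumed to be pre-normalized to non-positive values as in step 1.

```python
import heapq
from itertools import count

# Sketch of the lazy Viterbi search described above.  `next_nodes(t, s)`
# yields (t+1, s', branch_metric) triples with non-positive branch metrics.

def lazy_viterbi(next_nodes, start, t_end):
    tick = count()                      # tie-breaker so the heap never compares parents
    pq = [(0.0, next(tick), start, None)]
    closed = {}                         # (t, s) -> parent; first (best) pop wins
    while pq:
        neg_m, _, node, parent = heapq.heappop(pq)
        if node in closed:              # step 4: same (t, s) already ejected
            continue                    # (here the patent computes the metric difference)
        closed[node] = parent
        t, s = node
        if t == t_end:                  # steps 6-7: an end node reached the head of the PQ
            path = []
            while node is not None:
                path.append(node)
                node = closed[node]
            return path[::-1], -neg_m   # ML path and its metric
        for t2, s2, bm in next_nodes(t, s):   # step 5: expand successors
            # heapq is a min-heap, so store the negated (non-positive) metric
            heapq.heappush(pq, (neg_m - bm, next(tick), (t2, s2), node))
    return None
```

On a toy two-step trellis the search pops the higher-metric end node first, so the returned path is the ML path with metric equal to the best branch-metric sum.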
The prior art also discloses the soft-output Viterbi algorithm (SOVA), one of the most important variants of VA. It achieves near-optimal performance at relatively low complexity for demodulation, decoding, equalization, and so on, and is therefore widely used in modern communication and storage systems. SOVA consists of two stages. The first stage searches for the maximum likelihood (ML) path by the Viterbi algorithm; the second stage computes the log-likelihood ratio (LLR) of each information bit by tracing back along the competitor paths of the ML path. Both the complexity of the algorithm and the quality of the LLRs are largely determined by the second stage. Let the traceback length be δ, let m(s, t) be the metric of the path ending at state s at time t, let cm(s, t) be the metric of the corresponding competitor path, and let Δ(s, t) = m(s, t) − cm(s, t) be the path metric difference. The second stage of SOVA then proceeds as follows:
1. Initialize all LLRs to ∞ and set the time variable t = L.
2. Start a traceback from the node of the ML path at time t. Within the traceback length δ, compare the path decision bits of the competitor path with those of the ML path. If the decision bits are identical, keep the LLR at the corresponding time unchanged; otherwise, update that LLR to the minimum of its current value and Δ(s, t).
3. Set t = t − 1. If t > 0, return to step 2.
4. Update the sign of each LLR according to the sign of the corresponding ML path decision bit.
Step 2 is defined as one traceback operation; its complexity is O(δ). Hence the complexity of the second stage of SOVA is O(S·L·δ). The computational complexity of SOVA is about (3+δ)/3 times that of VA, where δ is the traceback length. Although some researchers have reduced the time complexity of SISO algorithms, few have paid attention to this computational-complexity deficiency of SOVA.
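A minimal sketch of the second-stage update described above, under an assumed data layout (the text does not prescribe one): for every time t, the decision bits of the competitor path over the traceback window are compared with those of the ML path, and the LLR magnitudes are refined with Δ(s, t).

```python
# Sketch of the SOVA second stage: steps 1-4 above.  The list layout of the
# competitor decisions is an illustrative assumption.

def sova_llrs(ml_bits, competitors, deltas, delta_len):
    """ml_bits[t]: ML decision bit at time t.
    competitors[t][j]: competitor decision bit for position t - j.
    deltas[t]: path metric difference Delta at time t."""
    L = len(ml_bits)
    llr = [float("inf")] * L                      # step 1: all LLRs start at infinity
    for t in range(L - 1, -1, -1):                # steps 2-3: traceback from each node
        for j in range(min(delta_len, t + 1)):
            pos = t - j
            if competitors[t][j] != ml_bits[pos]:  # decisions differ
                llr[pos] = min(llr[pos], deltas[t])
    # step 4: attach the sign of the ML decision (0 -> -1, 1 -> +1)
    return [(1 if b else -1) * v for b, v in zip(ml_bits, llr)]
```

Each outer iteration is one traceback operation of cost O(δ), matching the complexity statement above.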
The performance of SOVA is lower than that of MAP-type algorithms based on the BCJR algorithm, and over the past twenty years researchers have worked to improve it. In 1998, Marc et al. modified SOVA and proved the equivalence of the modified soft-output Viterbi algorithm (M-SOVA) and the log-domain max-log approximate maximum a posteriori algorithm (Max-Log-MAP). In 2000, Chen et al. proposed a bi-directional SOVA whose performance slightly exceeds the Max-Log-MAP algorithm, but at twice the complexity of SOVA. Huang et al. used two scaling factors in a scaled soft-output Viterbi algorithm (S-SOVA) to further improve the performance of SOVA.
It can be seen that SOVA has limited performance and contains redundant computation.
Summary of the invention
The technical problem to be solved by the present invention is how to provide a decoding method and device that reduce the computational complexity of SOVA while achieving performance no worse than the best existing SOVA variants.
To this end, the present invention proposes a down-sampled decoding method, comprising step S1: inputting a received sequence and/or a priori information, generating a trellis, searching the trellis for the maximum likelihood path, obtaining the decision values of the maximum likelihood path, marking the competitor paths of the maximum likelihood path, and computing the metric differences between the maximum likelihood path and the competitor paths; and step S2: from all competitor paths of the maximum likelihood path, selecting a subset of competitor paths, computing the log-likelihood ratios, and obtaining the decoding result.
Preferably, the log-likelihood ratio is estimated by a function of the intrinsic information and/or the extrinsic information at any time.
Preferably, the function of the intrinsic information and/or extrinsic information at any time is obtained from the received sequence, the a priori information, and/or the log-likelihood ratios at other times.
Preferably, the function adjusts the values of the received sequence, the extrinsic information, and/or the intrinsic information by means of scaling factors.
Preferably, the selected competitor paths are determined by the size of the metric difference between the maximum likelihood path and the competitor paths.
Preferably, the path metric difference is obtained as follows:
S11: for any node in the trellis, if another node with the same state value and time index has already been ejected from the priority queue, compute the corresponding path metric difference using the metrics of that node and the other node.
In another aspect, the present invention also provides a down-sampled decoding device, comprising: a search computation unit for inputting the received sequence and/or a priori information, generating the trellis, searching the trellis for the maximum likelihood path, obtaining the decision values of the maximum likelihood path, marking the competitor paths of the maximum likelihood path, and computing the metric differences between the maximum likelihood path and the competitor paths; and a down-sampling unit for selecting, from all competitor paths of the maximum likelihood path, a subset of competitor paths, computing the log-likelihood ratios, and obtaining the decoding result.
Preferably, the log-likelihood ratio is estimated by a function of the intrinsic information and/or the extrinsic information at any time.
Preferably, the function of the intrinsic information and/or extrinsic information at any time is obtained from the received sequence, the a priori information, and/or the log-likelihood ratios at other times.
Preferably, the function adjusts the values of the received sequence, the extrinsic information, and/or the intrinsic information by means of scaling factors.
By using the decoding method and device provided by the present invention, the decoding complexity is reduced, in particular the complexity of the traceback process, while the performance is no worse than that of the best existing SOVA variants; high-quality decoding results are provided, improving the efficiency of decoders for convolutional codes, Turbo codes, and the like.
Description of the drawings
The features and advantages of the present invention can be understood more clearly with reference to the accompanying drawings, which are schematic and should not be construed as limiting the present invention in any way. In the drawings:
Fig. 1 shows a process schematic of the down-sampled decoding method of the present invention;
Fig. 2 shows a comparison of the extrinsic information of the present invention and other decoding methods;
Fig. 3 shows a comparison of the bit error rate when the present invention and other decoding methods are applied to Code I;
Fig. 4 shows a comparison of the bit error rate when the present invention and other decoding methods are applied to Code II;
Fig. 5 shows the average number of traceback operations when the present invention and the SOVA decoding method are applied to Code I;
Fig. 6 shows the average number of traceback operations when the present invention and the SOVA decoding method are applied to Code II;
Fig. 7 shows a complexity comparison when the present invention and the SOVA decoding method are applied to Code I;
Fig. 8 shows a complexity comparison when the present invention and the SOVA decoding method are applied to Code II.
Specific embodiments
The specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following examples serve to illustrate the present invention but do not limit its scope.
As shown in Fig. 1, the present invention provides a down-sampled decoding method, comprising step S1: inputting a received sequence and/or a priori information, generating a trellis, searching the trellis for the maximum likelihood path, obtaining the decision values of the maximum likelihood path, marking the competitor paths of the maximum likelihood path, and computing the metric differences between the maximum likelihood path and the competitor paths; and step S2: from all competitor paths of the maximum likelihood path, selecting a subset of competitor paths, computing the log-likelihood ratios, and obtaining the decoding result. Here, the decoding result is the computed log-likelihood ratio.
The decoding method provided by the present invention is described in detail below.
The present invention is illustrated taking the component code of the standard Turbo code, a recursive systematic convolutional (RSC) code, as an example.
Step S1: input the received sequence and/or a priori information, generate the trellis, search the trellis for the maximum likelihood path, obtain the decision values of the maximum likelihood path, mark the competitor paths of the maximum likelihood path, and compute the metric differences between the maximum likelihood path and the competitor paths. The maximum likelihood path can be searched by the Lazy VA algorithm, which was described in the background section and is not repeated here. The path metric difference is obtained as follows. S11: for any node in the trellis, if another node with the same state value and time index has already been ejected from the priority queue, compute the corresponding path metric difference using the metrics of that node and the other node. The Viterbi algorithm (VA) can mark all competitor paths, because once the maximum likelihood path is obtained, all of its competitor paths follow automatically from the trellis; the modified lazy Viterbi algorithm can also mark a subset of the competitor paths.
If SOVA is regarded as a signal processing system, its computational complexity can be reduced by down-sampling: the competitor paths of the maximum likelihood path and the corresponding path metric differences constitute L sample points. Each sample point may involve one traceback operation, and the more sample points there are, the more traceback operations are needed; in other words, the computational complexity of SOVA is strongly affected by the number of sample points. The present invention therefore down-samples these sample points so that only 1/M of them are used for traceback, reducing the number of traceback operations by a factor of M relative to SOVA. The down-sampling step is:
Step S2: from all competitor paths of the maximum likelihood path, select a subset of competitor paths, compute the log-likelihood ratios, and obtain the decoding result. If the number of marked competitor paths of the maximum likelihood path is greater than L/M, down-sample these competitor paths: select the L/M competitor paths with the smallest path metric differences and perform traceback operations on them to obtain the LLRs, where L is the number of sample points and M is the down-sampling factor. The traceback operation belongs to the prior art and was introduced above in the second-stage steps of the SOVA algorithm; step 2 of the second stage of SOVA is defined as a traceback operation, whose effect is to update the LLR values so that they become increasingly reliable. In fact, S2 could directly output the LLR sequence of the information sequence, but because of the down-sampling some elements of the LLR sequence may remain infinite, so they are not output directly; instead, step S3 compensates for them.
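The selection of the L/M competitor paths with the smallest metric differences can be sketched as follows; the sample-point representation is our own illustration.

```python
import heapq

# Sketch of the down-sampling of step S2: of the L marked competitor paths,
# keep only the L/M with the smallest path metric differences for traceback.

def downsample_competitors(samples, m_factor):
    """samples: list of (delta, time, state) for marked competitor paths."""
    keep = max(1, len(samples) // m_factor)
    # The smallest metric differences mark the least reliable decisions,
    # which contribute the most to the LLR updates.
    return heapq.nsmallest(keep, samples, key=lambda s: s[0])
```

With M = 2 and four marked competitors, the two with the smallest Δ survive the down-sampling.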
Because of the missing sample points, the LLRs of some information bits cannot be computed, which may cause non-negligible performance loss. The present invention therefore interpolates the LLRs using an approximation of the intrinsic information and an approximation of the extrinsic information; the former considers the influence of the intrinsic information and the latter that of the extrinsic information. The log-likelihood ratio is estimated by a function of the intrinsic information and/or extrinsic information at any time. This function is obtained from the received sequence, the a priori information, and/or the log-likelihood ratios at other times, and adjusts the values of the received sequence, extrinsic information, and/or intrinsic information by means of scaling factors. The selected competitor paths are determined by the size of the metric difference between the maximum likelihood path and the competitor paths.
Specifically, in step S3: check the LLR obtained at each time; if the LLR is infinite, set it to the sum of the intrinsic-information approximation and the extrinsic-information approximation.
Specifically, the approximation of the intrinsic information is derived as follows.
Assume the hard decision value at time l is u_l = +1.
According to the definition of the LLR,

LLR(l) = ln p(u_l = +1 | r) − ln p(u_l = −1 | r)    (1)

Using the max-log approximation

ln(x + y) ≈ max(ln(x), ln(y))    (2)

we can obtain

LLR(l) ≈ ln p(r | u_l = +1, mls(l)) − ln p(r | u_l = −1, mls(l)) + La(l)    (3)

According to the Markov property, the term on the right-hand side of (3) is divided into three parts:

ln p(r | u_l = +1, mls(l))
  = ln p(r_{t<l} | u_l = +1, mls(l))
  + ln p(r_{t>l} | u_l = +1, mls(l))
  + ln p(r_{t=l} | u_l = +1, mls(l))    (4)

Since r_{t<l} is conditionally independent of u_l given mls(l), the first part on the right-hand side of (4) can be written as

ln p(r_{t<l} | u_l = +1, mls(l)) = ln p(r_{t<l} | u_l = −1, mls(l)) = ln p(r_{t<l} | mls(l))    (5)

The second part on the right-hand side of (4) is

ln p(r_{t>l} | u_l = +1, mls(l)) = ln p(r_{t>l} | s_{l+1}) = ln β_{l+1}(s_{l+1})    (6)

In fact, (6) is the backward metric of the Log-MAP algorithm. The third part on the right-hand side of (4) is

ln p(r_l | u_l = +1, mls(l)) = ln p(r_l | c_l) = 0.5·Lc·r_l·c_l    (7)

Therefore the LLR at time l is

LLR(l) ≈ 0.5·Lc·(r_l·c_l(u_l = +1) − r_l·c_l(u_l = −1)) + La(l) + d(β)    (8)

where d(β) = ln β_{l+1}(mls(l+1)) − ln β_{l+1}(s'_{l+1}) represents the backward metric difference. Here only the influence of the channel and of the a priori information is considered; the approximation of the intrinsic information ignores d(β) and leaves it to the approximation of the extrinsic information. The approximation of the intrinsic information is therefore

|LLR(l)| ≈ |0.5·Lc·(r_l·c_l(u_l = +1) − r_l·c_l(u_l = −1)) + La(l)|    (9)

where LLR(l) is the LLR at time l, Lc is the channel reliability factor, r_l is the received symbol at time l, u_l is the information bit at time l, c_l is the corresponding codeword, La(l) is the bit a priori information, s_l is the trellis state at time l, and mls(l) is the state on the maximum likelihood path at time l.
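The intrinsic-information approximation (9) can be evaluated numerically as in the following sketch; the rate-1/2 candidate codewords and all names are illustrative assumptions.

```python
# Numeric sketch of the intrinsic-information approximation (9):
# |LLR(l)| ~ |0.5 * Lc * (r_l . c_l(u=+1) - r_l . c_l(u=-1)) + La(l)|.

def intrinsic_llr(r, c_plus, c_minus, lc, la):
    """r: received symbols at time l; c_plus / c_minus: candidate codewords
    for u_l = +1 and u_l = -1 (illustrative); lc: channel reliability;
    la: a priori information for the bit."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return abs(0.5 * lc * (dot(r, c_plus) - dot(r, c_minus)) + la)
```

For example, with antipodal candidate codewords the two dot products cancel symmetrically and the prior La(l) shifts the result.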
Specifically, the approximation function of the extrinsic information comes from the LLRs of adjacent bits: for an RSC code, the decision bits of the competitor path and of the survivor path at the same state node are different.
For Lazy VA, a node ejected earlier from the PQ has a larger path metric than a node ejected later.
Assume that the competitor path of the ML path at time l has been marked, but the competitor path at time l+1 has not. Using the properties of the RSC code, the LLR values at times l and l+1 are

|LLR(l)| = Δ(mls(l+1), l+1)  or  |LLR(l)| = Δ(mls(l+i), l+i)    (10)

and

|LLR(l+1)| = Δ(mls(l+2), l+2)  or  |LLR(l+1)| = Δ(mls(l+k), l+k)    (11)

where 2 ≤ i ≤ δ, 2 ≤ k ≤ δ, δ is the traceback length, mdiff denotes the path metric difference Δ, and mls(l) is the state on the maximum likelihood path at time l.
Here LLR(l) is used to interpolate LLR(l+1). For the first situation in (10), |LLR(l)| = Δ(mls(l+1), l+1), which implies that the competitor path of the ML path at time l+i has been marked, and we have Δ(mls(l+1), l+1) ≤ Δ(mls(l+i), l+i). Considering LLR(l+1) further, from (11) we obtain |LLR(l+1)| ≤ Δ(mls(l+2), l+2). Meanwhile, Δ(mls(l+2), l+2) can be rewritten as

Δ(mls(l+2), l+2) = m(mls(l+2), l+2) − cm(mls(l+2), l+2)    (12)

where m is the path metric and cm is the path metric of the corresponding competitor path. cm(mls(l+2), l+2) is necessarily ejected from the PQ later than cm(mls(l+i), l+i), because the competitor path at time l+1 is not marked. Using the property of Lazy VA, we obtain

cm(mls(l+2), l+2) ≤ cm(mls(l+i), l+i)    (13)

In addition, the branch metric of a codeword c at any time t is always non-positive, i.e. bms(c, t) ≤ 0. Therefore Δ(mls(l+2), l+2) is not less than Δ(mls(l+i), l+i), so Δ(mls(l+2), l+2) ≥ |LLR(l)|.
For the other situation of (10), it can be proved by a similar method that

mdiff(mls(l+2), l+2) ≥ |LLR(l)|    (14)

Therefore LLR(l) can be used to interpolate LLR(l+1). Similarly, (14) can be extended to other cases:

mdiff(mls(l+1), l+1) ≥ |LLR(l−i)|    (15)

where the competitor path of the ML path at time l−i has been marked, 1 ≤ i ≤ δ, but the competitor path at time l is not marked.
Therefore, considering only the extrinsic information, |LLR(l−i)| can be used to approximate |LLR(l)|. The approximation of the extrinsic information is thus

|LLR(l)| ≈ |LLR(l−i)|    (16)

where the competitor path of the ML path at time l−i has been marked, 1 ≤ i ≤ δ, but the competitor path at time l is not marked. In the present invention, the smallest such i is selected for the interpolation of an LLR that is ∞, to reduce the computational complexity.
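The extrinsic interpolation (16) with the smallest i can be sketched as follows; the list representation of the LLR magnitude sequence is our own illustration.

```python
# Sketch of the extrinsic interpolation (16): an LLR magnitude left at
# infinity by the down-sampling is replaced by the nearest earlier finite
# LLR magnitude (smallest i), as described in the text.

def interpolate_llrs(llr_mags):
    out = list(llr_mags)
    for l, v in enumerate(out):
        if v == float("inf"):
            for i in range(1, l + 1):           # smallest i first
                if out[l - i] != float("inf"):
                    out[l] = abs(out[l - i])    # |LLR(l)| ~ |LLR(l-i)|
                    break
    return out
```

A run of infinite entries is filled forward from the last finite magnitude, which keeps the computational cost of the interpolation minimal.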
The approximation function (9) of the intrinsic information and the approximation function (16) of the extrinsic information can therefore be combined, estimating the value of the LLR from the intrinsic and extrinsic information respectively, to obtain the decoding result.
S4: update the sign of each LLR according to the sign of the corresponding decision value of the maximum likelihood path.
Specifically, when the VA (or Lazy VA) algorithm terminates, the estimate û of the information sequence is obtained; each element of the information sequence is 0 or 1. In a soft-decision algorithm the following BPSK mapping is applied: 0 → −1, 1 → +1; the new sequence obtained after the mapping is the sequence of ML path decision values. The sign of each decision value on the ML path is the sign of the LLR of the corresponding bit. For example, assume û = (1,0,1,1,0) and LLR = (3.6, 4.5, 3.8, 7.8, 1.7). The sequence obtained from û after the BPSK mapping is (+1, −1, +1, +1, −1), so after the update of step S4 the LLRs become LLR = (3.6, −4.5, 3.8, 7.8, −1.7).
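Step S4, as in the worked example above, can be sketched as:

```python
# Sketch of step S4: BPSK-map the hard decisions (0 -> -1, 1 -> +1) and
# impose their signs on the LLR magnitudes.

def apply_signs(hard_bits, llr_mags):
    bpsk = [1 if b else -1 for b in hard_bits]   # BPSK mapping of u-hat
    return [s * m for s, m in zip(bpsk, llr_mags)]
```

Running it on the example from the text reproduces the updated LLR sequence.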
S5: output the value of the extrinsic information. To further improve decoding performance, two scaling factors θ1 and θ2 may be used to adjust the value of the extrinsic information.
The value of the output extrinsic information may be calculated as

Le(l) = θ2·(θ1·LLR(l) − Li(l))    (17)

where Le(l) is the extrinsic information value at time l, θ1 and θ2 are scaling factors, LLR(l) is the LLR at time l, which can be calculated from (9) and (16), and Li(l) is the intrinsic information value at time l. In iterative decoding, for example Turbo decoding or the concatenated decoding of RS and convolutional codes, the extrinsic information serves as the a priori information for the other component decoder; iterating with the extrinsic information continually improves the performance (i.e., accuracy) of the decoder.
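The extrinsic output (17) is a one-line computation; the default factor values below are illustrative assumptions.

```python
# Sketch of the extrinsic-output computation (17):
# Le(l) = theta2 * (theta1 * LLR(l) - Li(l)), with theta1, theta2 as the
# two tunable scaling factors described in the text.

def extrinsic(llr, li, theta1=1.0, theta2=1.0):
    return theta2 * (theta1 * llr - li)
```

In an iterative loop this value would be fed to the other component decoder as its a priori information.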
In another aspect, the present invention also provides a down-sampled decoding device, comprising: a search computation unit for inputting the received sequence and/or a priori information, generating the trellis, searching the trellis for the maximum likelihood path, obtaining the decision values of the maximum likelihood path, marking the competitor paths of the maximum likelihood path, and computing the metric differences between the maximum likelihood path and the competitor paths; and a down-sampling unit for selecting, from all competitor paths of the maximum likelihood path, a subset of competitor paths, computing the log-likelihood ratios, and obtaining the decoding result.
Here, the log-likelihood ratio is estimated by a function of the intrinsic information and/or extrinsic information at any time. The function of the intrinsic information and/or extrinsic information at any time is obtained from the received sequence, the a priori information, and/or the log-likelihood ratios at other times. The function adjusts the values of the received sequence, the extrinsic information, and/or the intrinsic information by means of scaling factors.
The higher performance and lower complexity of the present invention are demonstrated by applying it to Turbo decoding with two different Turbo codes. Code I is the standard code of 3GPP2 (CDMA2000): its register length is 3, its information length is 1146, its code rate is 1/3, and the generator polynomials of the RSC code are (13, 15). Code II is a Turbo code with register length 4, information length 378, code rate 1/3, and RSC generator polynomials (21, 37). The simulation system uses BPSK modulation and an AWGN channel. For all decoding schemes the maximum number of iterations is set to 10, and the traceback length of all SOVA variants is set to 8 times the register length of the RSC code. The simulation results show that the present invention produces high-quality LLRs, and even at a relatively high down-sampling rate its performance remains close to S-SOVA. More importantly, in the traceback operations the complexity of the present invention is only 1/M that of M-SOVA and S-SOVA.
A. LLR quality analysis
We analyze the quality of the LLRs taking Code I as an example; Code II gives similar results. As shown in Fig. 2, the extrinsic information of M-SOVA, S-SOVA, the present invention, and the log-domain maximum a posteriori algorithm (Log-MAP) is compared; each curve is the extrinsic information of the Turbo code at SNR = 2.0 dB after the first decoding iteration. The results show that the extrinsic information of M-SOVA has larger absolute values than that of Log-MAP, while the extrinsic information of S-SOVA is very close to that of Log-MAP. For M = 2 and M = 8, the extrinsic information of the present invention deviates only slightly from Log-MAP. If the down-sampling rate is raised to 16, the deviation from Log-MAP becomes larger in the region of small LLR absolute values. When the down-sampling rate becomes very large, e.g. M = 64, the difference in extrinsic information becomes even more pronounced, which may cause performance loss.
B. Performance analysis
As shown in Fig. 3 and Fig. 4, the bit error rates (BER) of Log-MAP, M-SOVA, S-SOVA, and DS-SOVA are compared. When the down-sampling rate is moderate, e.g. M = 2, 8, or 16, the performance gap of the present invention relative to S-SOVA is within 0.25 dB. At M = 64, however, the gap reaches 1 dB. The performance simulation results are fully consistent with the LLR quality analysis.
C. Complexity analysis
As shown in Fig. 5 and Fig. 6, the average numbers of traceback operations of SOVA and of the present invention are compared. For both code I and code II, the number of traceback operations of the present invention is only 1/M of that of SOVA. In addition, as shown in Fig. 7 and Fig. 8, the overall complexity of SOVA and of the present invention is compared. Here we assume that one addition has the same computational complexity as one comparison, so the overall complexity is proportional to the number of addition operations. For code I, at SNR = 7.0 dB, the overall complexity of the present invention is 1/5 of that of SOVA. For code II, the overall complexity of the present invention falls faster as the signal-to-noise ratio increases; at SNR = 7.0 dB its computational complexity is only 1/18 of that of SOVA. The present invention can therefore compute LLRs more efficiently.
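The 1/M traceback saving can be made concrete with a back-of-the-envelope count: conventional SOVA starts a traceback of fixed length at every trellis step, while the down-sampled scheme starts one only at every M-th step. This is an illustrative operation count under that assumption, not a reproduction of the figures.

```python
def traceback_ops(seq_len, traceback_len, M=1):
    """Approximate number of traceback operations: one traceback of length
    `traceback_len` is started at every M-th of the `seq_len` trellis steps."""
    return (seq_len // M) * traceback_len

# Code I: information length 1146, traceback length 8 x register length 3 = 24.
full = traceback_ops(1146, 24)        # conventional SOVA
half = traceback_ops(1146, 24, M=2)   # down-sampled, M = 2
```

With M = 2 the count halves exactly; for general M the ratio is 1/M up to rounding of `seq_len // M`.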
D. Performance summary
Compared with SOVA and its other variants, the present invention reduces the traceback complexity to 1/M of the original with almost no performance loss, so decoding efficiency is high. Our analysis also shows that the present invention can provide high-quality log-likelihood ratios (decoding results) by interpolation.
By using the decoding method and device provided by the present invention, the decoding complexity is reduced, in particular the complexity of the traceback process; at the same time, the present invention delivers performance no worse than that of the best existing SOVA variants and provides high-quality decoding results, improving the efficiency of decoders for convolutional codes, Turbo codes, and the like.
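The interpolation idea mentioned above can be sketched as follows: LLRs are computed exactly only at the down-sampled positions, and the remaining positions are filled in from them. The linear-interpolation formula and the function below are hypothetical illustrations; the claims only state that LLRs at other time instants are used, without fixing the interpolation rule.

```python
import numpy as np

def interpolate_llrs(sampled_idx, sampled_llrs, seq_len):
    """Fill in LLRs at unsampled trellis positions by linear interpolation
    between the positions where a traceback was actually performed
    (illustrative choice; the patent does not mandate linear interpolation)."""
    return np.interp(np.arange(seq_len), sampled_idx, sampled_llrs)

llrs = interpolate_llrs([0, 4, 8], [2.0, -1.0, 3.0], seq_len=9)
```

Only the positions listed in `sampled_idx` require a traceback, which is where the 1/M complexity saving comes from.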
Although embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the present invention, and all such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A down-sampled decoding method, comprising step S1: inputting a received sequence and/or prior information, generating a trellis diagram, searching for the maximum likelihood path on the trellis diagram, obtaining the decision values of the maximum likelihood path, marking the competing paths of the maximum likelihood path, and computing the metric differences between the maximum likelihood path and the competing paths; characterized in that the method further comprises step S2: from all the competing paths of the maximum likelihood path, selecting a subset of the competing paths to compute log-likelihood ratios and obtain the decoding result.
2. The down-sampled decoding method according to claim 1, characterized in that the log-likelihood ratio is estimated by a function of the intrinsic information and/or extrinsic information at any time instant.
3. The down-sampled decoding method according to claim 1 or 2, characterized in that the function of the intrinsic information and/or extrinsic information at any time instant is obtained from the received sequence, the prior information, and/or the log-likelihood ratios at other time instants.
4. The down-sampled decoding method according to claim 3, characterized in that the function adjusts the values of the received sequence, the extrinsic information, and/or the intrinsic information by using adjustment factors.
5. The down-sampled decoding method according to claim 1, characterized in that the selected competing paths are determined by the magnitudes of the metric differences between the maximum likelihood path and the competing paths.
6. The down-sampled decoding method according to claim 1, characterized in that the path metric differences are obtained as follows:
S11: for any node in the trellis, if other nodes having the same state value and the same time index are popped from a priority queue, the metric values of said node and said other nodes are used to compute the corresponding path metric difference.
7. A down-sampled decoding device, comprising: a search computation unit, configured to input a received sequence and/or prior information, generate a trellis diagram, search for the maximum likelihood path on the trellis diagram, obtain the decision values of the maximum likelihood path, mark the competing paths of the maximum likelihood path, and compute the metric differences between the maximum likelihood path and the competing paths; characterized in that the device further comprises: a down-sampling unit, configured to select, from all the competing paths of the maximum likelihood path, a subset of the competing paths to compute log-likelihood ratios and obtain the decoding result.
8. The down-sampled decoding device according to claim 7, characterized in that the log-likelihood ratio is estimated by a function of the intrinsic information and/or extrinsic information at any time instant.
9. The down-sampled decoding device according to claim 7 or 8, characterized in that the function of the intrinsic information and/or extrinsic information at any time instant is obtained from the received sequence, the prior information, and/or the log-likelihood ratios at other time instants.
10. The down-sampled decoding device according to claim 9, characterized in that the function adjusts the values of the received sequence, the extrinsic information, and/or the intrinsic information by using adjustment factors.
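The selection in step S2, as constrained by claim 5, can be sketched as follows: among all competing paths, keep those whose metric difference to the maximum likelihood path is smallest, since these are the paths most likely to flip a bit decision. The "keep the smallest differences" rule and the function name are hypothetical illustrations; the claims only require that selection depend on the magnitude of the metric differences.

```python
def select_competing_paths(metric_diffs, keep):
    """Select `keep` competing paths with the smallest metric difference to
    the maximum likelihood path (illustrative rule consistent with claim 5).
    Returns the selected path indices in ascending order."""
    order = sorted(range(len(metric_diffs)), key=lambda i: metric_diffs[i])
    return sorted(order[:keep])

selected = select_competing_paths([3.2, 0.4, 5.1, 0.9, 2.0], keep=2)
```

Only the selected paths then enter the log-likelihood-ratio computation, so the traceback work scales with `keep` rather than with the total number of competing paths.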
CN201510523515.4A 2015-08-24 2015-08-24 Down-sampled interpretation method and device Active CN106487392B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510523515.4A CN106487392B (en) 2015-08-24 2015-08-24 Down-sampled interpretation method and device
PCT/CN2016/095699 WO2017032255A1 (en) 2015-08-24 2016-08-17 System and method for data decoding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510523515.4A CN106487392B (en) 2015-08-24 2015-08-24 Down-sampled interpretation method and device

Publications (2)

Publication Number Publication Date
CN106487392A true CN106487392A (en) 2017-03-08
CN106487392B CN106487392B (en) 2019-11-08

Family

ID=58099596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510523515.4A Active CN106487392B (en) 2015-08-24 2015-08-24 Down-sampled interpretation method and device

Country Status (2)

Country Link
CN (1) CN106487392B (en)
WO (1) WO2017032255A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108923887A (en) * 2018-06-26 2018-11-30 中国人民解放军国防科技大学 Soft decision decoder structure of multi-system partial response CPM signal
CN113497672A (en) * 2020-04-01 2021-10-12 智原科技股份有限公司 Interleaving code modulation decoder and decoding method applied in receiver

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018214070A1 (en) 2017-05-24 2018-11-29 华为技术有限公司 Decoding method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101395669A (en) * 2007-02-21 2009-03-25 松下电器产业株式会社 Maximum likelihood decoder and information reproducing device
CN102340317A (en) * 2010-07-21 2012-02-01 中国科学院微电子研究所 High-throughput rate decoder structure of structuring LDPC code and decoding method thereof
CN103548084A (en) * 2011-06-17 2014-01-29 日立民用电子株式会社 Optical information reproduction device and method for reproducing optical information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4660612B2 (en) * 2009-07-09 2011-03-30 株式会社東芝 Information reproducing apparatus and information reproducing method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108923887A (en) * 2018-06-26 2018-11-30 中国人民解放军国防科技大学 Soft decision decoder structure of multi-system partial response CPM signal
CN113497672A (en) * 2020-04-01 2021-10-12 智原科技股份有限公司 Interleaving code modulation decoder and decoding method applied in receiver
CN113497672B (en) * 2020-04-01 2023-11-07 智原科技股份有限公司 Interleaved code modulation decoder and decoding method applied to receiver

Also Published As

Publication number Publication date
CN106487392B (en) 2019-11-08
WO2017032255A1 (en) 2017-03-02

Similar Documents

Publication Publication Date Title
US7209527B2 (en) Turbo decoder employing max and max* map decoding
Franz et al. Concatenated decoding with a reduced-search BCJR algorithm
US6581182B1 (en) Iterative decoding with post-processing of detected encoded data
US6901119B2 (en) Method and apparatus for implementing soft-input/soft-output iterative detectors/decoders
KR100512668B1 (en) Iteration terminating using quality index criteria of turbo codes
US8122327B2 (en) Symbol-level soft output viterbi algorithm (SOVA) and a simplification on SOVA
EP1135877A1 (en) Component decoder and method thereof in mobile communication system
CN106487392A (en) Down-sampled interpretation method and device
EP2353242A1 (en) Systematic and parity bit soft estimation method and apparatus
Wei et al. Comments on "A New Parity-Check Stopping Criterion for Turbo Decoding"
US6633615B1 (en) Trellis transition-probability calculation with threshold normalization
US9021342B2 (en) Methods to improve ACS performance
Muller et al. Spc05-3: On the parallelism of convolutional turbo decoding and interleaving interference
Atluri et al. Low Power VLSI Implementation of the MAP decoder for Turbo Codes through forward recursive calculation of Reverse State Metrics
Sivasankaran et al. Twin-stack decoding of recursive systematic convolutional codes
Papaharalabos et al. A new method of improving SOVA turbo decoding for AWGN, rayleigh and rician fading channels
Bai et al. Novel algorithm for continuous decoding of turbo codes
CN103973319B (en) All-integer turbo code iterative-decoding method and system
Sun et al. Quasi-reduced-state soft-output Viterbi detector for magnetic recording read channel
Han et al. Implementation of an efficient two-step SOVA turbo decoder for wireless communication systems
Ghrayeb et al. Performance of high rate turbo codes employing the soft-output Viterbi algorithm (SOVA)
Liu et al. Study on the GA-Based Decoding Algorithm for Convolutional Turbo Codes
YASMINE et al. RTL Design and Implementation of TCM Decoders using Viterbi Decoder
Jia et al. Error restricted fast MAP decoding of VLC
Nordman Application of the Berrou SOVA algorithm in decoding of a Turbo Code

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant