WO2018171110A1 - Maximum likelihood decoding algorithm for tail-biting convolutional code - Google Patents

Maximum likelihood decoding algorithm for tail-biting convolutional code

Info

Publication number
WO2018171110A1
Authority
WO
WIPO (PCT)
Prior art keywords
path
algorithm
backward
maximum likelihood
convolutional code
Prior art date
Application number
PCT/CN2017/097667
Other languages
French (fr)
Chinese (zh)
Inventor
韩永祥
吴庭伊
陈伯宁
瓦悉尼星巴
Original Assignee
东莞理工学院
Priority date
Filing date
Publication date
Application filed by 东莞理工学院 filed Critical 东莞理工学院
Publication of WO2018171110A1 publication Critical patent/WO2018171110A1/en


Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/41Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors
    • H03M13/413Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors tail biting Viterbi decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1105Decoding
    • H03M13/1111Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms
    • H03M13/1125Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms using different domains for check node and bit node processing, wherein the different domains include probabilities, likelihood ratios, likelihood differences, log-likelihood ratios or log-likelihood difference pairs
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/23Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using convolutional codes, e.g. unit memory codes

Definitions

  • the present invention relates to the field of data processing, and in particular to a novel maximum likelihood decoding algorithm, MLWAVA, for tail-biting convolutional codes.
  • since the invention of convolutional codes, they have been widely used to provide effective error protection in digital communications.
  • in a practical convolutional encoder, a certain number of zeros is typically appended to the end of the information-bit sequence to clear the contents of the shift register, so that encoding of the next information sequence can begin directly without re-initialization.
  • these zero tail bits enhance the error-protection capability of the convolutional code. For sufficiently long information sequences, the rate loss due to the zero tail bits is almost negligible; when the information sequence is short, however, these zero tail bits introduce a significant loss of code rate.
  • the decoding of a tail-biting convolutional code is performed on a trellis, and a codeword now corresponds to a path through the trellis that starts and ends in the same (not necessarily all-zero) state.
  • a path having the same initial and final state on the tail-biting trellis is referred to as a tail-biting path. Since there is a one-to-one correspondence between "codewords" and "tail-biting paths", the two terms are used interchangeably herein.
  • the trellis can be decomposed into N s sub-trellises, each with identical initial and final state. Following the previous naming convention, these sub-trellises are called tail-biting sub-trellises or, when no ambiguity arises, simply sub-trellises.
  • a tail-biting trellis is denoted by T and its k-th sub-trellis by T k . Clearly, the decoding complexity of a tail-biting convolutional code is many times that of a zero-tail convolutional code of similar size, because all tail-biting paths in every sub-trellis must be examined.
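The tail-biting structure above can be made concrete with a small sketch (illustrative only; the toy (2, 1, 2) code with generators 7 and 5 in octal is an assumption, not the code of the patent): tail-biting encoding preloads the shift register with the last m message bits, so every codeword is a trellis path that starts and ends in the same state.

```python
# Hypothetical toy example: tail-biting encoding of a (2,1,2) convolutional
# code with generators g0 = 1+D+D^2 (7 octal) and g1 = 1+D^2 (5 octal).
# A tail-biting codeword is obtained by initializing the shift register
# with the LAST m message bits, so the encoder starts and ends in the
# same state (i.e., the codeword is a tail-biting path).

G = [0b111, 0b101]  # generator polynomials (7, 5 octal)
M = 2               # memory order m

def tb_encode(bits):
    """Tail-biting encoding: preload the register with the last m bits."""
    state = 0
    for b in bits[-M:]:                 # circular initialization
        state = ((state << 1) | b) & ((1 << M) - 1)
    start = state
    out = []
    for b in bits:
        reg = (state << 1) | b          # memory bits plus current input
        for g in G:
            out.append(bin(reg & g).count("1") % 2)
        state = reg & ((1 << M) - 1)
    assert state == start               # path starts and ends in same state
    return out

L = 4                                   # information bits
codewords = {tuple(tb_encode([(u >> i) & 1 for i in range(L - 1, -1, -1)]))
             for u in range(2 ** L)}
print(len(codewords))                   # -> 16, i.e. 2^L distinct codewords
```

For this toy code the encoder is injective, so all 2^L tail-biting paths are distinct codewords of the equivalent [nL, L] block code.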
  • WAVA: wrap-around Viterbi algorithm
  • ML: maximum likelihood
  • BEAST: bidirectional efficient algorithm for searching code trees
  • CMLDA: creative maximum likelihood decoding algorithm
  • the present invention provides a novel maximum likelihood decoding algorithm for tail-biting convolutional codes, which solves the problem of high maximum decoding complexity in the prior art.
  • a novel maximum likelihood decoding algorithm for tail-biting convolutional codes, comprising the following steps: (A) performing the Viterbi algorithm (VA) on the backward wrap-around trellis and retaining the information produced by the backward VA rounds; (B) applying a priority-first search algorithm in the forward direction on all sub-trellises.
  • in step (A), the backward wrap-around Viterbi algorithm (WAVA) is applied to the tail-biting trellis T, checking not only the tail-biting paths but also all paths in an auxiliary supercode.
  • the auxiliary supercode consists of all paths on the trellis of an (n, 1, m) tail-biting convolutional code with L information bits, where for simplicity only encoders mapping 1 information bit to n code bits are considered, and m is the memory order.
  • the metric of a path in the trellis T is set as follows: let l be a fixed integer satisfying 0 ≤ l ≤ L; for a path with binary label x^(ln−1) = (x_0, x_1, …, x_{ln−1}) ending at level l in trellis T, the associated path metric is defined as
  • M(x^(ln−1)) = Σ_{j=0}^{ln−1} M(x_j), where the bit metric is M(x_j) = (y_j ⊕ x_j)·|φ_j|, y_j is the hard decision on the j-th received symbol, and φ_j is its log-likelihood ratio.
  • the so-called cumulative metric of a path is the sum of a pre-specified initial metric and the associated path metric.
  • in step (A), the maximum likelihood (ML) decision is obtained in the following manner: at the end of the first iteration, if the best backward survivor path is a tail-biting path, then it is the ML decision; for the remaining iterations, if the stopping criterion given in Theorem 1 below is satisfied, the best tail-biting survivor path is the ML decision.
  • in step (B), two data structures are used by the priority-first search algorithm: an open stack and a closed table. The open stack stores the paths visited so far by the priority-first search algorithm, and the closed table tracks the paths that were previously at the top of the open stack.
  • the backward survivor paths of the sub-trellises obtained in step (A) are sorted in ascending order of their cumulative metrics; if the backward survivor path with the smallest cumulative metric is also a tail-biting path, the maximum likelihood (ML) decision is obtained and the algorithm is stopped.
  • an effective early-stopping criterion is employed to reduce the decoding complexity.
  • the beneficial effects of the present invention are: an effective early-stopping criterion is used to reduce decoding complexity; on the additive white Gaussian noise channel, simulations of a (2, 1, 6) tail-biting convolutional code show significant savings in both the maximum and the variance of the decoding complexity over BEAST and CMLDA.
  • Figure 1 shows the word error rate (WER) of WAVA(2) and the ML decoders (such as MLWAVA, TDMLDA, CMLDA or BEAST) for the [24, 12, 8] extended Golay code and the [96, 48, 10] block code;
  • Figure 2 shows the variance of the number of branch-metric computations per information bit for the [24, 12, 8] extended Golay code decoding algorithms listed in Table I;
  • Figure 3 shows the variance of the number of branch-metric computations per information bit for the [96, 48, 10] block code decoding algorithms listed in Table I;
  • Figure 4 shows the word error rates (WERs) of the ML decoders;
  • Figure 5 shows the average number of branch-metric computations per information bit for the [24, 12, 8] extended Golay code decoding algorithms listed in Table I;
  • Figure 6 shows the average number of branch-metric computations per information bit for the [96, 48, 10] block code decoding algorithms listed in Table I;
  • Figure 7 shows the average number of branch-metric computations per information bit for the [192, 96, 10] block code decoding algorithms listed in Table II;
  • Figure 8 shows the average number of branch-metric computations per information bit for the [96, 48, 16] block code decoding algorithms listed in Table II.
  • a novel maximum likelihood decoding algorithm for tail-biting convolutional codes includes the following steps: (A) performing the Viterbi algorithm (VA) on the backward wrap-around trellis and acquiring the information retained by the backward VA rounds; (B) applying the priority-first search algorithm in the forward direction on all sub-trellises.
  • in step (A), the backward wrap-around Viterbi algorithm (WAVA) is applied to the tail-biting trellis T, checking not only the tail-biting paths but also all paths in the auxiliary supercode.
  • the auxiliary supercode consists of all paths on the trellis of an (n, 1, m) tail-biting convolutional code with L information bits, where for simplicity only encoders mapping 1 information bit to n code bits are considered, and m is the memory order.
  • the metric of a path in the trellis T is set as follows: let l be a fixed integer satisfying 0 ≤ l ≤ L; for a path with binary label x^(ln−1) = (x_0, x_1, …, x_{ln−1}) ending at level l in trellis T, the associated path metric is defined as in Definition 1 below.
  • the so-called cumulative metric of the path is the sum of a pre-specified initial metric and the associated path metric.
  • in step (A), the maximum likelihood (ML) decision is obtained in the following manner: at the end of the first iteration, if the best backward survivor path is a tail-biting path, then it is the ML decision; for the remaining iterations, if the stopping criterion of Theorem 1 below is satisfied, the best tail-biting survivor path is the ML decision.
  • two data structures are used by the priority-first search algorithm: an open stack and a closed table. The open stack stores the paths visited so far by the priority-first search algorithm, and the closed table tracks the paths that were previously at the top of the open stack.
  • the backward survivor paths of the sub-trellises obtained in step (A) are sorted in ascending order of their cumulative metrics; if the backward survivor path with the smallest cumulative metric is also a tail-biting path, the maximum likelihood (ML) decision is obtained and the algorithm is stopped.
  • the present invention proposes a novel maximum likelihood WAVA (MLWAVA) decoding algorithm for tail-biting convolutional codes.
  • MLWAVA's supertrellis is conceptually formed by connecting trellises in a backward fashion.
  • MLWAVA first performs the VA on the backward wrap-around trellis, and then applies the priority-first search algorithm in the forward direction on all sub-trellises, based on the information retained from the backward VA rounds.
  • the priority-first search algorithm is a simplified version of Algorithm A*.
  • MLWAVA can also be regarded as a two-stage decoding algorithm, in which the backward execution of the VA and the forward execution of the priority-first search algorithm are regarded as the first and second stage, respectively.
  • herein, a new ML decoding metric and a new evaluation function are designed for the first and second stage, respectively.
  • an effective early-stopping criterion is proposed for each of the two stages to further reduce the decoding complexity.
  • SNR b denotes the signal-to-noise ratio per information bit.
  • the average decoding complexity of BEAST at high SNR is lower than that of MLWAVA;
  • nevertheless, the optimal MLWAVA is lower in both the average and the maximum decoding complexity.
  • an (n, 1, m) tail-biting convolutional code with L information bits is considered, where for simplicity only encoders mapping 1 information bit to n code bits are treated, and m is the memory order.
  • such a code may also be referred to as an [nL, L, d min ] block code. Based on this setting, the metric of a path in the tail-biting trellis T is defined as follows.
  • Definition 1: let l be a fixed integer satisfying 0 ≤ l ≤ L. For a path with binary label x^(ln−1) = (x_0, x_1, …, x_{ln−1}) ending at level l in trellis T, the path metric associated with it is defined as M(x^(ln−1)) = Σ_{j=0}^{ln−1} M(x_j), with bit metric M(x_j) = (y_j ⊕ x_j)·|φ_j|, where y_j is the hard decision on the j-th received symbol and φ_j is its log-likelihood ratio.
  • the so-called cumulative metric of this path is the sum of a pre-specified initial metric and the above path metric. Note that the initial metric is zero for the first WAVA iteration and, for every iteration except the first, is set to a value determined by the previous iteration.
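As an illustrative sketch of the path metric and the cumulative metric (the bit metric (y_j ⊕ x_j)·|φ_j| is the common soft-decision ML metric assumed here; the function names are hypothetical):

```python
def path_metric(x_bits, llrs):
    """Hypothetical ML path metric: sum of (y_j XOR x_j) * |phi_j|,
    where y_j is the hard decision on the j-th received symbol and
    phi_j its log-likelihood ratio."""
    m = 0.0
    for x, phi in zip(x_bits, llrs):
        y = 0 if phi >= 0 else 1        # hard decision on the symbol
        if y != x:
            m += abs(phi)               # penalty only on disagreement
    return m

def cumulative_metric(initial_metric, x_bits, llrs):
    """Cumulative metric = pre-specified initial metric + path metric."""
    return initial_metric + path_metric(x_bits, llrs)

llrs = [2.0, -0.5, 1.2, -3.0]
print(path_metric([0, 1, 0, 1], llrs))   # agrees with all hard decisions -> 0.0
print(path_metric([0, 0, 0, 1], llrs))   # disagrees on bit 1 -> 0.5
```

A path that agrees with every hard decision has metric zero, so the ML codeword is the tail-biting path minimizing this sum.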
  • the corresponding definitions for the backward VA are obtained in a straightforward manner; for example, the metric associated with a backward path with label x_[ln] = (x_{N−1}, …, x_{ln}) is given by M(x_[ln]) = Σ_{j=ln}^{N−1} M(x_j).
  • the cumulative metric of a backward path is the sum of the initial metric specified for its starting state at level L and the metric given above. It should be emphasized that a binary label uniquely determines a path together with its start and end states.
  • the proposed algorithm can be divided into two phases.
  • in the first stage, the backward WAVA is applied to the trellis T, checking not only the tail-biting paths but also all paths in the supercode.
  • at the end of the i-th WAVA iteration, each of the N s states at level l has a VA backward survivor path; if the backward survivor path ends at state s at level l, the associated cumulative metric is retained.
  • the level-0 cumulative metric of each state is set as the initial metric of that state at level L at the beginning of the (i+1)-th WAVA iteration; therefore, the backward survivor path with label x_[ln] = (x_{N−1}, …, x_{ln}) has a cumulative metric equal to that initial metric plus the path metric M(x_[ln]).
  • here s′ and s denote the initial state of the backward survivor path at level L and its end state at level l, respectively. All such cumulative metrics (for every s ∈ S, 0 ≤ l ≤ L and 1 ≤ i ≤ I) are retained for later use in the second stage, where I is the maximum number of WAVA iterations performed.
  • at the end of each iteration, N s backward survivor paths at level 0 (instead of level L) are generated. Again, these backward survivor paths are only guaranteed to lie in the supercode and may not be codewords of the tail-biting code.
  • the best backward survivor path at the end of the i-th WAVA iteration is the one with the smallest cumulative path metric among all backward survivor paths; its label and its associated cumulative metric are recorded.
  • if a best tail-biting survivor path exists at the end of the i-th WAVA iteration, it and its associated cumulative path metric are recorded as well.
  • among all tail-biting survivor paths, the label with the smallest cumulative metric and the corresponding cumulative path metric are retained.
  • the following theorem is a modified version of the theorem in [2]; it provides an early-stopping criterion for the WAVA after each iteration.
  • Theorem 1 ([2]): at the end of the first iteration, if the best backward survivor path is a tail-biting path, then it is the ML decision. For the remaining iterations, if the condition in (3) holds for every state in the set of all end states (at level 0) of the tail-biting survivor paths obtained by the WAVA up to the i-th iteration, then the best tail-biting survivor path is the ML decision.
  • the ML decision may thus be found before the backward WAVA reaches its maximum number of iterations; therefore, the decoding complexity can be reduced.
  • the following algorithm is proposed; it is basically the same as the WAVA, differing in that it runs in the backward direction and records the path metrics for every s ∈ S, 0 ≤ l ≤ L and 1 ≤ i ≤ I.
  • Step 1: initialize the metric of every state s ∈ S to zero.
  • Step 3: if the best backward survivor path is a tail-biting path, output it as the ML decision and stop the algorithm.
  • Step 6: initialize the metric of every state s ∈ S for the next iteration from the level-0 cumulative metrics.
  • Step 8: if the stopping criterion in (3) is met, output the best tail-biting survivor path as the ML decision and stop the algorithm.
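A minimal sketch of this backward stage (illustrative assumptions: a toy (2, 1, 2) code with generators 7 and 5 in octal, and a hard-decision Hamming branch metric in place of the soft metric): the Viterbi recursion runs from level L down to level 0, the level-0 metrics wrap around to become the level-L initial metrics of the next iteration, and every level's survivor metrics are kept for the second stage.

```python
# Sketch of the backward wrap-around Viterbi stage (first stage).
# Assumptions (illustrative, not from the patent text): (2,1,2) code,
# generators 7,5 (octal); hard-decision Hamming branch metric.

G, M = [0b111, 0b101], 2
NS = 1 << M                                   # number of states N_s

def branch(s, b):
    """Output bits and next state for input b from state s."""
    reg = (s << 1) | b
    out = [bin(reg & g).count("1") % 2 for g in G]
    return out, reg & (NS - 1)

def backward_wava(rx, iterations=2):
    """Backward VA, wrapping level-0 metrics into the next iteration's
    level-L initial metrics; keeps every level's metrics for stage two."""
    Llev = len(rx) // len(G)                  # number of trellis levels
    init = [0.0] * NS                         # zero initial metric, 1st pass
    kept = []                                 # metrics retained per iteration
    for _ in range(iterations):
        metric = [list(init)]                 # metric[0] = level-L metrics
        for l in range(Llev - 1, -1, -1):     # walk levels L-1 .. 0
            seg = rx[l * len(G):(l + 1) * len(G)]
            cur = []
            for s in range(NS):
                best = min(
                    metric[0][ns] + sum(o != r for o, r in zip(out, seg))
                    for b in (0, 1)
                    for out, ns in [branch(s, b)]
                )
                cur.append(best)
            metric.insert(0, cur)             # metric[0] now = level-l metrics
        kept.append(metric)
        init = metric[0]                      # wrap: level 0 -> next level L
    return kept

rx = [0, 0, 1, 0, 0, 0, 0, 0]                 # noisy all-zero codeword, L = 4
kept = backward_wava(rx)
print(min(kept[0][0]))                        # best level-0 metric, 1st pass
```

The retained per-level metrics (`kept`) are exactly what the second stage consumes to evaluate its heuristic function.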
  • the tail-biting path with the smallest f-function value is the tail-biting path with the smallest ML metric in (2).
  • the priority-first search algorithm requires two data structures.
  • the first is called the open stack, which stores the paths visited so far by the priority-first search algorithm.
  • the other is called the closed table, which tracks the paths that were at the top of the open stack at some earlier point in time. The names reflect that a path in the open stack may be extended further and thus remains open, while the paths in the closed table can no longer be extended and are therefore closed to future expansion.
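A minimal sketch of the two data structures (names are illustrative): the open stack as a priority queue ordered by ascending f-value, and the closed table as a set keyed by (level, end state) so that a top path revisiting an already-closed pair can be discarded at once.

```python
import heapq

class OpenStack:
    """Open stack: paths visited so far, ordered by ascending f-value."""
    def __init__(self):
        self._heap, self._seq = [], 0
    def push(self, f, path):
        heapq.heappush(self._heap, (f, self._seq, path))  # seq breaks ties
        self._seq += 1
    def pop(self):
        f, _, path = heapq.heappop(self._heap)            # smallest f first
        return f, path
    def __bool__(self):
        return bool(self._heap)

class ClosedTable:
    """Closed table: (level, end_state) pairs already expanded from the
    top of the open stack; a later top path reaching the same pair is
    worse and can be discarded."""
    def __init__(self):
        self._seen = set()
    def check_and_record(self, level, state):
        key = (level, state)
        if key in self._seen:
            return True          # already closed -> discard the path
        self._seen.add(key)
        return False

stack, closed = OpenStack(), ClosedTable()
stack.push(0.7, (1, 0))          # (level, end_state) stand-ins for paths
stack.push(0.2, (1, 3))
f, path = stack.pop()
print(f, closed.check_and_record(*path))   # -> 0.2 False (first visit)
```

With a heap, push and pop are O(log n), which is the mitigation referred to later via the priority-queue data structure [15].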
  • the priority-first search algorithm over the N s sub-trellises is summarized below.
  • Step 1: sort the N s backward survivor paths obtained from the first stage in ascending order of their cumulative metrics. If the backward survivor path with the smallest cumulative metric is also a tail-biting path (starting and ending at the same state), output it as the final ML decision and stop the algorithm.
  • Step 3: load into the open stack the initial zero-length forward paths of the sub-trellises whose level-0 initial state coincides with an end state of the remaining backward survivor paths. Arrange them in the open stack in ascending order of the f-function values of these zero-length paths.
  • Step 4: if the open stack is empty, output x UB as the final ML decision and stop the algorithm.
  • Step 5: if the current top path in the open stack reaches level L in its corresponding sub-trellis, output that path as the final ML decision and stop the algorithm.
  • Step 6: if the current top path in the open stack has already been recorded in the closed table, discard it from the open stack and go to Step 4; otherwise, record the information of the top path in the closed table.
  • Step 7: compute the f-function values of the successor paths of the top path in the open stack. Then remove the top path from the open stack, and delete those successor paths whose f-function value ≥ c UB .
  • Step 9: insert the remaining successor paths into the open stack and reorder the open stack in ascending order of f-function values. Go to Step 4.
  • Step 8 replaces x UB with the first successor x k,(Ln−1) that reaches level L, and the open stack cannot be empty before this replacement. Therefore, when the open stack is forcibly emptied by deleting the paths with f-function value ≥ c UB , x UB is never empty.
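The bookkeeping of Steps 7 and 8 can be sketched as follows (names and f-values are illustrative): successors whose f-value is not below c_UB are pruned, and a successor that reaches level L replaces x_UB and tightens c_UB.

```python
# Illustrative sketch of the Step 7/8 bookkeeping: successors whose
# f-value >= c_UB are pruned, and a successor reaching level L replaces
# the current upper-bound path x_UB (tightening c_UB).

def expand_top(successors, level_L, x_UB, c_UB):
    """successors: list of (f_value, level, path) candidates."""
    survivors = []
    for f, level, path in successors:
        if f >= c_UB:
            continue                       # prune: cannot beat x_UB
        if level == level_L:
            x_UB, c_UB = path, f           # better tail-biting path found
        else:
            survivors.append((f, level, path))
    return survivors, x_UB, c_UB

succ = [(0.9, 2, "a"), (0.6, 3, "c"), (0.4, 4, "b")]
survivors, x_UB, c_UB = expand_top(succ, level_L=4, x_UB=None, c_UB=0.8)
print(survivors, x_UB, c_UB)   # -> [(0.6, 3, 'c')] b 0.4
```

Surviving paths would then be reinserted into the open stack (Step 9); any later top path with f-value at least the tightened c_UB is discarded immediately.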
  • the open stack operates similarly to the stack in a conventional sequential decoding algorithm.
  • the closed table is introduced to eliminate top paths that end in a state already visited at some earlier time. It will be shown in (8) that such top paths have worse f-function values than the previously visited top paths ending in the same state; they can therefore be eliminated directly to speed up the decoding process.
  • the proposed MLWAVA can be applied to generic (n, k, m) (where 1 ≤ k ≤ n) tail-biting convolutional codes by employing the corresponding supertrellis and sub-trellises.
  • forward priority-first search: considering that a decoder typically tends to list the output code bits in a forward manner, the priority-first search algorithm of the second stage is performed in the forward direction.
  • Lemma 2: consider a path x k,(ln−1) in the sub-trellis T k that ends at state s on level l.
  • the corresponding backward survivor path must have been obtained during a WAVA iteration and must end at state s on level l.
  • the f-function value of the extended path x k,((l+1)n−1) can then be computed from that of x k,(ln−1) .
  • Step 5 of the second stage can accordingly be modified, to speed up the decoding algorithm, by adding: "if the h-function value of the top path is determined by a backward survivor path whose initial state at level L coincides with the initial state of the top path, then output the combined path as the ML decision and stop the algorithm".
  • a top path ending at a level between 0 and m−1 does not necessarily have an immediate successor along a first-stage backward survivor path that matches the top path. This is because not all N s states are available at levels 0 through m−1 in a sub-trellis. Therefore, for a top path ending at a level less than m, the original priority-first search step should still be performed.
  • when a path reaches level L−m, it has a unique trajectory along its corresponding sub-trellis up to level L; it can therefore be extended directly to level L to form a tail-biting path of length N, and whether the resulting tail-biting path is the final ML decision can be tested by comparing its f-function value with the f-function values in the open stack.
  • in this way the second-stage process can be simplified to a depth-first procedure, and the decoding complexity is greatly reduced.
  • Step 7 of the second-stage algorithm can accordingly be modified with the following additional condition: if all remaining successor paths end at a level less than m, go to the next step (i.e., Step 8); otherwise, do the following:
  • extend the successor paths sequentially along the first-stage backward survivor paths up to level L−m, and set the f-function value of each extended successor, of length (L−m)n, to be the same as the value computed for the original, non-extended successor, which is shorter by just mn bits.
  • the proposed two-stage decoding algorithm can alternatively be implemented as a forward iterative WAVA followed by a priority-first search algorithm. While this alternative implementation is also optimal in performance, it may result in a different decoding complexity for a given received vector; on average, the decoding complexities of the two implementations should still be the same.
  • the computational complexity of a sequential search algorithm includes not only the evaluation of the decoding metric but also the work involved in searching and reordering the stack elements.
  • without a priority-queue data structure [15], the latter workload can be comparable to the former.
  • the MLWAVA with maximum number of iterations I and parameter Δ (which determines the starting position l* by (14)) is denoted MLWAVA Δ (I).
  • the WAVA with at most I iterations, the creative ML decoding algorithm [9], and the bidirectional threshold-based sequential decoding algorithm [8] are denoted WAVA(I), CMLDA and TDMLDA, respectively.
  • CMLDA and BEAST are ML decoders. For all simulations, at least 100 word errors were collected so that the simulation results are unbiased.
  • the WER performance of WAVA(2) and the ML decoders is first compared in Figure 1.
  • BEAST has the smallest average decoding complexity among the five decoding algorithms, and also has the smallest average decoding complexity for the longer tail-biting convolutional codes when SNR b ≥ 3 dB.
  • the maximum number of information-bit branch-metric computations required by MLWAVA 6 (1) is the lowest of all ML decoding algorithms, and its reduction relative to the other ML decoding algorithms is remarkable.
  • the maximum decoding complexity of MLWAVA 6 (1) is 4 times, 4 times and 8 times smaller than the maximum decoding complexities of TDMLDA, CMLDA and BEAST, respectively. This improvement is of practical importance because the decoding delay is primarily determined by the worst-case complexity.
  • the stopping criterion of Theorem 1 has been implemented in WAVA(2) and MLWAVA 6 (1). To make the best value easy to find, the smallest number in each column is shown in bold.
  • the variances of the decoding complexities of the five decoding algorithms are shown in Table I, and the results are summarized in Figures 2 and 3. Both figures show that MLWAVA has a significantly smaller variance than the other four decoders listed in Table I. In particular, for the [96, 48, 10] block code, the variance of the MLWAVA decoding complexity is at least two orders of magnitude smaller than those of the other four decoding algorithms. Comparing only BEAST and MLWAVA, Table I and the results in Figures 2 and 3 together show that even though BEAST is superior in average decoding complexity for many values of SNR, its variance is much higher than that of MLWAVA.
  • the decoding-complexity variance of MLWAVA is 7,234 times lower than that of BEAST for the [96, 48, 10] block code. This again shows that MLWAVA is the better choice of the two when the decoding delay is of particular concern in practical applications.
  • Table II shows that, compared with the average decoding complexity per information bit for the [96, 48, 10] block code in Table I, the average decoding complexity per information bit of MLWAVA for the new double-length tail-biting convolutional code remains the same or decreases slightly; in contrast, when the information length is doubled, the average and maximum decoding complexities of BEAST increase greatly. This indicates that the decoding complexity of MLWAVA is highly stable under varying codeword lengths, whereas the decoding complexity of BEAST may increase significantly as the codeword length increases.
  • code constraint length: another factor that may affect the decoding complexity is the code constraint length. The effect of this factor on MLWAVA and BEAST has been tested and is also summarized in Table II.
  • the code used in this experiment is a (2, 1, 12) tail-biting convolutional code [17] with generators 5135, 14477 (octal). Its message length is 48, which is equivalent to the [96, 48, 16] block code.
  • the stopping criterion of Theorem 1 has been implemented for MLWAVA Δ (I). To make the best value easy to find, the smaller number in each column is shown in bold.
  • the workload of searching and reordering the stack elements can be significantly mitigated by employing a priority-queue data structure [15] or even a hardware-based stack structure [16].
  • the cost of the BEAST node-check process can be mitigated by storing the states of all extended nodes in an appropriately structured array, in which a matching node can be located in a single memory access.

Abstract

A maximum likelihood decoding algorithm for a tail-biting convolutional code, said algorithm comprising the following steps: (A) executing the Viterbi algorithm (VA) on a backward wrap-around trellis and obtaining the information retained by the backward VA; (B) applying a priority-first search algorithm in the forward direction on all sub-trellises. The advantageous effects are: an effective early-stopping criterion is used in order to reduce decoding complexity; a simulation of a (2, 1, 6) tail-biting convolutional code on an additive white Gaussian noise channel exhibits significant savings over BEAST and CMLDA in both the maximum value and the variance of the decoding complexity.

Description

Maximum Likelihood Decoding Algorithm for Novel Tail-Biting Convolutional Codes
[Technical Field]
The present invention relates to the field of data processing, and in particular to a novel maximum likelihood decoding algorithm, MLWAVA, for tail-biting convolutional codes.
[Background Art]
Since the invention of convolutional codes, they have been widely used to provide effective error protection in digital communications. In a practical convolutional encoder, a certain number of zeros is typically appended to the end of the information-bit sequence to clear the contents of the shift register, so that encoding of the next information sequence can begin directly without re-initialization. These zero tail bits enhance the error-protection capability of the convolutional code. For sufficiently long information sequences, the rate loss due to the zero tail bits is almost negligible; when the information sequence is short, however, these zero tail bits introduce a significant loss of code rate.
In the literature, several methods have been proposed to mitigate the rate loss of the above (short-length) zero-tail convolutional codes, such as direct truncation [1], dispersion [1], and tail biting [2], [3], [4]. Specifically, tail biting overcomes the rate loss in a straightforward manner and causes only a small performance degradation. This has been confirmed in [1], which shows that tail-biting convolutional codes have better error-protection performance than dispersed and zero-tail convolutional codes. Unlike a zero-tail convolutional encoder, which always starts and ends in the all-zero state, a tail-biting convolutional encoder only ensures that the initial and final states are identical (the particular state being determined by the input data). Since any state may be the initial state of a tail-biting convolutional encoder, the decoding complexity increases dramatically.
Similar to the decoding of zero-tail convolutional codes, the decoding of tail-biting convolutional codes is performed on a trellis, where a codeword now corresponds to a path that starts and ends in the same (not necessarily all-zero) state. For convenience, a path with identical initial and final states on the tail-biting trellis is called a tail-biting path. Since there is a one-to-one correspondence between "codewords" and "tail-biting paths", the two terms are used interchangeably herein.
Let N_s denote the number of possible initial states (equivalently, final states) of the trellis of a tail-biting convolutional code. The trellis can then be decomposed into N_s sub-trellises, each with identical initial and final states. Following the previous naming convention, these sub-trellises are called tail-biting sub-trellises, or simply sub-trellises when the abbreviation causes no ambiguity. For convenience, T denotes the trellis of a tail-biting convolutional code and T_k its k-th sub-trellis. Clearly, the decoding complexity of a tail-biting convolutional code is many times that of a zero-tail convolutional code of similar size, because all tail-biting paths in every sub-trellis must be examined.
To reduce the decoding complexity, several suboptimal decoding algorithms for tail-biting convolutional codes have been proposed in the literature [2], [3], [5], [6], [7], among which the wrap-around Viterbi algorithm (WAVA) has the lowest decoding complexity [2]. Conceptually, WAVA repeatedly applies the Viterbi algorithm (VA) to the trellis of the tail-biting convolutional code in a wrap-around fashion. During its execution, WAVA examines not only tail-biting paths but also paths that start and end in different states; it may therefore output a path to which no codeword corresponds. WAVA can equivalently be viewed as applying the VA to a "super-trellis" formed by connecting a pre-specified number of copies of the trellis in series. Simulations show that wrapping around at most four trellises suffices to achieve near-optimal performance [2].
When optimal decoding performance is required, WAVA is no longer a suitable choice because it cannot guarantee that the maximum-likelihood (ML) tail-biting path will always be found. The ML tail-biting path can be obtained directly by performing the VA on every tail-biting sub-trellis; however, this brute-force approach is impractical because of its high computational complexity.
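The brute-force baseline can be made concrete for a toy code. The sketch below is illustrative only: the (2, 1, 2) code with octal generators (7, 5), the block length L = 6, and the BPSK mapping 0 → +1, 1 → −1 are assumptions, not taken from this document. It encodes every possible message with a tail-biting encoder and keeps the codeword closest to the received vector — exactly the exhaustive search whose cost motivates the algorithms discussed next.

```python
from itertools import product

M_ORDER = 2                      # memory order m (assumed toy value)
L = 6                            # information bits per block (assumed)
TAPS = ((0, 1, 2), (0, 2))       # octal (7, 5) generators as tap index lists

def tb_encode(bits):
    """Tail-biting encoding: preload the register with the last m information
    bits so that the final encoder state equals the initial one."""
    reg = list(reversed(bits[-M_ORDER:]))  # previous inputs, most recent first
    out = []
    for b in bits:
        window = [b] + reg                 # current input plus the past m inputs
        for taps in TAPS:
            out.append(sum(window[t] for t in taps) % 2)
        reg = [b] + reg[:-1]
    return out

def brute_force_ml(r):
    """Examine every tail-biting path (one per message) and return the message
    whose codeword is closest, in squared Euclidean distance, to r."""
    best_msg, best_metric = None, float("inf")
    for msg in product((0, 1), repeat=L):
        c = tb_encode(list(msg))
        metric = sum((rj - (1 - 2 * cj)) ** 2 for rj, cj in zip(r, c))
        if metric < best_metric:
            best_msg, best_metric = list(msg), metric
    return best_msg
```

Even for this toy code the search visits 2^L codewords per block, which is why the two-phase algorithms below replace the exhaustive sweep over all N_s sub-trellises.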
In 2005, Bocharova et al. proposed an ML decoding algorithm for tail-biting convolutional codes known as the Bidirectional Efficient Algorithm for Searching code Trees (BEAST) [8]. Conceptually, BEAST repeatedly and simultaneously explores, in both the forward and the backward direction, the nodes whose decoding metrics lie below a certain threshold on each sub-trellis. It raises the threshold at every step until an ML path is found. The simulation results in [8] show that BEAST has very low decoding complexity and works well at high signal-to-noise ratio (SNR).
A year later, Shankar et al. [9] proposed another ML decoding algorithm for tail-biting convolutional codes, referred to here for convenience as the creative maximum-likelihood decoding algorithm (CMLDA). CMLDA has two phases. The first phase applies the VA to the trellis of the tail-biting convolutional code to gather certain trellis information. Based on this trellis information, algorithm A* is then executed in parallel on all sub-trellises in the second phase to produce the ML decision. As demonstrated in [9], CMLDA reduces the decoding complexity from the N_s VA executions required by the brute-force approach to roughly 1.3 VA executions, without sacrificing optimality. Seeking further reductions in decoding complexity, the authors of [10] and [11] redefined the heuristic function given in [9]. The former algorithm [10] applies the backward VA in the first phase, rather than the forward VA used in [9] and [11]. This document provides an important extension of [10] in which multiple iterations of the backward VA are considered in the first phase.
More recently, Wang et al. proposed another ML decoding algorithm that uses no stack [12]. It compares the survivor paths between two consecutive WAVA iterations and, whenever the critical survivor path described in [12] is "trapped" on a non-ML path, launches the VA on the corresponding tail-biting sub-trellis. The decoding complexity of [12], however, turns out to be much higher than that of [9].
[Summary of the Invention]
To solve the problems of the prior art, the present invention provides a novel maximum-likelihood decoding algorithm for tail-biting convolutional codes, addressing the high maximum decoding complexity of existing techniques.
The invention is realized by the following technical solution. A novel maximum-likelihood decoding algorithm for tail-biting convolutional codes is designed and implemented, comprising the following steps: (A) perform the Viterbi algorithm (VA) on the trellis in a backward wrap-around manner, and obtain the information retained from the preceding backward VA rounds; (B) apply a priority-first search algorithm in the forward direction on all sub-trellises.
As a further improvement of the invention, in step (A) the backward wrap-around Viterbi algorithm (WAVA) is applied to the tail-biting convolutional code trellis T, examining both the tail-biting paths and all paths of an auxiliary supercode $\bar{\mathcal{C}}$.

As a further improvement of the invention, the auxiliary supercode $\bar{\mathcal{C}}$ consists of all paths on the trellis T of $\mathcal{C}_{\text{TB}}$, where $\mathcal{C}_{\text{TB}}$ denotes an (n, 1, m) tail-biting convolutional code with L information bits, the underlying convolutional mapping is restricted to one from 1 information bit to n code bits, and m is the memory order.
As a further improvement of the invention, the metric of a path in the trellis T is set as follows. Let l be a fixed integer with 0 ≤ l ≤ L. For a path with binary label $x_{(ln-1)} = (x_0, x_1, \ldots, x_{ln-1})$ ending at level l of the trellis T, the associated path metric is defined as

$$M(x_{(ln-1)}) \triangleq \sum_{j=0}^{ln-1} M(x_j),$$

where $M(x_j) \triangleq (y_j \oplus x_j)\,|\phi_j|$ is the corresponding bit metric. The so-called cumulative metric of the path is the sum of a pre-specified initial metric and the path metric defined above.
As a further improvement of the invention, in step (A) the maximum-likelihood (ML) decision is obtained as follows. At the end of the first iteration, if the best backward survivor path is a tail-biting path, then it is the ML decision. For every iteration after the first, if the cumulative metric of the best tail-biting survivor path found so far is no larger than the cumulative metric of the backward survivor ending in state s, for every state s outside the set of all ending states of the tail-biting survivor paths encountered by the backward WAVA up to the i-th iteration, then the best tail-biting survivor path is the ML decision.
As a further improvement of the invention, in step (B) the priority-first search algorithm is operated with two data structures: the Open Stack and the Closed Table. The Open Stack stores the paths visited so far by the priority-first search algorithm, and the Closed Table keeps track of the paths that were at the top of the Open Stack at an earlier time.
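The Open Stack / Closed Table pair corresponds to the classic open and closed structures of best-first search. The generic sketch below uses a binary heap for the Open Stack and a set of keys for the Closed Table; the `(f, key, payload)` entry layout is an implementation assumption of this illustration, not part of the claimed algorithm.

```python
import heapq

def priority_first_search(start, expand, is_goal):
    """Best-first search: `start` is a list of (f, key, payload) entries,
    `expand` yields successor entries, and `is_goal` tests a popped key.
    The heap plays the role of the Open Stack; `closed` is the Closed
    Table of keys already taken from the top of the stack."""
    open_stack = list(start)
    heapq.heapify(open_stack)
    closed = set()
    while open_stack:
        f, key, payload = heapq.heappop(open_stack)
        if key in closed:              # a stale, worse copy of a closed path
            continue
        closed.add(key)
        if is_goal(key):
            return f, payload
        for entry in expand(f, key, payload):
            if entry[1] not in closed:
                heapq.heappush(open_stack, entry)
    return None
```

In the decoder's setting, a key would identify a (sub-trellis, level, state) triple and f would be the second-phase evaluation function; the first goal entry popped is optimal whenever the heuristic part of f never overestimates the remaining cost.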
As a further improvement of the invention, the sub-trellis backward survivor paths obtained in step (A) are sorted in ascending order of their cumulative metrics, and the algorithm stops once the maximum-likelihood (ML) decision is obtained.
As a further improvement of the invention, effective early-stopping criteria are employed to reduce the decoding complexity.
The beneficial effects of the invention are as follows. Effective early-stopping criteria are adopted to reduce the decoding complexity; simulations of a (2, 1, 6) tail-biting convolutional code over the additive white Gaussian noise channel show significant savings over both BEAST and CMLDA in the maximum as well as the variance of the decoding complexity.
[Description of the Drawings]
Fig. 1 shows the word error rates (WERs) of WAVA(2) and the ML decoders (e.g., MLWAVA, TDMLDA, CMLDA or BEAST) for the [24, 12, 8] extended Golay code and the [96, 48, 10] block code;

Fig. 2 shows the variance of the number of branch metric computations per information bit for the decoding algorithms of Table I applied to the [24, 12, 8] extended Golay code;

Fig. 3 shows the variance of the number of branch metric computations per information bit for the decoding algorithms of Table I applied to the [96, 48, 10] block code;

Fig. 4 shows the word error rates (WERs) of the ML decoders for the [192, 96, 10] and [96, 48, 16] block codes;

Fig. 5 shows the average number of branch metric computations per information bit for the decoding algorithms of Table I applied to the [24, 12, 8] extended Golay code;

Fig. 6 shows the average number of branch metric computations per information bit for the decoding algorithms of Table I applied to the [96, 48, 10] block code;

Fig. 7 shows the average number of branch metric computations per information bit for the decoding algorithms of Table II applied to the [192, 96, 10] block code;

Fig. 8 shows the average number of branch metric computations per information bit for the decoding algorithms of Table II applied to the [96, 48, 16] block code.
[Detailed Description]
The invention is further described below with reference to the drawings and specific embodiments.
A novel maximum-likelihood decoding algorithm for tail-biting convolutional codes comprises the following steps: (A) perform the Viterbi algorithm (VA) on the trellis in a backward wrap-around manner, and obtain the information retained from the preceding backward VA rounds; (B) apply a priority-first search algorithm in the forward direction on all sub-trellises.
In step (A), the backward wrap-around Viterbi algorithm (WAVA) is applied to the tail-biting convolutional code trellis T, examining both the tail-biting paths and all paths of the auxiliary supercode $\bar{\mathcal{C}}$.

The auxiliary supercode $\bar{\mathcal{C}}$ consists of all paths on the trellis T of $\mathcal{C}_{\text{TB}}$, where $\mathcal{C}_{\text{TB}}$ denotes an (n, 1, m) tail-biting convolutional code with L information bits, the underlying convolutional mapping is restricted to one from 1 information bit to n code bits, and m is the memory order.
The metric of a path in the trellis T is set as follows. Let l be a fixed integer with 0 ≤ l ≤ L. For a path with binary label $x_{(ln-1)} = (x_0, x_1, \ldots, x_{ln-1})$ ending at level l of the trellis T, the associated path metric is defined as

$$M(x_{(ln-1)}) \triangleq \sum_{j=0}^{ln-1} M(x_j),$$

where $M(x_j) \triangleq (y_j \oplus x_j)\,|\phi_j|$ is the corresponding bit metric. The so-called cumulative metric of the path is the sum of a pre-specified initial metric and the path metric defined above.
In step (A), the maximum-likelihood (ML) decision is obtained as follows. At the end of the first iteration, if the best backward survivor path is a tail-biting path, then it is the ML decision. For every iteration after the first, if the cumulative metric of the best tail-biting survivor path found so far is no larger than the cumulative metric of the backward survivor ending in state s, for every state s outside the set of all ending states of the tail-biting survivor paths encountered by the backward WAVA up to the i-th iteration, then the best tail-biting survivor path is the ML decision.
In step (B), the priority-first search algorithm is operated with two data structures: the Open Stack and the Closed Table. The Open Stack stores the paths visited so far by the priority-first search algorithm, and the Closed Table keeps track of the paths that were at the top of the Open Stack at an earlier time.
The sub-trellis backward survivor paths obtained in step (A) are sorted in ascending order of their cumulative metrics, and the algorithm stops once the maximum-likelihood (ML) decision is obtained.
Effective early-stopping criteria are employed to reduce the decoding complexity.
The invention proposes a novel maximum-likelihood WAVA (MLWAVA) decoding algorithm for tail-biting convolutional codes. Unlike WAVA, the super-trellis of MLWAVA is conceptually formed by connecting trellises in the backward direction. Specifically, MLWAVA first performs the VA on the trellis in a backward wrap-around manner and then, based on the information retained from the preceding backward VA rounds, applies a priority-first search algorithm in the forward direction on all sub-trellises. Note that the priority-first search algorithm is a simplified version of algorithm A*. Similar to the ML decoding algorithm in [9], MLWAVA can be viewed as a two-phase decoding algorithm, in which the backward execution of the VA and the forward execution of the priority-first search are regarded as the first and second phases, respectively; nevertheless, a new ML decoding metric and a new evaluation function are designed herein for the first and second phases. In addition, an effective early-stopping criterion is proposed for each of the two phases to further reduce the decoding complexity. Simulation results for a (2, 1, 6) tail-biting convolutional code with information length 48 show that the average decoding complexity of MLWAVA at SNR_b = 4 dB is lower than that of the ML decoding algorithms in [9] and [12], where SNR_b denotes the signal-to-noise ratio per information bit. Although BEAST has a lower average decoding complexity than MLWAVA at high SNR, the variance of the decoding complexity of MLWAVA at SNR_b = 4 dB is 7234 times lower than that of BEAST. This makes MLWAVA the better choice of the two when decoding latency is the main concern. Compared with the near-optimal WAVA [2], the optimal MLWAVA is lower in both average and maximum decoding complexity.
Let $\mathcal{C}_{\text{TB}}$ denote an (n, 1, m) tail-biting convolutional code with L information bits, where for simplicity the underlying convolutional mapping is restricted to one from 1 information bit to n code bits and m is the memory order. Such a code can equivalently be regarded as an $[nL, L, d_{\min}]$ block code.¹ Under this setting, the trellis T of the tail-biting convolutional code $\mathcal{C}_{\text{TB}}$ has $N_s = 2^m$ states at each level and L + 1 levels. Although only the tail-biting paths, which are constrained to have identical initial and final states, correspond to codewords of $\mathcal{C}_{\text{TB}}$, an auxiliary supercode $\bar{\mathcal{C}}$ is introduced that consists of all paths on the trellis T, which may now start and end in different states.
Let $x_{(N-1)} = (x_0, x_1, \ldots, x_{N-1})$ denote a binary codeword of $\mathcal{C}_{\text{TB}}$, where N = nL. The hard-decision sequence $y = (y_0, y_1, \ldots, y_{N-1})$ corresponding to the received vector $r = (r_0, r_1, \ldots, r_{N-1})$ is defined as

$$y_j \triangleq \begin{cases} 1, & \text{if } \phi_j < 0, \\ 0, & \text{otherwise,} \end{cases} \qquad (1)$$

where

$$\phi_j \triangleq \ln \frac{\Pr(r_j \mid 0)}{\Pr(r_j \mid 1)}.$$

The syndrome of y is accordingly given by $s = y H^{\mathsf{T}}$, where H is the parity-check matrix of the block code equivalent to $\mathcal{C}_{\text{TB}}$. Let E(s) be the set of all error patterns whose syndrome is s. The ML decoding output $\hat{x}$ for the received vector r is then equal to

$$\hat{x} = y \oplus e^*,$$

where $e^*$ satisfies

$$e^* = \arg\min_{e \in E(s)} \sum_{j=0}^{N-1} e_j\,|\phi_j|, \qquad (2)$$

and $\oplus$ denotes modulo-2 addition. The metric of a path in the trellis T is defined accordingly as follows.
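The syndrome-based search of eq. (2) can be illustrated with a toy block code. In the sketch below, the [3, 1] parity-check matrix and the reliability values are invented for illustration; the routine computes the syndrome of the hard decisions y and searches all error patterns sharing that syndrome for the one with the smallest weighted sum of |φ_j|.

```python
from itertools import product

def ml_from_syndrome(H, y, phi):
    """Brute-force illustration of eq. (2): among all error patterns e whose
    syndrome equals that of y, pick the one minimising sum_j e_j * |phi_j|,
    then output y XOR e."""
    n = len(y)
    def syndrome(v):
        return tuple(sum(H[i][j] * v[j] for j in range(n)) % 2
                     for i in range(len(H)))
    s = syndrome(y)
    best_e, best_w = None, float("inf")
    for e in product((0, 1), repeat=n):
        if syndrome(e) == s:
            w = sum(ej * abs(pj) for ej, pj in zip(e, phi))
            if w < best_w:
                best_e, best_w = e, w
    return [yj ^ ej for yj, ej in zip(y, best_e)]
```

For H = [[1, 1, 0], [1, 0, 1]] (the [3, 1] repetition code), y = [1, 0, 0] and phi = [0.2, 1.0, 1.0] (bit 0 received unreliably), the cheapest consistent error pattern is (1, 0, 0), so the decoder outputs the all-zero codeword.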
Definition 1: Let l be a fixed integer with 0 ≤ l ≤ L. For a path with binary label $x_{(ln-1)} = (x_0, x_1, \ldots, x_{ln-1})$ ending at level l of the trellis T, the associated path metric is defined as

$$M(x_{(ln-1)}) \triangleq \sum_{j=0}^{ln-1} M(x_j),$$

where $M(x_j) \triangleq (y_j \oplus x_j)\,|\phi_j|$ is the corresponding bit metric. The so-called cumulative metric of the path is the sum of a pre-specified initial metric and the path metric defined above. Note that the initial metric is zero for the first WAVA iteration and is set, for every iteration other than the first, to a value determined by the preceding iteration.

¹ A tail-biting convolutional code of a given length is also a block code, so $[nL, L, d_{\min}]$ is used throughout to denote the tail-biting convolutional code in block-code form, where $d_{\min}$ is the minimum pairwise Hamming distance of the code.
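The quantities of Definition 1 can be computed directly from the channel values. The sketch below assumes an AWGN channel with BPSK mapping 0 → +1, 1 → −1 and noise variance σ², under which the log-likelihood ratio simplifies to φ_j = 2r_j/σ²; this channel model is an assumption of the illustration. It produces φ_j, the hard decisions y_j, and the bit metrics M(x_j) = (y_j ⊕ x_j)|φ_j|.

```python
def bit_metric_tools(r, sigma2=1.0):
    """Return (phi, y, metric) for a received vector r over BPSK/AWGN:
    phi[j] = ln Pr(r_j|0)/Pr(r_j|1) = 2*r_j/sigma2 under the assumed mapping,
    y[j] is the hard decision, and metric(j, xj) is the bit metric
    M(x_j) = (y_j XOR x_j) * |phi_j| of Definition 1."""
    phi = [2.0 * rj / sigma2 for rj in r]
    y = [1 if pj < 0 else 0 for pj in phi]
    def metric(j, xj):
        return (y[j] ^ xj) * abs(phi[j])
    return phi, y, metric
```

A path metric per Definition 1 is then the running sum of metric(j, x_j): a code bit that agrees with the hard decision contributes nothing, while a disagreement costs the bit's reliability |φ_j|.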
Since the proposed decoding algorithm performs the backward VA rather than the forward VA, the refinement of Definition 1 to the backward VA is obtained in a straightforward manner. For example, the metric associated with a backward path labeled $x_{[ln]} = (x_{N-1}, \ldots, x_{ln})$ is given by

$$M(x_{[ln]}) \triangleq \sum_{j=ln}^{N-1} M(x_j).$$

Hence, the cumulative metric of this backward path is the sum of the initial metric assigned to its starting state at level L and the metric given above. It should be emphasized that a binary label uniquely determines the path as well as its starting and ending states.
As stated in the introduction, the proposed algorithm can be divided into two phases. In the first phase, the backward WAVA is applied to the trellis T, examining not only the tail-biting paths but all paths of $\bar{\mathcal{C}}$. Each of the $N_s$ states at level l in the i-th WAVA iteration then has one VA backward survivor path, and if this backward survivor (at level l) ends in state s, its associated cumulative metric is denoted by $\hat{c}^{(i)}_l(s)$. Note that $\hat{c}^{(i)}_0(s)$ will be used as the initial metric of state s at level L at the beginning of the (i+1)-th WAVA iteration; consequently, a backward survivor with label $x_{[ln]} = (x_{N-1}, \ldots, x_{ln})$ has cumulative metric

$$\hat{c}^{(i)}_l(s) = \hat{c}^{(i-1)}_0(s') + M(x_{[ln]}),$$

where $s'$ and s are, respectively, the starting state of the backward survivor at level L and its ending state at level l (with $\hat{c}^{(0)}_0(s') \triangleq 0$). All metrics $\hat{c}^{(i)}_l(s)$, for $s \in S$, 0 ≤ l ≤ L and 1 ≤ i ≤ I, are retained for later use in the second phase. Here, I is the maximum number of WAVA iterations to be performed. At the end of the first phase, $N_s$ backward survivor paths ending at level 0 (rather than level L) are produced. It is noted again that these backward survivors are only guaranteed to lie in $\bar{\mathcal{C}}$ and may not be codewords of $\mathcal{C}_{\text{TB}}$.
Some notation used in the subsequent description of the backward WAVA is introduced next. The label of the best backward survivor path, i.e., the one with the smallest cumulative path metric among all backward survivors at the end of the i-th WAVA iteration, is denoted by $\hat{x}^{(i)}$, and its associated cumulative metric by $\hat{c}^{(i)}$. Likewise, if the best tail-biting survivor path at the end of the i-th WAVA iteration exists, it is denoted by $\tilde{x}^{(i)}$, and its associated cumulative path metric by $\tilde{c}^{(i)}$. Let $\hat{x}^{(i)}_*$ and $\tilde{x}^{(i)}_*$ denote the minimum-cumulative-metric labels among $\{\hat{x}^{(1)}, \ldots, \hat{x}^{(i)}\}$ and $\{\tilde{x}^{(1)}, \ldots, \tilde{x}^{(i)}\}$, respectively, and denote their cumulative path metrics by $\hat{c}^{(i)}_*$ and $\tilde{c}^{(i)}_*$. The following theorem is a modified version of the theorem in [2]; it provides an early-stopping criterion for the iterative backward WAVA.
Theorem 1 ([2]): At the end of the first iteration, if the best backward survivor path $\hat{x}^{(1)}$ is a tail-biting path, then it is the ML decision. For every iteration after the first, if

$$\tilde{c}^{(i)}_* \leq \hat{c}^{(i)}_0(s) \qquad (3)$$

holds for every $s \in S \setminus \tilde{S}^{(i)}$, where $\tilde{S}^{(i)}$ is the set of all ending states (at level 0) of the tail-biting survivor paths encountered by the backward WAVA up to the i-th iteration, then the best tail-biting survivor path $\tilde{x}^{(i)}_*$ is the ML decision.
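The stopping test of Theorem 1 is cheap to evaluate once the level-0 survivor metrics are known. A minimal sketch follows; the dictionary/set bookkeeping of states is an assumption of this illustration.

```python
def theorem1_stop(c_tb_best, c0, tb_states, all_states):
    """Return True when the best tail-biting survivor metric c_tb_best is no
    larger than the level-0 survivor metric c0[s] of every state s that has
    not yet produced a tail-biting survivor, i.e. condition (3) holds."""
    return all(c_tb_best <= c0[s] for s in all_states - tb_states)
```

When the test succeeds, the iteration loop can terminate early with the ML decision in hand; otherwise another backward wrap-around pass is performed.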
By Theorem 1, the ML decision may be found before the backward WAVA reaches its maximum number of iterations; the decoding complexity is thereby reduced. For completeness, the steps of the backward WAVA are given below. It is essentially identical to WAVA, except that it runs in the backward direction and must record the path metrics $\hat{c}^{(i)}_l(s)$ for $s \in S$, 0 ≤ l ≤ L and 1 ≤ i ≤ I.
<Phase 1: Backward WAVA>

Step 1: Initialize $\hat{c}^{(1)}_L(s) = 0$ for every state $s \in S$.

Step 2: Apply the VA to the trellis T in the backward direction, i.e., from level L back to level 0. For l = L − 1 down to l = 0, record $\hat{c}^{(1)}_l(s)$ for every state s at level l. Find $\hat{x}^{(1)}$ and, if it exists, $\tilde{x}^{(1)}$.

Step 3: If $\hat{x}^{(1)}$ is a tail-biting path, output $\hat{x}^{(1)}$ as the ML decision and stop the algorithm.

Step 4: If I = 1, stop the algorithm.

Step 5: Let i = 2.

Step 6: Initialize $\hat{c}^{(i)}_L(s) = \hat{c}^{(i-1)}_0(s)$ for every state $s \in S$.

Step 7: Apply the VA to the trellis T in the backward direction, i.e., from level L back to level 0. For l = L − 1 down to l = 0, record $\hat{c}^{(i)}_l(s)$ for every state s at level l. Find $\hat{x}^{(i)}$ and, if it exists, $\tilde{x}^{(i)}$.

Step 8: If $\tilde{x}^{(i)}_*$ satisfies the stopping criterion in (3), output $\tilde{x}^{(i)}_*$ as the ML decision and stop the algorithm.

Step 9: If i < I, let i = i + 1 and go to Step 6; otherwise, stop the algorithm.
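One backward pass of Steps 2/7 can be sketched as follows. The tiny (2, 1, 1) code with outputs (b, b ⊕ s), the hard-decision Hamming branch metric, and the block length L = 2 are all assumptions made to keep the illustration self-contained; the essential point is the backward recursion c[l][s] = min over b of (branch metric + c[l+1][next(s, b)]) and the recording of every c[l][s] for later use in the second phase.

```python
import math

M_ORDER, L = 1, 2                       # toy (2,1,1) code, two trellis levels

def step(s, b):
    """Forward transition of the toy encoder: from state s with input b,
    emit (b, b XOR s) and move to state b."""
    return b, (b, b ^ s)

def backward_va_pass(y, init):
    """One backward Viterbi pass from level L down to level 0, recording the
    survivor metric c[l][s] for every level and state (Hamming metric
    against the hard decisions y; n = 2 output bits per level)."""
    ns = 1 << M_ORDER
    c = [[math.inf] * ns for _ in range(L + 1)]
    c[L] = list(init)                   # initial metrics at level L
    for l in range(L - 1, -1, -1):
        for s in range(ns):
            for b in (0, 1):
                nxt, out = step(s, b)
                branch = sum(o ^ yj for o, yj in zip(out, y[2 * l:2 * l + 2]))
                cand = branch + c[l + 1][nxt]
                if cand < c[l][s]:
                    c[l][s] = cand
    return c
```

Running the pass on the noiseless hard decisions of the tail-biting codeword of message (1, 0), i.e. y = [1, 1, 0, 1], yields c[0] = [0, 1]: the zero-metric survivor ends in the true initial state 0, so by Theorem 1 the best backward survivor is already the ML decision.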
Before proceeding to the decoding algorithm of the second phase, one important fact should be mentioned. Clearly, if the backward WAVA does not output the ML decision after I iterations, then $\tilde{c}^{(I)}_*$ is an upper bound on the metric of the ML decision. This fact will be used to speed up the second phase. Note again that, unlike the first phase, which operates on the trellis T in the backward direction, the second phase applies priority-first search decoding to all $N_s$ tail-biting sub-trellises in the forward direction. This guarantees that the output of the second phase is always a codeword of $\mathcal{C}_{\text{TB}}$.
对于子网格Tk上一个标签为xk,(ln-1)=(xk,0,xk,1,...,xk,ln-1)的路径,一个与之关联的新评估函数如下给出:For a path on the subgrid T k with a label x k,(ln-1) =(x k,0 ,x k,1 ,...,x k,ln-1 ), a new associated with it The evaluation function is given as follows:
f(xk,(ln-1))=g(xk,(ln-1))+h(xk,(ln-1)),    (4)f(x k,(ln-1) )=g(x k,(ln-1) )+h(x k,(ln-1) ), (4)
where, according to Definition 1,
Figure PCTCN2017097667-appb-000088
with initial value g(x_k,(-1)) = 0, and
Figure PCTCN2017097667-appb-000089
In (6), s is the state (at level l) at which the forward path x_k,(ln-1) ends, and s_k is the single initial (and final) state of the subtrellis T_k. It can be seen that f(x_k,(N-1)) = g(x_k,(N-1)), because
Figure PCTCN2017097667-appb-000090
Therefore, the tail-biting path with the smallest f-function value is the tail-biting path with the smallest ML metric in (2).
The priority-first search algorithm requires two data structures. The first, called the open stack, stores the paths visited so far by the priority-first search. The other, called the closed table, keeps track of those paths that have previously been at the top of the open stack. They are so named because a path in the open stack may be further extended in the future and thus remains open, whereas a path in the closed table can no longer be extended and is therefore closed to future expansion. The priority-first search algorithm over the N_s subtrellises is summarized next.
<Phase 2: Priority-First Search Algorithm>
Step 1: Sort the N_s backward survivor paths obtained from the first stage in ascending order of their cumulative metrics. If the backward survivor path with the smallest cumulative metric is also a tail-biting path (i.e., it starts and ends at the same state), output it as the final ML decision and stop the algorithm.
Step 2: If it exists, initialize
Figure PCTCN2017097667-appb-000091
and
Figure PCTCN2017097667-appb-000092
. Otherwise, set c_UB = ∞ and x_UB = null. Delete all backward survivor paths whose cumulative metric is not less than c_UB.
Step 3: Load into the open stack the initial zero-length forward paths of those subtrellises whose initial state at level 0 coincides with an end state of one of the remaining backward survivor paths. Order these zero-length paths in the open stack in ascending order of their f-function values.
Step 4: If the open stack is empty, output x_UB as the final ML decision and stop the algorithm.²
Step 5: If the current top path in the open stack has reached level L in its corresponding subtrellis, output it as the final ML decision and stop the algorithm.
Step 6: If the current top path in the open stack has already been recorded in the closed table, discard it from the open stack and go to Step 4; otherwise, record the information about this top path in the closed table.³
Step 7: Compute the f-function values of the successors of the top path in the open stack. Then delete the top path from the open stack. Delete those successors whose f-function value is ≥ c_UB.
Step 8: If a successor x_k,(Ln-1) reaches level L with f(x_k,(Ln-1)) < c_UB, update x_UB = x_k,(Ln-1) and c_UB = f(x_k,(Ln-1)). Repeat this update until all successors that reach level L have been examined. Delete all successors that reach level L.
Step 9: Insert the remaining successors into the open stack, and reorder the open stack in ascending order of f-function values. Go to Step 4.
² Note that x_UB cannot be null when the open stack is empty. This is because x_UB is initialized as null in Step 2, with c_UB = ∞, only when
Figure PCTCN2017097667-appb-000093
does not exist. In that case, Step 8 will replace x_UB with the first successor x_k,(Ln-1) that reaches level L, and the open stack cannot be empty before this replacement occurs. Hence, when the open stack is forced empty by deleting paths with f-function value ≥ c_UB, x_UB is never null.
³ Note that, to uniquely identify a path, only its start and end states and its end level need to be recorded in the closed table.
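As a rough sketch, the open stack and the closed table of the above steps can be realized with a binary heap and a set. Everything below is a hypothetical skeleton, not the patented implementation: the start paths, the successor generator (which in a real decoder would compute f = g + h as in (4) from the first-stage survivor metrics), and the path encoding are placeholders, and the c_UB pruning of Steps 2, 7, and 8 is omitted.

```python
import heapq

def priority_first_search(start_paths, successors, goal_level):
    """Priority-first search with an open stack (heap) and a closed table.

    start_paths: iterable of (f_value, level, state, path) tuples (Step 3).
    successors(f, level, state, path): yields (f', level+1, state', path').
    Returns the first path popped at goal_level, i.e. the one with the
    smallest f-function value (Step 5), or None if the stack empties.
    """
    open_stack = list(start_paths)
    heapq.heapify(open_stack)            # kept in ascending order of f
    closed = set()                       # footnote 3: start/end state + level
    while open_stack:                    # Step 4 (x_UB handling omitted)
        f, level, state, path = heapq.heappop(open_stack)
        if level == goal_level:          # Step 5
            return path
        key = (path[0], state, level)    # path[0] encodes the start state here
        if key in closed:                # Step 6: discard re-visited paths
            continue
        closed.add(key)
        for succ in successors(f, level, state, path):   # Step 7: expand
            heapq.heappush(open_stack, succ)             # Step 9: reinsert
    return None
```

Because the heap always pops the entry with the smallest f-function value, the first path to reach the goal level is the best one found, mirroring the optimality argument of the algorithm.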
As can be seen from the above algorithm, the open stack operates much like the stack in conventional sequential decoding algorithms. The closed table, however, is introduced to eliminate top paths that end at a state visited at some earlier time. It will be shown later, in (8), that such top paths have worse f-function values than the previously visited top paths ending at the same state; they can therefore be eliminated directly to speed up the decoding process.
It should be pointed out that the proposed MLWAVA can be applied to a general (n, k, m) (where 1 < k ≤ n) tail-biting convolutional code by adopting its corresponding supertrellis and subtrellises. Moreover, one could equally use a forward WAVA in the first stage and perform the priority-first search in the backward direction in the second stage. Since a decoder usually prefers to list the output code bits in the forward direction, the priority-first search algorithm is chosen to be performed in the forward direction in the second stage.
III. Early Stopping Criterion for the Priority-First Search Algorithm
In this section, a property of the evaluation function f is derived and subsequently used to accelerate the priority-first search decoding algorithm. We begin with a lemma that is essential for proving Theorem 3, the main theorem of this section.
Lemma 2: For a path x_k,(ln-1) ending at state s at level l of the subtrellis T_k,
Figure PCTCN2017097667-appb-000094
where
Figure PCTCN2017097667-appb-000095
records the cumulative path metric of the backward survivor path whose start state at level L is s_k.⁴
Proof: Clearly, the backward survivor path associated with
Figure PCTCN2017097667-appb-000096
is a valid backward path in the subtrellis T_k, since it starts from state s_k at level L. Let x_k,(N-1) be the tail-biting path formed by combining x_k,(ln-1) with this survivor path. Suppose that there exists i_2 such that
Figure PCTCN2017097667-appb-000097
for some i_2 ≠ i_1. Since the f-function value is non-decreasing along every path in the subtrellis T_k, we obtain
f(x_k,(ln-1)) ≤ f(x_k,(N-1)) = g(x_k,(N-1)) + h(x_k,(N-1)).    (9)
One can also derive
Figure PCTCN2017097667-appb-000098
⁴ Note that, by the parameters specified in the notation
Figure PCTCN2017097667-appb-000099
, this backward survivor path must be obtained during the i_1-th WAVA iteration and must end at state s at level l.
Equations (9) and (10) then jointly imply
Figure PCTCN2017097667-appb-000100
This contradicts the definition of the function h, which guarantees that h(x_k,(N-1)) = 0.
Theorem 3: Let x_k,(ln-1) = (x_k,0, x_k,1, ..., x_k,ln-1) be the current top path in the open stack, and denote by s its end state at level l. Suppose that
Figure PCTCN2017097667-appb-000101
records the cumulative path metric of the backward survivor path
Figure PCTCN2017097667-appb-000102
with start state s_k at level L and end state s at level l, obtained during the i_1-th WAVA iteration. Then merging the forward path x_k,(ln-1) with the backward path
Figure PCTCN2017097667-appb-000103
yields the desired ML tail-biting path, i.e.,
Figure PCTCN2017097667-appb-000104
Proof: Let
Figure PCTCN2017097667-appb-000105
be the immediate successor of the path x_k,(ln-1) along the backward survivor path
Figure PCTCN2017097667-appb-000106
, and denote its end state by
Figure PCTCN2017097667-appb-000107
. By the backward WAVA, we obtain
Figure PCTCN2017097667-appb-000108
The f-function value of the path x_k,((l+1)n-1) can then be computed as follows:
Figure PCTCN2017097667-appb-000109
where (12) is implied by Lemma 2 and (13) follows from (11). By a similar argument, it can be shown successively that the f-function value of each next successor
Figure PCTCN2017097667-appb-000110
along the backward path remains unchanged. Since the top path x_k,(ln-1) has the smallest f-function value among all paths coexisting with it in the open stack, and since the f-function value is non-decreasing along every path, combining the forward path x_k,(ln-1) with the backward path
Figure PCTCN2017097667-appb-000111
yields the ML tail-biting path, which has the smallest f-function value among all tail-biting paths of length N.
Theorem 3 thus provides an early stopping criterion for the second-stage procedure, so that the priority-first search does not necessarily have to reach level L in order to determine the final ML decision. Step 5 of the second stage can accordingly be modified to speed up the decoding algorithm by adding: "If the h-function value of the top path is determined by a backward survivor path whose initial state (at level L) is identical to the initial state of the top path, output the combined path as the ML decision and stop the algorithm."
From the proof of Theorem 3, the equality of f(x_k,((l+1)n-1)) and f(x_k,(ln-1)) relies mainly on the validity of (12), for which the WAVA iteration number i_1 that maximizes
Figure PCTCN2017097667-appb-000112
must be identified. When the number of iterations I equals 1, i_1 must be 1. In this case, the f-function value of each immediate successor along a backward survivor path obtained from the first stage always remains unchanged. This fact is summarized in the following corollary.
Corollary 1: Fix the maximum number of WAVA iterations at I = 1. Let x_k,(ln-1) denote the current top path in the open stack, and denote by s its end state at level l. Suppose that
Figure PCTCN2017097667-appb-000113
is the backward survivor path ending at state s at level l. Then,
f(x_k,((l+1)n-1)) = f(x_k,(ln-1)),
where
Figure PCTCN2017097667-appb-000114
The significance of Corollary 1 is that, when I = 1, it can greatly speed up the second-stage priority-first search. Specifically, it can be verified that each subtrellis contains all N_s states S_0, ..., S_Ns-1 at every level from m to L-m, but not at levels 0 through (m-1) or levels (L-m+1) through L, where m is the memory order of the (n, 1, m) tail-biting convolutional code. Hence, by the above corollary, when the priority-first search expands a current top path x_k,(ln-1) ending at a level between m and L-m, one of its immediate successors should have the same f-function value as the top path. This immediate successor of x_k,(ln-1) with the same f-function value can then be found quickly via the backward survivor paths obtained from the first stage, and should become the next top path. The computation of successor f-function values thus becomes unnecessary and can be saved.
Nevertheless, a top path ending at a level between 0 and m-1 does not necessarily have an immediate successor along a first-stage backward survivor path that matches it. This is because not all N_s states are available at levels 0 through m-1 of a subtrellis. Hence, for a top path ending at a level less than m, the original priority-first search steps should still be performed. On the other hand, once a path reaches level L-m, it has a unique trajectory along its corresponding subtrellis up to level L; it can therefore be extended directly to level L to form a tail-biting path of length N, and whether the resulting tail-biting path is the final ML decision can be tested by comparing its f-function value with the f-function values in the open stack.
By Corollary 1, the second-stage procedure can be reduced to a depth-first algorithm, and the decoding complexity is greatly lowered. Step 7 of the second-stage algorithm can be modified accordingly with the following additional condition: if all remaining successors end at a level less than m, go to the next step (i.e., Step 8); otherwise, do the following:
● Directly extend each successor (reaching level m) along the first-stage backward survivor paths up to level L-m, and set the f-function value of each extended successor of length (L-m)n equal to the value just computed for its original, non-extended successor of length mn.
● Further extend each successor of length (L-m)n to level L along the unique trajectory on its corresponding subtrellis, and compute the f-function value of the resulting tail-biting path of length N.
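The extension along a stored survivor path needs no metric recomputation, since by Corollary 1 the f-function value is carried over unchanged. A hedged sketch follows, with a hypothetical data layout: a dictionary `survivor_next` mapping `(level, state)` to the next `(state, branch_label)` along a first-stage backward survivor path.

```python
def extend_along_survivor(level, state, path, f_value, survivor_next, l_max):
    """Extend a path from `level` up to `l_max` by following the stored
    backward survivor transitions; the f-function value is carried over
    unchanged rather than recomputed (Corollary 1 with I = 1)."""
    while level < l_max:
        state, label = survivor_next[(level, state)]
        path = path + (label,)
        level += 1
    return level, state, path, f_value
```

In the modified Step 7 above, `l_max` would be L-m for the first bullet; the final stretch to level L then requires an actual f-function evaluation.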
Recall that, as mentioned at the end of Section II, the proposed two-stage decoding algorithm can alternatively be implemented as a forward iterative WAVA followed by a priority-first search algorithm. Although this alternative implementation is likewise optimal in performance, it may lead to a different decoding complexity for a given received vector; on average, however, the decoding complexities of the two implementations should remain the same.
With a view to complexity reduction, one can in fact start the proposed two-stage decoding algorithm at any preselected level of the trellis, since the trellis of a tail-biting convolutional code is inherently "circularly identical". An equivalent but perhaps more direct treatment is to renumber the selected starting level on the trellis as level 0 by cyclically rotating the level sequence 0, 1, ..., L. The simulations in [13] show that a certain degree of complexity reduction can be achieved if the starting level is properly chosen according to the "reliability" of the received vector. For example, the starting level l* can be chosen as [13]
Figure PCTCN2017097667-appb-000115
for some properly chosen λ. It is worth noting that the selection of the starting level and the subsequent realignment should be performed before the first-stage algorithm starts, and their complexity is almost negligible compared with that of the two-stage decoding.
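Renumbering a preselected starting level l* as level 0 amounts to cyclically rotating the received vector by l*·n positions; the selection rule (14) itself, shown as an image above, is not reproduced here. A minimal sketch:

```python
def rotate_received(r, l_star, n):
    """Cyclically rotate the received vector so that trellis level l_star
    becomes level 0; each trellis level spans n channel symbols."""
    shift = (l_star * n) % len(r)
    return r[shift:] + r[:shift]
```

After decoding, the decided codeword would be rotated back by the same amount to restore the original bit ordering.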
IV. Simulation Experiments over the AWGN Channel
In this section, the computational effort and the word error rate of the proposed ML decoding algorithm are investigated by simulations over an additive white Gaussian noise (AWGN) channel. The transmitted binary codeword u = (u_0, u_1, ..., u_N-1) is assumed to be binary phase-shift keying (BPSK) modulated. The received vector r = (r_0, r_1, ..., r_N-1) is therefore given by
Figure PCTCN2017097667-appb-000116
where ε is the signal power per channel bit and
Figure PCTCN2017097667-appb-000117
are independent noise samples of a white Gaussian process with single-sided noise power N_0 per hertz. The signal-to-noise ratio (SNR) is thus given by
Figure PCTCN2017097667-appb-000118
. To account for the code redundancy of different code rates, the SNR per information bit is used in the following discussion, i.e.,
Figure PCTCN2017097667-appb-000119
Note that, for the AWGN channel, the metric associated with the path x_(ln-1) in Definition 1 can be equivalently simplified to
Figure PCTCN2017097667-appb-000120
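The channel model and the simplified metric above are rendered as images here, so the sketch below rests on assumed standard forms rather than the patent's own formulas: BPSK maps bit u_j to √ε·(-1)^u_j, the noise has variance N_0/2, SNR_b = ε/(R·N_0) for code rate R, and the simplified path metric sums |r_j| over the positions where a candidate bit disagrees with the hard decision on r_j. None of these exact forms is taken from the patent's figures.

```python
import math
import random

def bpsk_awgn(u, snr_b_db, rate):
    """Transmit codeword bits u over BPSK/AWGN (assumed standard model).

    N_0 is normalized to 1, so the per-channel-bit signal power is
    epsilon = SNR_b * rate, and the noise variance is N_0/2 = 0.5."""
    snr_b = 10.0 ** (snr_b_db / 10.0)
    amp = math.sqrt(snr_b * rate)
    return [amp * (-1) ** bit + random.gauss(0.0, math.sqrt(0.5)) for bit in u]

def path_metric(bits, r):
    """Assumed simplified metric: sum of |r_j| over positions where the
    candidate bit disagrees with the hard decision on r_j (smaller is
    better; the hard-decision sequence itself has metric 0)."""
    return sum(abs(rj) for bit, rj in zip(bits, r) if (rj < 0) != (bit == 1))
```

Under this metric, the ML codeword is the one minimizing `path_metric` over all tail-biting codewords, matching the minimization formulated in (2).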
Two (2, 1, 6) tail-biting convolutional codes, with generators 103, 166 (octal) and 133, 171 (octal), respectively, are used in the simulations to obtain the results shown in Table I. Their information lengths are 12 and 48, respectively: the former is the [24, 12, 8] extended Golay code [14], while the latter is equivalent to a [96, 48, 10] block code.
Before presenting the simulation results, it should be pointed out that the computational effort of a sequential search algorithm (such as the one performed in the second stage of MLWAVA) comprises not only the evaluation of the decoding metrics but also the work spent on searching and reordering the stack elements. By adopting a priority-queue data structure [15], the latter effort can be made comparable to the former. One can further adopt a hardware-based stack structure [16] and obtain constant complexity for each stack insertion. These remarks justify the customary adoption of the number of metric computations per information bit as a complexity measure for algorithms based on sequential search.
We are now ready to present simulation results on the decoding complexity and the corresponding word error rates (WERs) of MLWAVA and of the four decoding algorithms compared with it. To ease the specification of the parameters used in these algorithms in the various figures, the MLWAVA with maximum iteration number I and parameter λ (which determines the starting position l* via (14)) is denoted MLWAVA_λ(I), while the WAVA with maximum iteration number I [2], the trap-detection-based ML decoding algorithm [12], the creative ML decoding algorithm [9], and the bidirectional-threshold-based sequential decoding algorithm [8] are denoted WAVA(I), TDMLDA, CMLDA, and BEAST, respectively. Note that MLWAVA_λ(I), TDMLDA, CMLDA, and BEAST are ML decoders. For all simulations, at least 100 word errors were ensured, so that no bias exists in the simulation results.
For reference, the WER performances of WAVA(2) and of an ML decoder (such as MLWAVA, TDMLDA, CMLDA, or BEAST) are first contrasted in Figure 1. Figure 1 shows that, at a WER of 10⁻³, WAVA(2) suffers a coding loss of about 0.5 dB relative to the ML performance when the [24, 12, 8] extended Golay code is used, but achieves near-optimal performance when the [96, 48, 10] block code is adopted instead. This result suggests that, for longer tail-biting convolutional codes, WAVA can achieve near-optimal performance using only a small number (here, two) of iterations.
Table I lists the average and maximum numbers of branch metric computations for WAVA(2), MLWAVA_6(1), TDMLDA, CMLDA, and BEAST. Four observations can be made. First, since in the worst case WAVA(2) simply traverses the entire trellis twice, its maximum number of branch metric computations is a constant regardless of the SNR_b value. In fact, the per-information-bit computational effort of the two-iteration WAVA is directly given by 2^m × n × I = 2^6 × 2 × 2 = 256. Second, MLWAVA_6(1) outperforms all the decoding algorithms other than BEAST in average decoding complexity at every SNR_b. Third, for the short tail-biting convolutional code adopted, BEAST has the smallest average decoding complexity among the five decoding algorithms, and it likewise has the smallest average decoding complexity for the long tail-biting convolutional code when SNR_b ≥ 3 dB. However, its average decoding complexity increases significantly as SNR_b decreases, and for the long tail-biting convolutional code it exceeds the average decoding complexities of WAVA(2), MLWAVA_6(1), and CMLDA at SNR_b = 1 dB. Fourth, for every simulated SNR value, the maximum number of branch metric computations per information bit required by MLWAVA_6(1) is the lowest among all the ML decoding algorithms, and the reduction over the other ML decoding algorithms is significant. For example, at SNR_b = 4 dB, for the [96, 48, 10] block code, the maximum decoding complexity of MLWAVA_6(1) is 4, 4, and 8 times smaller than those of TDMLDA, CMLDA, and BEAST, respectively. Since the decoding delay is determined mainly by the worst-case complexity, this improvement is of practical importance.
Table I
Average (AVE) and maximum (MAX) numbers of branch metric computations per information bit for WAVA(2), MLWAVA_6(1), TDMLDA, CMLDA, and BEAST. The stopping criterion of Theorem 1 has been implemented in WAVA(2) and MLWAVA_6(1). To ease identification of the best value, the smallest number in each column is shown in bold.
Figure PCTCN2017097667-appb-000121
Furthermore, the variances of the decoding complexities of the five decoding algorithms in Table I were examined, and the results are summarized in Figures 2 and 3. Both figures show that MLWAVA has a markedly smaller variance than the other four decoders listed in Table I. In particular, for the [96, 48, 10] block code, the variance of MLWAVA's decoding complexity is at least two orders of magnitude below those of the other four decoding algorithms. Considering only BEAST and MLWAVA, Table I together with the results in Figures 2 and 3 indicates that, although BEAST is superior in average decoding complexity for many SNR values, its variance is far higher than that of MLWAVA. For example, at SNR_b = 4 dB, for the [96, 48, 10] block code, the variance of MLWAVA's decoding complexity is 7,234 times lower than that of BEAST. This again indicates that MLWAVA is the better choice of the two when the decoding delay is of particular concern in practical applications.
Among the observations drawn earlier from Table I, it was noted that, for the long tail-biting convolutional code, BEAST beats the other four decoding algorithms in average decoding complexity except in the low-SNR region. Moreover, the comparison between BEAST and MLWAVA in decoding-complexity variance favors MLWAVA, particularly for the tail-biting convolutional code of length 96. Along this line, the effect of codeword length on the decoding complexities of BEAST and MLWAVA is examined next. By doubling the length of the [96, 48, 10] block code in Table I, a [192, 96, 10] block code suitable for this experiment is obtained. Table II then shows that, compared with the average per-information-bit decoding complexity for the [96, 48, 10] block code in Table I, the average per-information-bit decoding complexity of MLWAVA for the new double-length tail-biting convolutional code remains unchanged or decreases slightly; in contrast, both the average and maximum decoding complexities of BEAST increase greatly when the information length is doubled. This indicates that the decoding complexity of MLWAVA is highly stable with respect to varying codeword lengths, whereas the decoding complexity of BEAST may increase significantly as the codeword length grows.
Another factor that may affect the decoding complexity is the code constraint length. The effect of this factor on MLWAVA and BEAST has also been examined and is likewise summarized in Table II. The code adopted in this experiment is a (2, 1, 12) tail-biting convolutional code [17] with generators 5135, 14477 (octal). Its information length is 48, and it is equivalent to a [96, 48, 16] block code.
Table II
Average (AVE) and maximum (MAX) numbers of branch metric computations per information bit for MLWAVA_λ(I) and BEAST. The stopping criterion of Theorem 1 has been implemented in MLWAVA_λ(I). To ease identification of the best value, the smaller number in each column is shown in bold.
Figure PCTCN2017097667-appb-000122
As expected, Table II shows that the decoding complexities of both MLWAVA and BEAST increase significantly as the constraint length increases; however, the ratio of their average decoding complexities at SNR_b = 1 dB also grows, from 298/131 = 2.275 to 19910/8299 = 2.399. This means that, at low SNR, the decoding complexity of BEAST grows moderately faster than that of MLWAVA as the constraint length increases. It should be added here that BEAST carries a hidden decoding cost, namely the complexity of checking for matching nodes between the forward and backward subtrellises. Analogously to the remarks on the computational effort of sequential-search-based algorithms, where the effort of searching and reordering stack elements can be significantly mitigated by adopting a priority-queue data structure [15] or even a hardware-based stack structure [16], the cost of BEAST's node-checking process can be alleviated by storing the states of all extended nodes in an array of suitable structure, in which a matching node can be located in a single memory-access step.
For the [96, 48, 16] block code, however, this cost would be of the order of
Figure PCTCN2017097667-appb-000123
, which may be infeasible for a practical implementation. Without a single-memory-access implementation of node matching, our experiments show that, for the [96, 48, 16] block code at SNR_b = 3.5 dB, BEAST takes 859 times longer than MLWAVA_12(1) to complete one branch metric computation.
A less obvious result that may be observed in Table II is that MLWAVA_λ(2) sometimes runs faster than MLWAVA_λ(1), especially at low SNR. Specifically, at SNR_b = 1 dB, MLWAVA_6(2) has a smaller maximum complexity than MLWAVA_6(1) for the [192, 96, 10] block code. This indicates that the better predictions obtained from the information retained in the first stage, together with the proposed early stopping criterion, indeed help reduce the complexity of the priority-first search in the second stage.
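The second-stage behavior discussed above rests on a priority-first search: the path with the smallest f-function value is always expanded next, and the search stops as soon as the goal test succeeds. A minimal, generic sketch follows; the names are illustrative, and the patent's f-function, Open Stack contents, and early stopping criterion are abstracted into callbacks:

```python
import heapq

def priority_first_search(start, successors, f, is_goal):
    """Expand paths in ascending f-value order.

    successors(path) yields extension paths; f(path) is the priority;
    is_goal(path) tests for a complete path (e.g. a tail-biting path).
    All names are illustrative, not the patent's notation.
    """
    open_stack = [(f(start), start)]   # heap ordered by f-value ("Open Stack")
    closed = set()                     # (level, state) pairs already expanded
    while open_stack:
        _, path = heapq.heappop(open_stack)
        if is_goal(path):
            return path                # first goal popped has minimal f-value
        key = (len(path), path[-1])    # level and ending state ("Closed Table")
        if key in closed:
            continue                   # a better path through this node was expanded
        closed.add(key)
        for nxt in successors(path):
            heapq.heappush(open_stack, (f(nxt), nxt))
    return None

# Toy usage: paths are tuples of bits, cost is the number of 1-bits,
# and the goal is any path of length 3.
def successors(path):
    if len(path) >= 3:
        return []
    return [path + (b,) for b in (0, 1)]

best = priority_first_search((0,), successors, sum, lambda p: len(p) == 3)
print(best)  # (0, 0, 0)
```

In the algorithm of the patent, f would combine the forward path metric with the backward survivor metrics retained from the first-stage WAVA, which is what makes the first-stage predictions pay off in the second stage.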
For completeness, the word error rate performance of the ML decoders for the [192, 96, 10] and [96, 48, 16] block codes is depicted in Fig. 4, and the average decoding complexities of the decoding algorithms from Table I are plotted in Figs. 5 and 6. The decoding complexity curves corresponding to Table II are shown in Figs. 7 and 8.
We point out that both MLWAVA and CMLDA need to record information from their respective first-stage algorithms; nevertheless, these storage requirements of MLWAVA and CMLDA are at most of order ILN_s and LN_s, respectively.
The foregoing is a further detailed description of the present invention in connection with specific preferred embodiments, and the specific implementation of the present invention shall not be considered limited to these descriptions. For those of ordinary skill in the art to which the present invention pertains, several simple deductions or substitutions may be made without departing from the concept of the present invention, all of which shall be regarded as falling within the protection scope of the present invention.

Claims (9)

  1. A maximum likelihood decoding algorithm for tail-biting convolutional codes, characterized by comprising the following steps: (A) performing the Viterbi algorithm (VA) on the trellis in the backward wrap-around direction, and acquiring the information retained from the previous backward VA round; (B) applying a priority-first search algorithm in the forward direction on all sub-trellises.
  2. The maximum likelihood decoding algorithm for tail-biting convolutional codes according to claim 1, characterized in that: combining the forward path with the backward path yields the ML tail-biting path, which attains the smallest f-function value among all tail-biting paths of length N; the smallest f-function value is computed as follows: let the path
    Figure PCTCN2017097667-appb-100001
    be the immediate successor of the path x_k,(ln-1) along the backward survivor path
    Figure PCTCN2017097667-appb-100002
    and denote its ending state by
    Figure PCTCN2017097667-appb-100003
    ; then the f-function value of the path x_k,((l+1)n-1) is
    Figure PCTCN2017097667-appb-100004
    where x_k,(ln-1) = (x_k,0, x_k,1, ..., x_k,ln-1) is the current top path in the Open Stack, s denotes its ending state at level l, and
    Figure PCTCN2017097667-appb-100005
    records the cumulative path metric, obtained during the i1-th WAVA iteration, of the backward survivor path
    Figure PCTCN2017097667-appb-100006
    with starting state s_k at level L and ending state s at level l.
  3. The maximum likelihood decoding algorithm for tail-biting convolutional codes according to claim 1, characterized in that: in step (A), the backward wrap-around Viterbi algorithm (WAVA) is applied to the tail-biting convolutional code trellis T, and both the tail-biting paths and all paths in the auxiliary super code
    Figure PCTCN2017097667-appb-100007
    are checked.
  4. The maximum likelihood decoding algorithm for tail-biting convolutional codes according to claim 2, characterized in that: the auxiliary super code
    Figure PCTCN2017097667-appb-100008
    consists of all paths on the trellis
    Figure PCTCN2017097667-appb-100009
    , where
    Figure PCTCN2017097667-appb-100010
    denotes the (n, 1, m) tail-biting convolutional code with L information bits, the underlying convolutional mapping takes 1 information bit to n code bits, and m is the memory order.
  5. The maximum likelihood decoding algorithm for tail-biting convolutional codes according to claim 2, characterized in that: the metric of a path in the trellis T is set as follows: let l be a fixed integer satisfying 0 ≤ l ≤ L; for a path with binary label
    Figure PCTCN2017097667-appb-100011
    ending at level l of the trellis T, its associated path metric is defined as
    Figure PCTCN2017097667-appb-100012
    where
    Figure PCTCN2017097667-appb-100013
    is the corresponding bit metric; the so-called cumulative metric of the path is the sum of a pre-specified initial metric and the path metric defined above.
  6. The maximum likelihood decoding algorithm for tail-biting convolutional codes according to claim 1, characterized in that: in step (A), the maximum likelihood (ML) decision is obtained as follows: at the end of the first iteration, if the best backward survivor path
    Figure PCTCN2017097667-appb-100014
    is a tail-biting path, then it is the ML decision; for the iterations after the first, if
    Figure PCTCN2017097667-appb-100015
    holds for every
    Figure PCTCN2017097667-appb-100016
    , where
    Figure PCTCN2017097667-appb-100017
    is the set of ending states of the tail-biting survivor paths encountered by the backward WAVA up to the i-th iteration, then the best tail-biting survivor path
    Figure PCTCN2017097667-appb-100018
    is the ML decision.
  7. The maximum likelihood decoding algorithm for tail-biting convolutional codes according to claim 1, characterized in that: in step (B), two data structures are used for the priority-first search algorithm, namely an Open Stack and a Closed Table; the Open Stack stores the paths visited so far by the priority-first search algorithm, while the Closed Table tracks the paths that have previously been at the top of the Open Stack.
  8. The maximum likelihood decoding algorithm for tail-biting convolutional codes according to claim 6, characterized in that: the multiple sub-trellis backward survivor paths obtained in step (A) are sorted in ascending order of their cumulative metrics, and the algorithm stops once the maximum likelihood (ML) decision is obtained.
  9. The maximum likelihood decoding algorithm for tail-biting convolutional codes according to claim 1, characterized in that: an effective early stopping criterion is employed to reduce the decoding complexity.
PCT/CN2017/097667 2017-08-11 2017-08-16 Maximum likelihood decoding algorithm for tail-biting convolutional code WO2018171110A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710687027.6 2017-08-11
CN201710687027.6A CN107872232B (en) 2017-08-11 2017-08-11 A novel maximum likelihood decoding algorithm for tail-biting convolutional codes

Publications (1)

Publication Number Publication Date
WO2018171110A1 true WO2018171110A1 (en) 2018-09-27

Family

ID=61761279

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/097667 WO2018171110A1 (en) 2017-08-11 2017-08-16 Maximum likelihood decoding algorithm for tail-biting convolutional code

Country Status (2)

Country Link
CN (1) CN107872232B (en)
WO (1) WO2018171110A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1855732A (en) * 2005-04-25 2006-11-01 中兴通讯股份有限公司 Encoding method and encoder for tailing convolution codes
CN103444086A (en) * 2011-03-29 2013-12-11 英特尔公司 System, method and apparatus for tail biting convolutional code decoding
CN103634015A (en) * 2012-08-28 2014-03-12 上海无线通信研究中心 Maximum likelihood decoding algorithm of tail biting code
US20150358100A1 (en) * 2013-01-18 2015-12-10 Lg Electronics Inc. Interference-removed reception method and terminal
CN106301391A (en) * 2016-08-08 2017-01-04 西安电子科技大学 An improved soft-output tail-biting convolutional code decoding method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104796160B (en) * 2014-01-22 2019-04-12 华为技术有限公司 Decoding method and device


Also Published As

Publication number Publication date
CN107872232B (en) 2019-10-22
CN107872232A (en) 2018-04-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17902063

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17902063

Country of ref document: EP

Kind code of ref document: A1