WO2018184334A1 - Maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes - Google Patents

Maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes Download PDF

Info

Publication number
WO2018184334A1
Authority
WO
WIPO (PCT)
Prior art keywords
path
search algorithm
metric
backward
priority
Prior art date
Application number
PCT/CN2017/097677
Other languages
English (en)
French (fr)
Inventor
韩永祥
吴庭伊
陈伯宁
瓦悉尼星巴
Original Assignee
东莞理工学院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东莞理工学院 filed Critical 东莞理工学院
Publication of WO2018184334A1 publication Critical patent/WO2018184334A1/zh

Links

Images

Classifications

    • H - ELECTRICITY
    • H03 - ELECTRONIC CIRCUITRY
    • H03M - CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 - Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37 - Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39 - Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H - ELECTRICITY
    • H03 - ELECTRONIC CIRCUITRY
    • H03M - CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 - Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37 - Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39 - Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/41 - Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes using the Viterbi algorithm or Viterbi processors

Definitions

  • the present invention relates to the field of data processing, and in particular to a maximum-likelihood bidirectional priority-first search algorithm (BiPFSA) for tail-biting convolutional codes.
  • the decoding of a tail-biting convolutional code is performed on a trellis, where a codeword corresponds to a path through the trellis that starts and ends in the same state (but not necessarily the all-zero state).
  • a path on the tail-biting convolutional code trellis having the same initial state and final state is referred to as a tail-biting path. Because there is a one-to-one correspondence between codewords and tail-biting paths, these two terms are often used interchangeably.
  • the trellis can be decomposed into M sub-trellises with identical initial and final states. Following earlier naming conventions, these sub-trellises are referred to as tail-biting sub-trellises or, if the abbreviation causes no ambiguity, simply as sub-trellises.
  • a tail-biting convolutional code trellis will be denoted by T, and its i-th sub-trellis by T_i, where 0 ≤ i ≤ M-1.
  • the sub-trellises T_0, T_1, T_2 and T_3 of the tail-biting convolutional code trellis are depicted in FIG. 2. It is clear from the two figures that the decoding complexity of the tail-biting convolutional code is multiplied compared with a zero-tail convolutional code of equal size, because all tail-biting paths in every sub-trellis must be checked.
  • WAVA: wrap-around Viterbi algorithm
  • VA: Viterbi algorithm
  • the WAVA can equivalently be viewed as an application of the VA on an extended trellis, which is formed by a pre-specified number of trellises connected to each other in series. Simulations show that wrapping around at most four trellises is sufficient to obtain near-optimal performance [2].
  • BEAST: bidirectional efficient algorithm for searching trees
  • CMLDA: creative maximum likelihood decoding algorithm
  • the present invention provides a maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes, which solves the problem of high maximum decoding complexity in the prior art.
  • a maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes is designed and manufactured, characterized in that it comprises the following steps: (A) a backward bidirectional priority-first search algorithm, guided by the path metrics of backward paths, is applied in a backward manner to the trellis T of the auxiliary super code $\bar{C}_{TB}$; (B) a forward bidirectional priority-first search algorithm, guided by the exact path metrics of forward paths, is applied to all sub-trellises of the tail-biting convolutional code $C_{TB}$. Here $C_{TB}$ denotes an (n, k, m) tail-biting convolutional code carrying kL information bits, in which k information bits are mapped to n code bits by feeding them into a linear convolutional circuit of memory order m, and $\bar{C}_{TB}$ consists of all binary words corresponding to paths on the trellis of $C_{TB}$.
  • a path vector is represented in the following manner: for a path with binary labels $x_{(\ell n-1)} = (x_0, x_1, \ldots, x_{\ell n-1})$ that ends at level $\ell$ of a graphical structure, the path metric associated with it is $m(x_{(\ell n-1)}) = \sum_{j=0}^{\ell n-1} m(x_j)$, where $\ell$ is a fixed integer satisfying $0 \le \ell \le L$ and $m(x_j) = (y_j \oplus x_j)\,|\phi_j|$ is the bit metric of the (j+1)-th binary label, given the received LLR vector $\phi = (\phi_0, \ldots, \phi_{N-1})$ and its hard-decision vector $y = (y_0, \ldots, y_{N-1})$.
  • two data structures are used to carry out the bidirectional priority-first search algorithm;
  • the two data structures are an open stack and a closed table; the open stack stores the paths that have been visited so far by the bidirectional priority-first search algorithm, and the closed table keeps track of those paths that were previously at the top of the open stack.
  • in the open stack, the top element of the bidirectional priority-first search algorithm is the element with the smallest guiding metric.
  • the exact path metric of a forward path $x_{(\ell n-1)}$ is defined as $m(x_{(\ell n-1)}) + h_{\ell,j}$, where j is the index of the end state of the path.
  • the path metrics are initialized, backward paths are inserted into the open stack and reordered so that the top backward path has the smallest path metric, thereby obtaining the best successor backward paths and their path metrics.
  • the exact path metrics of all successor forward paths of the top forward path in the open stack are computed, and the forward paths in the open stack are rearranged in increasing order of exact path metric; all successor forward paths that reach level L are deleted.
  • the information of the top forward path is recorded in the closed table; only the initial state, the end state and the end level are needed to identify a forward path.
  • the beneficial effects of the present invention are a markedly smaller standard deviation in decoding complexity and an average decoding complexity that is significantly better than that of all other decoding algorithms.
  • Figure 1 is the trellis of a prior-art (2, 1, 2) tail-biting convolutional code with information length 5.
  • S0, S1, S2, and S3 represent the four possible states at each level;
  • Figure 2 shows the sub-trellises of a prior-art (2, 1, 2) tail-biting convolutional code with information length 5. S0, S1, S2, and S3 represent the four possible states at each level;
  • WER: word error rate
  • Figure 5 is the standard deviation of the number of branch-metric computations per information bit for the decoding algorithms of the [24, 12, 8] extended Golay code in Figure 4;
  • Figure 7 is the standard deviation of the number of branch-metric computations per information bit for the decoding algorithms of the [96, 48, 16] tail-biting block code in Figure 6.
  • a maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes comprises the following steps: (A) a backward bidirectional priority-first search algorithm, guided by the path metrics of backward paths, is applied in a backward manner to the trellis T of the auxiliary super code $\bar{C}_{TB}$; (B) a forward bidirectional priority-first search algorithm, guided by the exact path metrics of forward paths, is applied to all sub-trellises of the tail-biting convolutional code $C_{TB}$; where $C_{TB}$ denotes an (n, k, m) tail-biting convolutional code carrying kL information bits, in which k information bits are mapped to n code bits by feeding them into a linear convolutional circuit of memory order m, and $\bar{C}_{TB}$ consists of all binary words corresponding to paths on the trellis of $C_{TB}$.
  • Two data structures are used for the bidirectional priority-first search algorithm.
  • the two data structures are an open stack and a closed table.
  • the open stack stores the paths that have been visited so far by the bidirectional priority-first search algorithm.
  • the closed table keeps track of those paths that were previously at the top of the open stack.
  • in the open stack, the top element of the bidirectional priority-first search algorithm is the element with the smallest guiding metric.
  • the path metrics are initialized, backward paths are inserted into the open stack and reordered so that the top backward path has the smallest path metric, thereby obtaining the best successor backward paths and their path metrics.
  • according to the structure of the tail-biting trellises, the exact path metrics of all successor forward paths of the top forward path in the open stack are computed, the forward paths in the open stack are rearranged in increasing order of exact path metric, and all successor forward paths that reach level L are deleted.
  • the information of the top forward path is recorded in the closed table; only the initial state, the end state and the end level are needed to identify a forward path.
  • the present invention provides a novel maximum-likelihood (ML) bidirectional priority-first search algorithm (BiPFSA) for tail-biting convolutional codes.
  • ML: maximum likelihood
  • BiPFSA first performs the PFSA backward on the trellis, and then, based on the information retained from the previous stage, applies the PFSA in a forward manner to all sub-trellises.
  • this work carefully designs a new ML decoding metric and a new evaluation function to guide the priority-first searches of the first and second phases.
  • the simulation results obtained for the (2,1,6) and (2,1,12) tail-biting convolutional codes with information lengths 12 and 48, respectively, indicate that the average decoding complexity of BiPFSA is far below that of BEAST [8], PFSA [10] and BS [13] at all simulated SNR levels.
  • although BiPFSA guarantees optimal performance while WAVA does not, BiPFSA has lower average decoding complexity than the near-optimal WAVA [2].
  • the bidirectional priority-first search algorithm:
  • only the tail-biting paths correspond to codewords of $C_{TB}$, but the auxiliary super code $\bar{C}_{TB}$ is introduced, which consists of all binary words corresponding to paths on the trellis of $C_{TB}$.
  • the paths considered in $\bar{C}_{TB}$ can therefore start and end in different states.
  • (0,0,0,0,1,1,1,1,1,0) labels a path whose initial state is S_0 and whose final state is S_2, and which is therefore contained in $\bar{C}_{TB}$ but is not a codeword of $C_{TB}$.
  • Definition 1: let $\ell$ be a fixed integer satisfying $0 \le \ell \le L$.
  • for a path that ends at level $\ell$ of a graphical structure (such as the trellis T or a sub-trellis $T_i$), the path metric associated with it is defined as $m(x_{(\ell n-1)}) = \sum_{j=0}^{\ell n-1} m(x_j)$.
  • the goal of ML decoding is to output the code path $x_{(Ln-1)}$ in $C_{TB}$ such that its path metric $m(x_{(Ln-1)})$ is less than or equal to the path metrics of all other code paths in $C_{TB}$.
  • CMLDA and PFSA perform the VA in their first phase, with the goal of extracting specially designed heuristic information for use in the second phase.
  • running the VA in the first phase inevitably induces a complexity floor on the overall decoding complexity; this floor, equal to one VA execution, not only grows exponentially with the code memory order but also cannot be reduced by increasing the SNR.
  • the only way to break this floor on the overall decoding complexity is to replace the first-phase VA with a search algorithm that is less complex than the VA. This has prompted the BiPFSA proposed herein.
  • Phase 1: a backward PFSA, guided by the path metrics of backward paths, is applied in a backward manner to the trellis T of $\bar{C}_{TB}$. Analogously to Definition 1, for a backward path labelled backward as $\tilde{x}_{(\ell n)} = (x_{N-1}, x_{N-2}, \ldots, x_{\ell n})$, its path metric is defined as $m(\tilde{x}_{(\ell n)}) = \sum_{j=\ell n}^{N-1} m(x_j)$.
  • a heuristic function value for each state at each level, denoted $h_{\ell,i}$ (where $0 \le \ell \le L$ and 0 ≤ i ≤ M-1), is then computed as specified in steps 5 and 9 of the backward phase of BiPFSA.
  • Phase 2: a forward PFSA, guided by the exact path metrics of forward paths, is applied to all sub-trellises of $C_{TB}$.
  • the exact path metric of a forward path $x_{(\ell n-1)}$ is defined as $m(x_{(\ell n-1)}) + h_{\ell,j}$, where j is the index of the end state of the path (equivalently, S_j is the end state of the path).
  • Two data structures will be used in the implementation of the forward and backward PFSA. They are named the open stack and the closed table, respectively. The former stores the paths that have been visited by the PFSA, and its top element is the element with the smallest guiding metric. The latter keeps track of those paths that were once at the top of the open stack. The reason they are so named is that the paths in the open stack may be further extended and thus remain open, while the paths in the closed table can no longer be extended and are therefore closed to future expansion.
  • Step 2: Load onto the open stack all zero-length backward paths that start from a state at level L of the trellis T. There are M such backward paths. The path metrics of these zero-length backward paths are initialized to zero.
  • Step 3: If the open stack is empty, output x_UB as the final ML decision and stop BiPFSA without performing the second phase.
  • Step 4: Obtain the current top backward path in the open stack. Record the top backward path and its path metric as x_TOP and c_TOP, respectively. Remove the top backward path from the open stack. If x_TOP reaches level 0 (i.e., it ends at a state at level 0), go to step 9.
  • Step 5: Let the state S_i at level $\ell$ be the end state of x_TOP. If $h_{\ell,i}$ is less than infinity, go to step 3; otherwise, assign c_TOP to $h_{\ell,i}$.
  • Step 6: Obtain the successor backward paths of x_TOP on the trellis and compute their path metrics according to (1). Delete those successor backward paths whose path metrics are not less than c_UB.
  • Step 7: If a successor backward path reaches level 0, its path metric is less than c_UB, and it is moreover a tail-biting path, replace x_UB and c_UB with that successor backward path and its path metric, respectively. Repeat this step until all successor backward paths have been verified.
  • Step 8: Insert the remaining successor backward paths from step 6 into the open stack and reorder the backward paths such that the top backward path has the smallest path metric. Go to step 3.
  • Step 9: If x_TOP is a tail-biting path, output it as the ML decision and stop BiPFSA without performing the second phase; otherwise, for all $0 \le \ell \le L$ and 0 ≤ i ≤ M-1, assign $h_{\ell,i} = c_{TOP}$ whenever $h_{\ell,i} = \infty$. Go to the second phase.
  • Step 1: Clear and reset the open stack from the first phase. Load onto the open stack all zero-length forward paths at level 0 of all sub-trellises. There are M such forward paths. The exact path metrics of these zero-length forward paths are initialized to $h_{0,i}$, respectively. Keep x_UB and c_UB from the first phase.
  • Step 2: If the open stack is empty, output x_UB as the final ML decision and stop BiPFSA.¹
  • ¹ This step will no longer output an x_UB equal to the null value.
  • Step 3: If the current top forward path x_TOP in the open stack has already been recorded in the closed table, discard it from the open stack and go to step 2; otherwise, record the information of the top forward path in the closed table.²
  • Step 4: Based on the structure of the M tail-biting sub-trellises, compute the exact path metrics of all successor forward paths of the top forward path in the open stack. Remove the top forward path from the open stack. Delete those successor forward paths whose exact path metrics are not less than c_UB.
  • Step 5: If a successor forward path reaches level L and its exact path metric is less than c_UB, replace x_UB and c_UB with that successor forward path and its exact path metric, respectively. Repeat this step until all successor forward paths have been verified. Delete all successor forward paths that reach level L.
  • Step 6: Insert the remaining successor forward paths from step 5 into the open stack and rearrange the forward paths in the open stack in increasing order of exact path metric. Go to step 2.
  • top paths ending in a state that has already been visited at some previous time are eliminated. Since these paths must have worse path metrics (respectively, worse exact path metrics in the second phase) than the previously visited top path ending in the same state, their elimination does not affect performance optimality but helps speed up the priority-first search.
  • two different criteria are applied in the first and second phases. In the first phase, notice that $h_{\ell,i}$ is less than infinity if (and only if) the state S_i at level $\ell$ has already been visited; therefore, $h_{\ell,i} < \infty$ can be used to verify whether the state was visited in the first phase.
  • a closed table is introduced to record all paths that have been at the top and have therefore been visited.
  • a common feature of priority-first search, or of any sequential search algorithm, is that the efficiency of the sequential search can be improved when the search starts from more reliable components. Therefore, the average decoding complexity of BiPFSA can be further reduced by cyclically shifting the received vector r according to the reliability of its components. Noting that the trellis and its corresponding sub-trellises remain the same under a cyclic shift by any integer multiple of n bits, BiPFSA can be used in the same way to determine the cyclically shifted ML codeword relative to the cyclically shifted received vector. The ML codeword can then easily be recovered by reversing the initial cyclic shift operation. Based on a method similar to [14], it is recommended to rotate the received vector to the right, before feeding it to BiPFSA, by a number of positions determined by (2), where
  • λ is a predetermined window size for the reliability measurement. Compared with the decoding complexity of BiPFSA, the additional computational complexity attributed to solving (2) and to the inverse left shift at the end of BiPFSA is almost negligible.
  • the signal-to-noise ratio (SNR) is given by $\varepsilon / N_0$.
  • the SNR per information bit is used in the following discussion, i.e., $\gamma_b = (N/K)\,(\varepsilon/N_0)$.
  • Two tail-biting convolutional codes are used in the simulations. They are the (2, 1, 6) and (2, 1, 12) tail-biting convolutional codes with generators (103, 166) (octal) and (5133, 14477) (octal), respectively.
  • the computational complexity of a sequential search algorithm includes not only the evaluation of the decoding metrics but also the work consumed in searching and reordering the stack elements.
  • a priority queue data structure commonly referred to as a HEAP [17]
  • the latter effort can be made comparable to that of the former.
  • One can further adopt a hardware-based stack structure [18] and achieve constant complexity for each stack insertion operation.
  • the BiPFSA with window size λ (which determines the initial cyclic shift through (2)) and the WAVA wrapping around at most I trellises [2] are parameterized as BiPFSA(λ) and WAVA(I), respectively.
  • the same cyclic-shift preprocessing can also be applied to the PFSA in [10], which can similarly be denoted PFSA(λ).
  • BEAST, BiPFSA(λ), BS, and PFSA(λ) are all ML decoders, so all of them should achieve the optimal WER. For all experiments, it is ensured that at least 100 word errors occur, so that there is no bias in the simulation results.
  • the decoding latency is considered to be of importance comparable to the average decoding complexity, especially when sequential decoding algorithms are considered.

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Error Detection And Correction (AREA)

Abstract

A maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes, comprising the following steps: (A) a backward bidirectional priority-first search algorithm, guided by the path metrics of backward paths, is applied in a backward manner to the trellis T of the auxiliary super code (I); (B) a forward bidirectional priority-first search algorithm, guided by the exact path metrics of forward paths, is applied to all sub-trellises of the tail-biting convolutional code (II). The beneficial effects of this algorithm are a markedly smaller standard deviation in decoding complexity and an average decoding complexity significantly better than that of all other decoding algorithms.

Description

Maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes
[Technical Field]
The present invention relates to the field of data processing, and in particular to a maximum-likelihood bidirectional priority-first search algorithm (BiPFSA) for tail-biting convolutional codes.
[Background Art]
In digital communications, convolutional codes have been widely used for their effective error-protection capability. To clear the contents of the shift register so that the next information sequence can be encoded directly without re-initialization, an appropriate number of zero bits (enough to fill, and thereby reset, the shift register) is usually appended to the end of the information bit stream to be encoded. In practice this facilitates the design of decoding algorithms, because the initial state of the encoding shift register is then known for every information sequence and the decoder can always start the decoding process from the same root node of the code tree or code trellis. As a side benefit, these zero tail bits also strengthen the error-protection capability of the convolutional code. This performance gain, however, may be largely offset if the zero tail bits cause a significant loss in code rate, in particular when the information sequence is short.
Several methods have been proposed in the literature to mitigate the above code-rate loss of (short-length) zero-tail convolutional codes, such as direct truncation and puncturing [1]. Alternatively, the so-called tail-biting convolutional codes [2], [3], [4] eliminate the rate loss completely, at the cost of introducing uncertainty about the initial state of the encoding shift register at the decoder. Unlike a zero-tail convolutional encoder, which always starts and ends in the all-zero state, a tail-biting convolutional encoder only guarantees that the initial and final states are identical, the particular state being determined by the preceding information bit stream. Because any state other than the all-zero state can also be the initial state of a tail-biting convolutional encoder, the size of the possible transmission space that the decoding search has to examine is multiplied by the number of possible states of the encoding shift register, and the decoding complexity therefore increases markedly. Despite this decoding challenge, tail-biting convolutional codes can usually achieve better error-protection performance than zero-tail convolutional codes, as indicated in [1].
Similar to the decoding of zero-tail convolutional codes, the decoding of a tail-biting convolutional code is performed on a trellis, where a codeword corresponds to a path through the trellis that starts and ends in the same state (but not necessarily the all-zero state). For convenience, a path on the tail-biting convolutional code trellis whose initial state and final state are identical is referred to as a tail-biting path. Because there is a one-to-one correspondence between codewords and tail-biting paths, the two terms are often used interchangeably.
Let the number of all possible initial states (equivalently, final states) of the tail-biting convolutional code trellis be denoted by M. The trellis can then be decomposed into M sub-trellises, each with identical initial and final states. Following earlier naming conventions, these sub-trellises are referred to as tail-biting sub-trellises or, if the abbreviation causes no ambiguity, simply as sub-trellises. For convenience, a tail-biting convolutional code trellis is denoted by T, and its i-th sub-trellis by T_i, where 0 ≤ i ≤ M-1. As an example, Fig. 1 illustrates a tail-biting convolutional code trellis T with M = 4, in which the paths that start and end in different states are also included. The four sub-trellises T_0, T_1, T_2 and T_3 of this tail-biting convolutional code trellis are depicted in Fig. 2. It is clear from the two figures that the decoding complexity of a tail-biting convolutional code is multiplied several times compared with a zero-tail convolutional code of equal size, because all tail-biting paths in every sub-trellis must be examined.
To reduce the decoding complexity, several suboptimal decoding algorithms for tail-biting convolutional codes have been proposed in [2], [3], [5], [6], [7], among which the wrap-around Viterbi algorithm (WAVA) has the lowest decoding complexity [2]. As its name suggests, WAVA iteratively applies the Viterbi algorithm (VA) to the trellis of the tail-biting convolutional code in a wrap-around fashion. During its execution, WAVA traverses not only tail-biting paths but also paths that start and end in different states; it may therefore output a path to which no codeword corresponds. Hence, at the end of each iteration, it checks whether the final output is a tail-biting path. If the output path is not a tail-biting path and the number of iterations is smaller than the maximum number allowed, another iteration is started, with the metrics of the survivor paths from the previous iteration used as the initial metrics. As a result, WAVA can be equivalently viewed as an application of the VA on an extended trellis, formed by a pre-specified number of trellises connected to one another in series. Simulations show that wrapping around at most four trellises is sufficient to obtain near-optimal performance [2].
When exact optimal decoding performance is required, WAVA is no longer a suitable choice, because it cannot guarantee finding the maximum-likelihood (ML) tail-biting path. The ML tail-biting path can be obtained in a straightforward way by performing the VA on all tail-biting sub-trellises; this brute-force approach, however, is impractical because of its high computational complexity.
In 2005, Bocharova et al. proposed an ML decoding algorithm for tail-biting convolutional codes, known as the bidirectional efficient algorithm for searching trees (BEAST) [8]. Conceptually, BEAST repeatedly and simultaneously explores, in both the forward and the backward direction, those nodes whose decoding metrics are below a certain threshold on each sub-trellis. It keeps increasing the threshold at every step until an ML path is found. The simulation results provided in [8] show that BEAST is very efficient at high signal-to-noise ratio (SNR).
One year later, Shankar et al. proposed [9] another ML decoding algorithm for tail-biting convolutional codes, called the creative maximum-likelihood decoding algorithm (CMLDA). CMLDA has two phases. The first phase applies the VA to the trellis of the tail-biting convolutional code to extract certain trellis information. Based on the trellis information obtained, algorithm A* is then executed on all sub-trellises in the second phase to produce the ML decision. As shown in [9], without sacrificing performance optimality, CMLDA reduces the decoding complexity from the M executions of the VA on the M sub-trellises required by the brute-force approach to about 1.3 executions of the VA. To further improve CMLDA in terms of decoding complexity, the authors of [10] and [11] redefined the heuristic function given in [9] and replaced its second-phase algorithm A* by a priority-first search algorithm (PFSA). PFSA improves on CMLDA in both average and maximum decoding complexity, in particular in the second phase [10], [11]. However, the overall decoding complexities of both CMLDA and PFSA are dominated by their first-phase decoding complexity, and because both still retain the use of the VA in the first phase, one VA execution on the tail-biting convolutional code trellis becomes a lower bound on their complexity.
In 2013, Wang et al. proposed another ML decoding algorithm that does not use a stack [12]. It compares the survivor paths between two consecutive WAVA iterations and, whenever the key survivor path described in [12] is "trapped" on some non-ML path, it launches the VA on a specific tail-biting sub-trellis. The decoding-complexity results of [12], however, are much higher than those of [9]. Later, in 2015, Qian et al. proposed a novel two-phase depth-first ML decoding algorithm called bounded search (BS) [13]. Conceptually, this algorithm runs a depth-first search on the trellis in its first phase in the hope of excluding most of the sub-trellises. A bidirectional search algorithm is then performed on the remaining sub-trellises in the second phase. Similar to other two-phase decoding algorithms, BS offers a low average decoding complexity at high SNR.
[Summary of the Invention]
To solve the problems in the prior art, the present invention provides a maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes, which solves the problem of high maximum decoding complexity in the prior art.
The present invention is realized through the following technical solution: a maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes is designed and manufactured, characterized by comprising the following steps: (A) a backward bidirectional priority-first search algorithm, guided by the path metrics of backward paths, is applied in a backward manner to the trellis T of the auxiliary super code $\bar{C}_{TB}$; (B) a forward bidirectional priority-first search algorithm, guided by the exact path metrics of forward paths, is applied to all sub-trellises of the tail-biting convolutional code $C_{TB}$. Here $C_{TB}$ denotes an (n, k, m) tail-biting convolutional code carrying kL information bits, in which k information bits are mapped to n code bits by feeding them into a linear convolutional circuit of memory order m, and $\bar{C}_{TB}$ consists of all binary words corresponding to paths on the trellis of $C_{TB}$.
As a further improvement of the present invention: a path vector is represented in the following manner: for a path with binary labels $x_{(\ell n-1)} = (x_0, x_1, \ldots, x_{\ell n-1})$ that ends at level $\ell$ of a graphical structure, the path metric associated with it is $m(x_{(\ell n-1)}) = \sum_{j=0}^{\ell n-1} m(x_j)$, where $\ell$ is a fixed integer satisfying $0 \le \ell \le L$, $m(x_j) = (y_j \oplus x_j)\,|\phi_j|$ is the bit metric of the (j+1)-th binary label, the received LLR vector is $\phi = (\phi_0, \phi_1, \ldots, \phi_{N-1})$, and its corresponding hard-decision vector is $y = (y_0, y_1, \ldots, y_{N-1})$.
As a further improvement of the present invention: two data structures are used to carry out the bidirectional priority-first search algorithm, namely an open stack and a closed table; the open stack stores the paths visited so far by the bidirectional priority-first search algorithm, and the closed table keeps track of those paths that have previously been at the top of the open stack.
As a further improvement of the present invention: in the open stack, the top element of the bidirectional priority-first search algorithm is the element with the smallest guiding metric.
As a further improvement of the present invention: the exact path metric of a forward path $x_{(\ell n-1)} = (x_0, x_1, \ldots, x_{\ell n-1})$ is defined as $m(x_{(\ell n-1)}) + h_{\ell,j}$, where j is the index of the end state of the path.
As a further improvement of the present invention: in step (A), the path metrics are initialized, backward paths are inserted into the open stack and reordered so that the top backward path has the smallest path metric, thereby obtaining the best successor backward paths and their path metrics.
As a further improvement of the present invention: according to the structure of the tail-biting trellises, the exact path metrics of all successor forward paths of the top forward path in the open stack are computed, the forward paths in the open stack are rearranged in increasing order of exact path metric, and all successor forward paths that reach level L are deleted.
As a further improvement of the present invention: the information of the top forward path is recorded in the closed table; only the initial state, the end state and the end level are needed to identify a forward path.
The beneficial effects of the present invention are: a markedly smaller standard deviation in decoding complexity, and an average decoding complexity that is significantly better than that of all other decoding algorithms.
[Brief Description of the Drawings]
Fig. 1 is the trellis of a prior-art (2,1,2) tail-biting convolutional code with information length 5. Here, S0, S1, S2 and S3 denote the four possible states at each level;
Fig. 2 shows the sub-trellises of a prior-art (2,1,2) tail-biting convolutional code with information length 5. Here, S0, S1, S2 and S3 denote the four possible states at each level;
Fig. 3 shows the word error rate (WER) of WAVA(I=2) [19] and of an ML decoder for the [24,12,8] extended Golay code and the [96,48,16] tail-biting block code;
Fig. 4 shows the average number of branch-metric computations per information bit of BEAST, BiPFSA(λ=6), BS, PFSA(λ=6) and WAVA(I=2) for the [24,12,8] extended Golay code;
Fig. 5 shows the standard deviation of the number of branch-metric computations per information bit for the decoding algorithms of the [24,12,8] extended Golay code in Fig. 4;
Fig. 6 shows the average number of branch-metric computations per information bit of BEAST, BiPFSA(λ=12), BS, PFSA(λ=12) and WAVA(I=2) for the [96,48,16] tail-biting block code;
Fig. 7 shows the standard deviation of the number of branch-metric computations per information bit for the decoding algorithms of the [96,48,16] tail-biting block code in Fig. 6.
[Detailed Description of the Embodiments]
The present invention is further described below with reference to the accompanying drawings and specific embodiments.
A maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes, characterized by comprising the following steps: (A) a backward bidirectional priority-first search algorithm, guided by the path metrics of backward paths, is applied in a backward manner to the trellis T of the auxiliary super code $\bar{C}_{TB}$; (B) a forward bidirectional priority-first search algorithm, guided by the exact path metrics of forward paths, is applied to all sub-trellises of the tail-biting convolutional code $C_{TB}$. Here $C_{TB}$ denotes an (n, k, m) tail-biting convolutional code carrying kL information bits, in which k information bits are mapped to n code bits by feeding them into a linear convolutional circuit of memory order m, and $\bar{C}_{TB}$ consists of all binary words corresponding to paths on the trellis of $C_{TB}$.
A path vector is represented in the following manner: for a path with binary labels $x_{(\ell n-1)} = (x_0, x_1, \ldots, x_{\ell n-1})$ that ends at level $\ell$ of a graphical structure, the path metric associated with it is $m(x_{(\ell n-1)}) = \sum_{j=0}^{\ell n-1} m(x_j)$, where $\ell$ is a fixed integer satisfying $0 \le \ell \le L$, $m(x_j) = (y_j \oplus x_j)\,|\phi_j|$ is the bit metric of the (j+1)-th binary label, the received LLR vector is $\phi = (\phi_0, \phi_1, \ldots, \phi_{N-1})$, and its corresponding hard-decision vector is $y = (y_0, y_1, \ldots, y_{N-1})$.
Two data structures are used to carry out the bidirectional priority-first search algorithm, namely an open stack and a closed table; the open stack stores the paths visited so far by the bidirectional priority-first search algorithm, and the closed table keeps track of those paths that have previously been at the top of the open stack.
In the open stack, the top element of the bidirectional priority-first search algorithm is the element with the smallest guiding metric.
The exact path metric of a forward path $x_{(\ell n-1)} = (x_0, x_1, \ldots, x_{\ell n-1})$ is defined as $m(x_{(\ell n-1)}) + h_{\ell,j}$, where j is the index of the end state of the path.
In step (A), the path metrics are initialized, backward paths are inserted into the open stack and reordered so that the top backward path has the smallest path metric, thereby obtaining the best successor backward paths and their path metrics.
According to the structure of the tail-biting trellises, the exact path metrics of all successor forward paths of the top forward path in the open stack are computed, the forward paths in the open stack are rearranged in increasing order of exact path metric, and all successor forward paths that reach level L are deleted.
The information of the top forward path is recorded in the closed table; only the initial state, the end state and the end level are needed to identify a forward path.
The present invention provides a novel maximum-likelihood (ML) bidirectional priority-first search algorithm (BiPFSA) for tail-biting convolutional codes. Unlike other existing work, BiPFSA first performs the PFSA backward on the trellis and then, based on the information retained from the previous stage, applies the PFSA again in the forward direction to all sub-trellises. In order not to affect performance optimality, this work carefully designs a new ML decoding metric and a new evaluation function to guide the priority-first searches of the first and second phases. Simulation results obtained for the (2,1,6) and (2,1,12) tail-biting convolutional codes with information lengths 12 and 48, respectively, show that the average decoding complexity of BiPFSA is far below that of BEAST [8], PFSA [10] and BS [13] at all simulated SNR levels. Although BiPFSA guarantees optimal performance while WAVA does not, BiPFSA has lower average decoding complexity than the near-optimal WAVA [2].
In one embodiment, the bidirectional priority-first search algorithm is as follows. Let $C_{TB}$ denote an (n, k, m) tail-biting convolutional code carrying kL information bits, in which k information bits are mapped to n code bits by feeding them into a linear convolutional circuit of memory order m. Such a system is sometimes referred to as an $[N, K, d_{min}] = [Ln, Lk, d_{min}]$ block code, because an (n, k, m) tail-biting convolutional code is also a block code of size $2^{kL}$ and length $nL$, where $d_{min}$ is the minimum pairwise Hamming distance of the code. The trellis T of the tail-biting convolutional code has $M = 2^m$ states at each level $\ell$, and it has L+1 levels. Although only the tail-biting paths, which restrict themselves to identical initial and final states, correspond to codewords of $C_{TB}$, the auxiliary super code $\bar{C}_{TB}$ is introduced, which consists of all binary words corresponding to paths on the trellis of $C_{TB}$. In other words, the paths considered in $\bar{C}_{TB}$ may start and end in different states. As an example in Fig. 1, (0,0,0,0,1,1,1,1,1,0) labels a path whose initial state is $S_0$ and whose final state is $S_2$; it is therefore contained in $\bar{C}_{TB}$ but is not a codeword of $C_{TB}$.
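As an aside that is not part of the patent text, the following minimal Python sketch illustrates the tail-biting property on which the trellis decomposition rests: a feedforward (n, 1, m) convolutional encoder whose shift register is pre-loaded with the last m information bits, so that the encoding path starts and ends in the same state. The generator taps, the bit-ordering convention and the (2,1,2) example generators are illustrative assumptions (the generators of the code in Fig. 1 are not given in the text).

```python
# Minimal sketch (assumptions: feedforward (n,1,m) encoder, one input bit per step,
# generator taps given in octal, state = the last m input bits with the newest first).

def octal_taps(g_octal, m):
    """Return the m+1 tap bits of a generator given in octal (assumed convention)."""
    g = int(str(g_octal), 8)
    return [(g >> (m - j)) & 1 for j in range(m + 1)]   # taps on u[t], u[t-1], ..., u[t-m]

def encode_tail_biting(u, generators_octal, m):
    """Tail-biting encoding: the register is pre-loaded with the last m input bits,
    so the trellis path starts and ends in the same state."""
    taps = [octal_taps(g, m) for g in generators_octal]
    state = list(reversed(u[-m:]))                      # [u[L-1], ..., u[L-m]]
    init_state = tuple(state)
    out = []
    for bit in u:
        window = [bit] + state                          # u[t], u[t-1], ..., u[t-m]
        out += [sum(t * w for t, w in zip(tp, window)) % 2 for tp in taps]
        state = [bit] + state[:-1]                      # shift-register update
    assert tuple(state) == init_state                   # initial state == final state
    return out, init_state

# Example: a (2,1,2) code as in Fig. 1 (M = 2^2 = 4 states), with assumed generators (7,5).
codeword, s0 = encode_tail_biting([1, 0, 1, 1, 0], generators_octal=(7, 5), m=2)
print(len(codeword), s0)    # N = L*n = 10 code bits; common initial/final state s0
```

The sub-trellis T_i then simply restricts both the initial and the final state of such paths to the single state S_i.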
Denote a binary codeword by $v = (v_0, v_1, \ldots, v_{N-1})$. The hard-decision sequence $y = (y_0, y_1, \ldots, y_{N-1})$ corresponding to the received vector $r = (r_0, r_1, \ldots, r_{N-1})$ is defined as $y_j = 1$ if $\phi_j < 0$ and $y_j = 0$ otherwise, where $\phi_j = \ln[\Pr(r_j\,|\,0)/\Pr(r_j\,|\,1)]$ is the log-likelihood ratio (LLR) of the j-th component $r_j$, and $\Pr(r_j\,|\,0)$ and $\Pr(r_j\,|\,1)$ denote the channel transition probabilities given that code bit 0 and code bit 1, respectively, was transmitted. The syndrome of y is accordingly given by $s = y H^{T}$, where H is the parity-check matrix of the block code equivalent to $C_{TB}$ and the superscript "T" denotes matrix transposition.
Let E(s) be the set of all error patterns whose syndrome is s. Then the ML decoding output $\hat{v}$ for the received vector r equals $\hat{v} = y \oplus e^{*}$, where $e^{*}$ satisfies $\sum_{j=0}^{N-1} e^{*}_{j}\,|\phi_j| \le \sum_{j=0}^{N-1} e_j\,|\phi_j|$ for all $e = (e_0, e_1, \ldots, e_{N-1}) \in E(s)$, and $\oplus$ denotes component-wise modulo-2 addition. Accordingly, the following path metric is defined, which applies universally to the various paths on the trellis T or on the sub-trellises $T_i$.
Definition 1: Let $\ell$ be a fixed integer satisfying $0 \le \ell \le L$. Given the received LLR vector $\phi = (\phi_0, \phi_1, \ldots, \phi_{N-1})$ and its corresponding hard-decision vector $y = (y_0, y_1, \ldots, y_{N-1})$, for a path with binary labels $x_{(\ell n-1)} = (x_0, x_1, \ldots, x_{\ell n-1})$ that ends at level $\ell$ of a graphical structure (such as the trellis T or a sub-trellis $T_i$), the path metric associated with it is defined as $m(x_{(\ell n-1)}) = \sum_{j=0}^{\ell n-1} m(x_j)$, where $m(x_j) = (y_j \oplus x_j)\,|\phi_j|$ is the bit metric of the (j+1)-th binary label.
With this definition, the goal of ML decoding is to output the code path $x_{(Ln-1)}$ in $C_{TB}$ whose path metric $m(x_{(Ln-1)})$ is less than or equal to the path metrics of all other code paths in $C_{TB}$.
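As an illustration of Definition 1 and of this decoding goal, the following minimal Python sketch computes the LLRs, the hard decisions and the path metric. The channel-likelihood functions passed as arguments, and all names, are illustrative assumptions rather than part of the patent text.

```python
import math

def llr_and_hard_decisions(r, pr_given_0, pr_given_1):
    """phi_j = ln Pr(r_j|0) - ln Pr(r_j|1); y_j = 1 if phi_j < 0 else 0."""
    phi = [math.log(pr_given_0(rj)) - math.log(pr_given_1(rj)) for rj in r]
    y = [1 if p < 0 else 0 for p in phi]
    return phi, y

def path_metric(x, phi, y):
    """Definition-1 metric of a path labelled x = (x_0, ..., x_{ln-1})."""
    return sum((yj ^ xj) * abs(pj) for xj, pj, yj in zip(x, phi, y))

# ML decoding then amounts to minimizing path_metric over the full-length
# tail-biting code paths x of C_TB.
```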
As mentioned in the introduction, CMLDA and PFSA execute the VA in their first phase, with the aim of extracting specially designed heuristic information for use in the second phase. Running the VA in the first phase, however, inevitably induces a floor constant on the overall decoding complexity; this floor, equal to one VA execution, not only grows exponentially with the code memory order but also cannot be reduced by increasing the SNR. Evidently, the only way to break the floor barrier of the overall decoding complexity is to replace the first-phase VA by a search algorithm that is less complex than the VA. This motivates the BiPFSA proposed herein.
The overall task of each phase of BiPFSA is described as follows.
· Phase 1: A backward PFSA, guided by the path metrics of backward paths, is applied in a backward manner to the trellis T of $\bar{C}_{TB}$. Analogously to Definition 1, for a backward path labelled backward as $\tilde{x}_{(\ell n)} = (x_{N-1}, x_{N-2}, \ldots, x_{\ell n})$, its path metric is defined as
$m(\tilde{x}_{(\ell n)}) = \sum_{j=\ell n}^{N-1} m(x_j)$.    (1)
A heuristic function value for each state at each level, denoted $h_{\ell,i}$ (where $0 \le \ell \le L$ and $0 \le i \le M-1$), is then computed as specified in Steps 5 and 9 of the backward phase of BiPFSA.
· Phase 2: A forward PFSA, guided by the exact path metrics of forward paths, is applied to all sub-trellises $T_0, T_1, \ldots, T_{M-1}$ of $C_{TB}$. The exact path metric of a forward path $x_{(\ell n-1)} = (x_0, x_1, \ldots, x_{\ell n-1})$ is defined as $m(x_{(\ell n-1)}) + h_{\ell,j}$, where j is the index of the end state of the path (equivalently, $S_j$ is the end state of the path).
Two data structures will be used in the implementation of the forward and backward PFSA. They are named the open stack and the closed table, respectively. The former stores the paths that have been visited by the PFSA, and its top element is the element with the smallest guiding metric. The latter keeps track of those paths that were at the top of the open stack at some earlier time. The reason for these names is that the paths in the open stack may be extended further and thus remain open, whereas the paths in the closed table can no longer be extended and are therefore closed to future expansion.
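A minimal Python sketch of these two data structures follows, assuming a binary min-heap for the open stack so that the top element always has the smallest guiding metric; the closed-table key of initial state, end state and end level follows the footnote to Step 3 of the second phase. Class and method names are illustrative.

```python
import heapq

class OpenStack:
    """Open stack: paths visited so far, ordered by their guiding metric."""
    def __init__(self):
        self._heap = []
        self._count = 0                     # tie-breaker so heapq never compares paths

    def push(self, metric, path):
        heapq.heappush(self._heap, (metric, self._count, path))
        self._count += 1

    def pop_top(self):
        metric, _, path = heapq.heappop(self._heap)
        return metric, path

    def __bool__(self):
        return bool(self._heap)

class ClosedTable:
    """Closed table: records paths that have been at the top of the open stack."""
    def __init__(self):
        self._seen = set()

    def contains(self, init_state, end_state, end_level):
        return (init_state, end_state, end_level) in self._seen

    def record(self, init_state, end_state, end_level):
        self._seen.add((init_state, end_state, end_level))
```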
With the above background, the algorithmic steps of BiPFSA are now introduced.
<Phase 1: backward PFSA>
Step 1: Initialize $h_{\ell,i} = \infty$ for all $0 \le \ell \le L$ and $0 \le i \le M-1$. Set the current upper bound on the path metric to infinity, i.e., $c_{UB} = \infty$. Let the corresponding backward path with infinite path metric be $x_{UB}$ = null.
Step 2: Load onto the open stack all zero-length backward paths that start from a state at level L of the trellis T. There are M such backward paths. The path metrics of these zero-length backward paths are all initialized to zero.
Step 3: If the open stack is empty, output $x_{UB}$ as the final ML decision and stop BiPFSA without executing the second phase.
Step 4: Obtain the current top backward path in the open stack. Record the top backward path and its path metric as $x_{TOP}$ and $c_{TOP}$, respectively. Remove the top backward path from the open stack. If $x_{TOP}$ reaches level 0 (i.e., it ends at a state at level 0), go to Step 9.
Step 5: Let the state $S_i$ at level $\ell$ be the end state of $x_{TOP}$. If $h_{\ell,i}$ is less than infinity, go to Step 3; otherwise, assign $c_{TOP}$ to $h_{\ell,i}$.
Step 6: Obtain the successor backward paths of $x_{TOP}$ on the trellis and compute their path metrics according to (1). Delete those successor backward paths whose path metrics are not smaller than $c_{UB}$.
Step 7: If a successor backward path reaches level 0, its path metric is smaller than $c_{UB}$, and it is moreover a tail-biting path, replace $x_{UB}$ and $c_{UB}$ with this successor backward path and its path metric, respectively. Repeat this step until all successor backward paths have been examined.
Step 8: Insert the remaining successor backward paths from Step 6 into the open stack and reorder the backward paths so that the top backward path has the smallest path metric. Go to Step 3.
Step 9: If $x_{TOP}$ is a tail-biting path, output it as the ML decision and stop BiPFSA without executing the second phase; otherwise, for all $0 \le \ell \le L$ and $0 \le i \le M-1$, assign $h_{\ell,i} = c_{TOP}$ whenever $h_{\ell,i} = \infty$. Go to the second phase.
After the first phase is completed, $h_{\ell,i}$ as well as $x_{UB}$ and $c_{UB}$ are retained for use in the second phase. It is worth noting that $h_{\ell,i}$ can be regarded as an estimate of the path metric of a backward path from an arbitrary state at level L to the state $S_i$ at level $\ell$. By combining this estimate with the path metric of a forward path ending at the state $S_i$ at level $\ell$ on a sub-trellis, an estimate of the overall path metric of the combined length-N tail-biting path is obtained. It is reasonable to expect that, if a good estimate of the path metric of a tail-biting path can be used to guide the forward priority-first search in the second phase, the ML decision can be obtained very efficiently. This is the basic idea behind the design of the second phase of BiPFSA, whose algorithmic steps are given below.
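The following condensed Python sketch mirrors Steps 1-9 of the first phase above. It reuses the OpenStack class sketched earlier and assumes two helper functions that are not part of the patent text: backward_extensions(path, metric), which returns the one-level backward extensions of a path on the trellis T together with their cumulative metrics per (1), and is_tail_biting(path), which checks whether a level-0 path has identical initial and final states. The dictionary representation of a path and the rule used to fill the unvisited h values in Step 9 follow the reconstruction given above and are illustrative only.

```python
import math

def backward_pfsa(M, L, backward_extensions, is_tail_biting):
    """Phase 1 (Steps 1-9): backward PFSA on the trellis T of the super code.
    Returns (ml_decision_or_None, x_ub, c_ub, h); a non-None first entry means
    BiPFSA stops here without running the second phase."""
    INF = math.inf
    h = [[INF] * M for _ in range(L + 1)]                 # Step 1 (reconstructed initialization)
    x_ub, c_ub = None, INF
    stack = OpenStack()
    for i in range(M):                                    # Step 2: M zero-length backward paths
        stack.push(0.0, {"init_state": i, "end_state": i, "end_level": L, "labels": ()})
    while stack:                                          # Step 3 via the loop condition
        c_top, x_top = stack.pop_top()                    # Step 4
        lvl, st = x_top["end_level"], x_top["end_state"]
        if lvl == 0:                                      # Step 9
            if is_tail_biting(x_top):
                return x_top, x_ub, c_ub, h               # ML decision already found
            for l in range(L + 1):
                for i in range(M):
                    if h[l][i] == INF:
                        h[l][i] = c_top                   # fill unvisited states (reconstruction)
            return None, x_ub, c_ub, h                    # hand over to the second phase
        if h[lvl][st] < INF:                              # Step 5: end state already visited
            continue
        h[lvl][st] = c_top
        for succ, metric in backward_extensions(x_top, c_top):   # Step 6, metrics per (1)
            if metric >= c_ub:
                continue                                  # pruned by the current upper bound
            if succ["end_level"] == 0 and is_tail_biting(succ):  # Step 7: tighten the bound
                x_ub, c_ub = succ, metric
            stack.push(metric, succ)                      # Step 8: heap keeps the smallest on top
    return x_ub, x_ub, c_ub, h                            # Step 3: empty stack, x_UB is the decision
```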
<Phase 2: forward PFSA>
Step 1: Clear and reset the open stack from the first phase. Load onto the open stack all zero-length forward paths at level 0 of all sub-trellises $T_0, T_1, \ldots, T_{M-1}$. There are M such forward paths. The exact path metrics of these zero-length forward paths are initialized to $h_{0,i}$ for the zero-length path on sub-trellis $T_i$, respectively. Keep $x_{UB}$ and $c_{UB}$ from the first phase.
Step 2: If the open stack is empty, output $x_{UB}$ as the final ML decision and stop BiPFSA.¹
¹ This step will no longer output an $x_{UB}$ equal to the null value. In other words, $x_{UB} \ne$ null when the open stack is empty. This is because $x_{UB}$ initially remains null in the second phase only if no tail-biting path reaching level 0 was encountered in the first phase (see Step 7 of the backward PFSA). In this case $c_{UB} = \infty$, and Step 4 of the forward PFSA will therefore never delete any successor forward path. Consequently, Step 5 of the forward PFSA will replace $x_{UB}$ with the first successor forward path that reaches level L, and the open stack cannot be empty before this replacement.
Step 3: If the current top forward path $x_{TOP}$ in the open stack has already been recorded in the closed table, discard it from the open stack and go to Step 2; otherwise, record the information of the top forward path in the closed table.²
Step 4: Based on the structure of the M tail-biting sub-trellises, compute the exact path metrics of all successor forward paths of the top forward path in the open stack. Remove the top forward path from the open stack. Delete those successor forward paths whose exact path metrics are not smaller than $c_{UB}$.
Step 5: If a successor forward path reaches level L and its exact path metric is smaller than $c_{UB}$, replace $x_{UB}$ and $c_{UB}$ with this successor forward path and its exact path metric, respectively. Repeat this step until all successor forward paths have been examined. Delete all successor forward paths that reach level L.
Step 6: Insert the remaining successor forward paths from Step 5 into the open stack and rearrange the forward paths in the open stack in increasing order of exact path metric. Go to Step 2.
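A matching sketch of the second phase follows, again reusing the OpenStack and ClosedTable classes sketched earlier and assuming a helper forward_extensions(path) that returns the one-level forward extensions of a path on its sub-trellis T_i, each carrying its cumulative Definition-1 metric in the field "m". The path representation and all names are illustrative.

```python
def forward_pfsa(M, L, h, x_ub, c_ub, forward_extensions):
    """Phase 2 (Steps 1-6): forward PFSA on all sub-trellises, guided by the
    exact metric f = m(path) + h[end_level][end_state]."""
    stack = OpenStack()                                   # Step 1: cleared and refilled
    closed = ClosedTable()
    for i in range(M):                                    # zero-length path on each sub-trellis T_i
        stack.push(h[0][i], {"init_state": i, "end_state": i,
                             "end_level": 0, "labels": (), "m": 0.0})
    while stack:                                          # Step 2 via the loop condition
        _, x_top = stack.pop_top()
        key = (x_top["init_state"], x_top["end_state"], x_top["end_level"])
        if closed.contains(*key):                         # Step 3: drop already-closed paths
            continue
        closed.record(*key)
        for succ in forward_extensions(x_top):            # Step 4: one-level extensions on T_i
            f = succ["m"] + h[succ["end_level"]][succ["end_state"]]
            if f >= c_ub:
                continue                                  # pruned by the current upper bound
            if succ["end_level"] == L:                    # Step 5: full-length tail-biting path
                x_ub, c_ub = succ, f                      # tighten the bound; not pushed back
            else:
                stack.push(f, succ)                       # Step 6: heap keeps the smallest f on top
    return x_ub                                           # Step 2: x_UB is the final ML decision
```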
As can be seen from the algorithmic steps of the two phases, top paths that end in a state already visited at some earlier time are eliminated. Since such paths necessarily have a worse path metric (respectively, a worse exact path metric in the second phase) than the previously visited top path ending in the same state, their elimination does not affect performance optimality but helps speed up the priority-first search. To verify whether a state has been visited, two different criteria are applied in the first and second phases. In the first phase, note that $h_{\ell,i}$ is smaller than infinity if (and only if) the state $S_i$ at level $\ell$ has been visited; hence, $h_{\ell,i} < \infty$ can be used to verify whether the state was visited in the first phase. In the second phase, the closed table is introduced to record all paths that have been at the top and hence have been visited.
² Only the initial state, the end state and the end level are needed to uniquely identify a forward path.
A common feature of priority-first search, or of any sequential search algorithm, is that the efficiency of the sequential search can be improved when the search starts from more reliable components. The average decoding complexity of BiPFSA can therefore be further reduced by cyclically shifting the received vector r according to the reliability of its components. Noting that the trellis and its corresponding sub-trellises remain the same when a cyclic shift by any integer multiple of n bits is performed, BiPFSA can be used in exactly the same way to determine the cyclically shifted ML codeword with respect to the cyclically shifted received vector. The ML codeword can then easily be recovered by reversing the initial cyclic shift operation. Following an approach similar to [14], it is recommended to cyclically shift the received vector to the right, before feeding it to BiPFSA, by a number of positions determined by (2), where λ is a predetermined window size for the reliability measurement. Compared with the decoding complexity of BiPFSA, the additional computational complexity attributed to solving (2) and to the inverse left shift at the end of BiPFSA is almost negligible.
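Since rule (2) itself is reproduced only as an image in the original document, the following Python sketch merely illustrates the kind of reliability-driven, section-aligned cyclic right shift that is described: it places a highly reliable window of λ consecutive components at the front of the received vector and can be undone after decoding. The particular window rule is an assumption, not the patent's formula (2).

```python
def reliability_shift(phi, n, lam):
    """Pick a cyclic right shift, in whole trellis sections of n bits, that brings
    a highly reliable window of lam consecutive components to the front
    (an illustrative stand-in for rule (2), which is not reproduced here)."""
    N = len(phi)
    def window_reliability(start):
        return sum(abs(phi[(start + j) % N]) for j in range(lam))
    best_start = max(range(0, N, n), key=window_reliability)   # section-aligned starts only
    return (-best_start) % N                                   # right shift measured in bits

def cyclic_right_shift(v, shift):
    N = len(v)
    return [v[(j - shift) % N] for j in range(N)]

# Usage: shift r, phi and y by the same amount, run BiPFSA on the shifted vectors,
# then undo the shift (the corresponding left shift) on the decided codeword.
```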
At the end of this section it is pointed out that one could equally adopt a forward PFSA in the first phase and execute the PFSA in a backward manner in the second phase. Considering that decoders usually prefer to list the output code bits in the forward direction, the PFSA is chosen to be executed in the forward manner in the second phase.
Experiments over the AWGN channel:
In this part, the computational effort and the word error rate of the proposed ML decoding algorithm are investigated by simulation over the additive white Gaussian noise (AWGN) channel. The transmitted binary codeword $v = (v_0, v_1, \ldots, v_{N-1})$ is assumed to be binary phase-shift keying (BPSK) modulated. The received vector $r = (r_0, r_1, \ldots, r_{N-1})$ is therefore given by $r_j = (-1)^{v_j}\sqrt{\varepsilon} + n_j$ for $0 \le j \le N-1$, where $\varepsilon$ is the signal energy per channel bit and $n_j$ is an independent noise sample of a white Gaussian process with single-sided noise power $N_0$ per hertz. The signal-to-noise ratio (SNR) is accordingly given by $\varepsilon/N_0$. To account for the code redundancy of different code rates, the SNR per information bit, i.e., $\gamma_b = (N/K)\,(\varepsilon/N_0)$, is used in the following discussion. Note that, for the AWGN channel, the metric associated with a path $x_{(\ell n-1)}$ in Definition 1 can be equivalently simplified to $\sum_{j=0}^{\ell n-1} (y_j \oplus x_j)\,|r_j|$.
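A minimal Python sketch of the simulated channel and of the simplified AWGN metric stated above; the parameter names and the way the per-bit energy is derived from the SNR per information bit are illustrative assumptions.

```python
import random

def bpsk_awgn_channel(v, energy, n0, rng=random):
    """r_j = (-1)^{v_j} * sqrt(energy) + n_j, with Gaussian noise of variance N0/2."""
    sigma = (n0 / 2.0) ** 0.5
    return [((-1) ** vj) * (energy ** 0.5) + rng.gauss(0.0, sigma) for vj in v]

def simplified_awgn_metric(x, r):
    """AWGN form of the Definition-1 metric: sum over j of (y_j XOR x_j) * |r_j|,
    with hard decisions y_j = 1 if r_j < 0 else 0."""
    y = [1 if rj < 0 else 0 for rj in r]
    return sum((yj ^ xj) * abs(rj) for xj, yj, rj in zip(x, y, r))

# Example parameterization (assumed): with code rate R = K/N and SNR per
# information bit gamma_b = energy / (R * n0), one has energy = R * gamma_b * n0.
```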
Two tail-biting convolutional codes are used in the simulations. They are the (2,1,6) and (2,1,12) tail-biting convolutional codes with generators (103, 166) (octal) and (5133, 14477) (octal), respectively. With information lengths L = 12 and 48, the former is exactly the [24,12,8] extended Golay code [15], while the latter is equivalent to a [96,48,16] block code [16].
Before presenting the simulation results, it is pointed out that the computational effort of a sequential search algorithm includes not only the evaluation of the decoding metrics but also the work consumed in searching and reordering the stack elements. By adopting a priority-queue data structure, commonly referred to as a HEAP [17], the latter effort can be made comparable to the former. One can further adopt a hardware-based stack structure [18] and achieve constant complexity for every stack insertion operation. These remarks justify the customary adoption of the number of metric computations per information bit as the algorithmic complexity measure of sequential search algorithms.
The simulation results on decoding complexity, together with the corresponding word error rate (WER) of BiPFSA, are now presented. The four decoding algorithms compared with BiPFSA are BEAST, BS, PFSA and WAVA. These four algorithms have very different design characteristics, so each of them can be regarded as a typical representative of its own class. WAVA is probably the best-known suboptimal decoder for tail-biting convolutional codes; BEAST performs a bidirectional search and is generally considered the most efficient ML decoder in terms of average decoding complexity at high SNR; BS is based on bounded search and is the most recent ML decoder among the four; PFSA is a two-phase decoding algorithm. It is worth noting that, since CMLDA and PFSA have a similar two-phase structure and PFSA has been shown to outperform CMLDA in both average and maximum decoding complexity, CMLDA is not included in the simulations.
To facilitate specifying the parameters adopted in the simulations, the BiPFSA with window size λ (which determines the initial cyclic-shift amount through (2)) and the WAVA wrapping around at most I trellises [2] are parameterized as BiPFSA(λ) and WAVA(I), respectively. The same cyclic-shift preprocessing can also be applied to the PFSA in [10], which is similarly denoted PFSA(λ). It is emphasized again that BEAST, BiPFSA(λ), BS and PFSA(λ) are all ML decoders and should therefore all achieve the optimal WER. For all experiments, it is ensured that at least 100 word errors occur, so that no bias exists in the simulation results.
For reference, the WER performance of WAVA(I=2) and of the ML decoder is first shown in Fig. 3. Fig. 3 shows that, at WER = 10^-3, WAVA(I=2) suffers a coding loss of about 0.4 dB with respect to the ML performance when the [24,12,8] extended Golay code is adopted. When the [96,48,16] tail-biting block code is used instead, a smaller coding loss of 0.25 dB is observed for WAVA(I=2) at WER = 10^-3.
Next, the decoding complexities of BEAST, BiPFSA(λ=6), BS, PFSA(λ=6) and WAVA(I=2) for the [24,12,8] extended Golay code are studied, and the results are summarized in Fig. 4. The same figure also shows a benchmark lower limit on the decoding complexity, obtained by sequentially decoding a noiseless transmission of one codeword through the trellis. Three observations can be drawn from Fig. 4. First, the amount of one VA execution on the trellis is evidently a lower bound on the average number of branch-metric computations of WAVA(I=2) and PFSA(λ=6); it is a constant equal to 2^m x n = M x n = 2^6 x 2 = 128.
Second, BiPFSA(λ=6) is significantly better than all the other decoding algorithms in terms of average decoding complexity at all SNRs. Third, the average decoding complexity of BiPFSA(λ=6) approaches the benchmark lower limit at high SNR, which confirms the superior efficiency of BiPFSA(λ=6), since the benchmark limit can only be attained perfectly in a noiseless scenario.
In certain practical applications, the decoding latency is considered to be of importance comparable to the average decoding complexity, in particular when sequential decoding algorithms are considered. This concern is addressed by examining the maximum decoding complexities of the decoding algorithms in Fig. 4. The results are shown in Table I. As expected, it is observed from Table I that, for all SNRs, the maximum number of branch-metric computations of WAVA(I=2) remains a constant and equals the amount of running the VA twice on the trellis, i.e., 2^m x n x I = 2^6 x 2 x 2 = 256. It is also noted that, apart from WAVA(I=2) and PFSA(λ=6), BiPFSA(λ=6) has the smallest peak decoding complexity compared with BEAST and BS. More remarkably, at high SNR the maximum number of branch-metric computations of BiPFSA(λ=6) is even lower than that of WAVA(I=2). These experimental results show that BiPFSA(λ) not only provides the smallest average decoding complexity among the existing decoding algorithms for tail-biting convolutional codes, but is also a viable solution for systems with low decoding-latency requirements.
It should be noted that the maximum numbers of branch-metric computations listed in Table I sometimes increase as the SNR grows. This is because the quantities recorded in Table I are the maximum decoding complexities that ever occurred during the simulation runs; this situation is therefore experimentally possible and may not be eliminated by increasing the number of simulation runs. As an additional consideration, the standard deviations of the decoding complexities corresponding to the curves in Fig. 4 are therefore examined. The results in Fig. 5 then show that, as far as the standard deviation of the decoding complexity is concerned, the winner among BEAST, BiPFSA(λ=6), BS and WAVA(I=2) is either BEAST or BiPFSA(λ=6), depending on whether SNRb is above or below 3 dB. When only these two possible winners are considered, BiPFSA(λ=6) clearly has a more stable standard deviation of decoding complexity across different SNRs. The simulations also show that PFSA(λ=6) has a markedly smaller standard deviation of decoding complexity than the other four decoders, which indicates that the decoding complexity of the PFSA executed in its second phase hardly varies at all, because a constant-complexity VA is adopted in its first phase.
The above experiments are repeated for the [96,48,16] tail-biting block code. Among all tail-biting codes with the same code length and code size, this code has the largest free distance [16]. Its code memory order is 12, so its code trellis has 2^12 = 4096 states at every level. With such a huge number of states, it is expected to exhibit an extremely high decoding complexity; efficient decoding of this code is therefore a challenge.
Similar to Fig. 4, the average decoding complexities of BEAST, BiPFSA(λ=12), BS, PFSA(λ=12) and WAVA(I=2) for the [96,48,16] tail-biting block code are shown in Fig. 6.
Table I. Maximum number of branch-metric computations per information bit for the decoding algorithms of the [24,12,8] extended Golay code in Fig. 4. To ease identification of the best value, the smallest number in each column is shown in bold. (The table itself is reproduced only as an image in the original publication.)
The reliability window size is changed from λ=6 to λ=12 to cope with the long code length and large code size of the tail-biting convolutional code under consideration. It can be seen from the figure that BiPFSA(λ=12) still outperforms the other decoding algorithms in terms of average decoding complexity and approaches the ideal lower limit at high SNR. In contrast, the average decoding complexities of BEAST and BS increase markedly at low SNR and become worse than those of PFSA(λ=12) and WAVA(I=2) when the SNR is below 1.5 dB.
Table II shows the maximum decoding complexities of the five simulated decoders. Observations favourable to BiPFSA(λ=12) are obtained: at high SNR it beats all the other decoding algorithms, including PFSA(λ=12) and WAVA(I=2), in terms of maximum decoding complexity. When compared with PFSA(λ=12), the maximum decoding complexity of BiPFSA(λ=12) saves up to 5.5 x 10^3 branch-metric computations per information bit at SNRb = 4 dB.
The experiments on the standard deviation of the decoding complexity for the [96,48,16] tail-biting block code are summarized in Fig. 7. The figure again confirms, similarly to the observations obtained from Fig. 5, that the decoding complexity of BiPFSA(λ=12) is more stable than that of all the other decoding algorithms except PFSA(λ=12).
Table II. Maximum number of branch-metric computations per information bit for the decoding algorithms of the [96,48,16] tail-biting block code in Fig. 6. To ease identification of the best value, the smallest number in each column is shown in bold. (The table itself is reproduced only as an image in the original publication.)
REFERENCES
[1] Y. P. E. Wang and R. Ramesh, "To bite or not to bite - a study of tail bits versus tail-biting," in Proc. IEEE Personal, Indoor and Mobile Radio Communications, vol. 2, pp. 317-321, October 1996.
[2] R. Y. Shao, S. Lin, and M. P. C. Fossorier, "Two decoding algorithms for tailbiting codes," IEEE Trans. Commun., vol. COM-51, no. 10, pp. 1658-1665, October 2003.
[3] Q. Wang and V. K. Bhargava, "An efficient maximum likelihood decoding algorithm for generalized tail biting," IEEE Trans. Commun., vol. COM-37, no. 8, pp. 875-879, 1989.
[4] G. D. Forney Jr. and S. Guha, "Simple rate-1/3 convolutional and tail-biting quantum error-correcting codes," in Proc. International Symposium on Information Theory, 2005, pp. 1028-1032.
[5] H. H. Ma and J. K. Wolf, "On tail biting convolutional codes," IEEE Trans. Commun., vol. COM-34, no. 2, pp. 104-111, February 1986.
[6] R. V. Cox and C.-E. W. Sundberg, "An efficient adaptive circular Viterbi algorithm for decoding generalized tailbiting convolutional codes," IEEE Trans. Veh. Technol., vol. 43, no. 11, pp. 57-68, February 1994.
[7] H.-T. Pai, Y. S. Han, and Y.-J. Chu, "New HARQ scheme based on decoding of tail-biting convolutional codes in IEEE 802.16e," IEEE Trans. Veh. Technol., vol. 60, no. 3, pp. 912-918, March 2011.
[8] I. E. Bocharova, M. Handlery, R. Johannesson, and B. D. Kudryashov, "BEAST decoding of block codes obtained via convolutional codes," IEEE Trans. Inform. Theory, vol. 51, no. 5, pp. 1880-1891, May 2005.
[9] P. Shankar, P. N. A. Kumar, K. Sasidharan, B. S. Rajan, and A. S. Madhu, "Efficient convergent maximum likelihood decoding on tail-biting," available at http://arxiv.org/abs/cs.IT/0601023.
[10] Y. S. Han, T.-Y. Wu, H.-T. Pai, P.-N. Chen, and S.-L. Shieh, "Priority-first search decoding for convolutional tail-biting codes," in Proc. International Symposium on Information Theory and Its Applications (ISITA 2008), 2008.
[11] H.-T. Pai, Y. S. Han, T.-Y. Wu, P.-N. Chen, and S.-L. Shieh, "Low-complexity ML decoding for convolutional tail-biting codes," IEEE Communications Letters, vol. 12, no. 12, pp. 883-885, December 2008.
[12] X.-T. Wang, H. Qian, W.-D. Xiang, J. Xu, and H. Huang, "An efficient ML decoder for tail-biting codes based on circular trap detection," IEEE Trans. Commun., vol. 61, no. 4, pp. 1212-1221, April 2013.
[13] H. Qian, X.-T. Wang, K. Kang, and W.-D. Xiang, "A depth-first ML decoding algorithm for tail-biting trellises," IEEE Transactions on Vehicular Technology, vol. 64, no. 8, pp. 3339-3346, 2015.
[14] M. Handlery, R. Johannesson, and V. V. Zyablov, "Boosting the error performance of suboptimal tailbiting decoders," IEEE Trans. Commun., vol. 51, no. 9, pp. 1485-1491, September 2003.
[15] P. Stahl, J. B. Anderson, and R. Johannesson, "Optimal and near-optimal encoders for short and moderate-length tailbiting trellises," IEEE Trans. Inform. Theory, vol. 45, pp. 2562-2571, November 1999.
[16] I. E. Bocharova, B. D. Kudryashov, R. Johannesson, and P. Stahl, "Searching for tailbiting codes with large minimum distances," in Proc. IEEE Int. Symp. on Information Theory, Sorrento, Italy, 2000.
The above content is a further detailed description of the present invention in combination with specific preferred embodiments, and it should not be concluded that the specific implementation of the present invention is limited to these descriptions. For a person of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may also be made without departing from the concept of the present invention, and all of them shall be regarded as falling within the protection scope of the present invention.

Claims (8)

  1. A maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes, characterized by comprising the following steps: (A) a backward bidirectional priority-first search algorithm, guided by the path metrics of backward paths, is applied in a backward manner to the trellis T of the auxiliary super code $\bar{C}_{TB}$; (B) a forward bidirectional priority-first search algorithm, guided by the exact path metrics of forward paths, is applied to all sub-trellises of the tail-biting convolutional code $C_{TB}$; wherein $C_{TB}$ denotes an (n, k, m) tail-biting convolutional code carrying kL information bits, in which k information bits are mapped to n code bits by feeding them into a linear convolutional circuit of memory order m, and $\bar{C}_{TB}$ consists of all binary words corresponding to paths on the trellis of $C_{TB}$.
  2. The maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes according to claim 1, characterized in that a path vector is represented in the following manner: for a path with binary labels $x_{(\ell n-1)} = (x_0, x_1, \ldots, x_{\ell n-1})$ that ends at level $\ell$ of a graphical structure, the path metric associated with it is $m(x_{(\ell n-1)}) = \sum_{j=0}^{\ell n-1} m(x_j)$, where $\ell$ is a fixed integer satisfying $0 \le \ell \le L$, $m(x_j) = (y_j \oplus x_j)\,|\phi_j|$ is the bit metric of the (j+1)-th binary label, the received LLR vector is $\phi = (\phi_0, \phi_1, \ldots, \phi_{N-1})$, and its corresponding hard-decision vector is $y = (y_0, y_1, \ldots, y_{N-1})$.
  3. The maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes according to claim 1, characterized in that two data structures are used to carry out the bidirectional priority-first search algorithm, namely an open stack and a closed table; the open stack stores the paths visited so far by the bidirectional priority-first search algorithm, and the closed table keeps track of those paths that have previously been at the top of the open stack.
  4. The maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes according to claim 3, characterized in that, in the open stack, the top element of the bidirectional priority-first search algorithm is the element with the smallest guiding metric.
  5. The maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes according to claim 1, characterized in that the exact path metric of a forward path $x_{(\ell n-1)} = (x_0, x_1, \ldots, x_{\ell n-1})$ is defined as $m(x_{(\ell n-1)}) + h_{\ell,j}$, where j is the index of the end state of the path.
  6. The maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes according to claim 3, characterized in that, in step (A), the path metrics are initialized, backward paths are inserted into the open stack and reordered so that the top backward path has the smallest path metric, thereby obtaining the best successor backward paths and their path metrics.
  7. The maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes according to claim 3, characterized in that, according to the structure of the tail-biting trellises, the exact path metrics of all successor forward paths of the top forward path in the open stack are computed, the forward paths in the open stack are rearranged in increasing order of exact path metric, and all successor forward paths that reach level L are deleted.
  8. The maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes according to claim 7, characterized in that the information of the top forward path is recorded in the closed table; only the initial state, the end state and the end level are needed to identify a forward path.
PCT/CN2017/097677 2017-08-11 2017-08-16 Maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes WO2018184334A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710687179.6A CN107645296B (zh) 2017-08-11 2017-08-11 Maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes
CN201710687179.6 2017-08-11

Publications (1)

Publication Number Publication Date
WO2018184334A1 true WO2018184334A1 (zh) 2018-10-11

Family

ID=61110997

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/097677 WO2018184334A1 (zh) 2017-08-11 2017-08-16 衔尾卷积码用最大似然双向优先级优先搜索算法

Country Status (2)

Country Link
CN (1) CN107645296B (zh)
WO (1) WO2018184334A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1841116A2 (en) * 2006-03-31 2007-10-03 STMicroelectronics (Beijing) R&D Co. Ltd. Decoding method for tail-biting convolutional codes using a search-depth Viterbi algorithm
CN102857242A (zh) * 2011-06-28 2013-01-02 联芯科技有限公司 Decoding method and device for tail-biting convolutional codes
CN103634015A (zh) * 2012-08-28 2014-03-12 上海无线通信研究中心 Maximum likelihood decoding algorithm for tail-biting codes
CN106301391A (zh) * 2016-08-08 2017-01-04 西安电子科技大学 Improved soft-output decoding method for tail-biting convolutional codes


Also Published As

Publication number Publication date
CN107645296A (zh) 2018-01-30
CN107645296B (zh) 2019-11-12

Similar Documents

Publication Publication Date Title
US6982659B2 (en) Method and apparatus for iterative decoding
KR100846869B1 (ko) Low-complexity LDPC decoding apparatus and method therefor
US20080016426A1 (en) Low-Complexity High-Performance Low-Rate Communications Codes
CN108847850A (zh) 一种基于crc-sscl的分段极化码编译码方法
KR20080098391A (ko) MAP decoder with bidirectional sliding-window architecture
CN110661533B (zh) Method for optimizing decoder memory and polar code decoding performance
CN106059596A (zh) Block Markov superposition coding method using binary BCH codes as component codes and decoding method thereof
CN107911195A (zh) CVA-based channel decoding method for tail-biting convolutional codes
JP4836379B2 (ja) Method for decoding encoded data having entropy codes, and corresponding decoding device and transmission system
Hu et al. A comparative study of polar code decoding algorithms
Subbalakshmi et al. On the joint source-channel decoding of variable-length encoded sources: The BSC case
Doan et al. Decoding Reed-Muller codes with successive codeword permutations
Dou et al. Soft-decision based sliding-window decoding of staircase codes
Han et al. A low-complexity maximum-likelihood decoder for tail-biting convolutional codes
CN102611464B (zh) Turbo decoder based on parallel updating of extrinsic information
WO2018184334A1 (zh) Maximum-likelihood bidirectional priority-first search algorithm for tail-biting convolutional codes
Sui et al. CRC-Aided High-Rate Convolutional Codes With Short Blocklengths for List Decoding
Wu et al. On the design of variable-length error-correcting codes
Antonini et al. Suppressing error floors in SCPPM via an efficient CRC-aided list viterbi decoding algorithm
Doan et al. Successive-cancellation decoding of Reed-Muller codes with fast Hadamard transform
ES2549773T3 (es) Channel decoding method and in-loop convolutional decoder
Bocharova et al. BEAST decoding for block codes
Bocharova et al. BEAST decoding of block codes obtained via convolutional codes
Zolotarev et al. Modified Viterbi algorithm for decoding of block codes
Lu et al. Fast List Decoding of High-Rate Polar Codes Based on Minimum-Combinations Sets

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17904788

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17904788

Country of ref document: EP

Kind code of ref document: A1