CN111384978A - Polar code decoding method and device and communication equipment


Publication number
CN111384978A
Authority
CN
China
Prior art keywords
llr
node
layer
memory
subtree
Prior art date
Legal status
Granted
Application number
CN201811636287.1A
Other languages
Chinese (zh)
Other versions
CN111384978B (en)
Inventor
郭东亮 (Guo Dongliang)
徐磊 (Xu Lei)
赵锋 (Zhao Feng)
Current Assignee
Datang Mobile Communications Equipment Co Ltd
Original Assignee
China Academy of Telecommunications Technology CATT
Priority date
Filing date
Publication date
Application filed by China Academy of Telecommunications Technology CATT filed Critical China Academy of Telecommunications Technology CATT
Priority to CN201811636287.1A priority Critical patent/CN111384978B/en
Publication of CN111384978A publication Critical patent/CN111384978A/en
Application granted granted Critical
Publication of CN111384978B publication Critical patent/CN111384978B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13Linear codes

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Semiconductor Memories (AREA)
  • Design And Manufacture Of Integrated Circuits (AREA)

Abstract

The invention provides a polar code decoding method, a polar code decoding device and a communication device. The method comprises: splitting the binary tree that describes the polar code decoding process into at least two subtrees, following the splitting principle that each subtree has one and only one right leaf node; traversing each subtree layer by layer in turn until the right leaf node of the subtree is reached, performing one PM (path metric) sorting at the left node on the same layer as the right leaf node, performing one soft-value sorting at the right leaf node to calculate the PM values of the candidate paths, and performing one more PM sorting for path screening. The embodiment of the invention takes the subtree as the basic processing unit, which reduces FPGA resource consumption and improves resource utilization; the polar code decoding method avoids the copy problem, greatly reduces resource occupation, effectively reduces the number of sorting and PM-update operations, noticeably reduces overhead, and improves decoding efficiency.

Description

Polar code decoding method and device and communication equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a polar code decoding method, a polar code decoding apparatus and a communication device.
Background
In 1948, Shannon proposed the channel coding theorem, which defines the performance limit of error-correction coding. In the past 20 years, channel codes represented by Turbo codes and LDPC codes have adopted iterative decoding techniques and can approach the Shannon channel capacity. In 2009, Arikan proposed the design concept of channel polarization and, based on it, polar codes.
The core idea of the SCL (Successive Cancellation List) decoder is to keep multiple candidate paths during decoding, thereby improving decoding reliability.
The main problem with the SCL algorithm is that the entire binary tree needs to be traversed, which makes the computation heavy. For this reason, the FSCL algorithm was proposed. Its core idea is: special leaf-node types are defined; during the binary-tree traversal, the decoder stops when such a leaf node is encountered, expands several paths according to the node type, and at the same time gives the reliability metric of each path. Based on these simplified operations on the leaf nodes, the polar decoding process becomes equivalent to traversing an incomplete (pruned) binary tree.
Four types of leaf nodes are widely recognized at present, and different leaf-node types are expanded in different ways. The type definitions and path expansion rules are given below.
ΔPM denotes the path-metric increment of an expanded path;
sv[i] is the i-th LLR (Log-Likelihood Ratio) value of a given leaf node;
β[i] denotes the hard decision of sv[i] (β[i] = 0 if sv[i] ≥ 0, otherwise β[i] = 1), and β is the hard-decision bit sequence of the leaf node;
q is the parity-check result of the hard-decision bit sequence of the leaf node.
The path expansion rules for different types of leaf nodes are described as follows:
1. Rate-0: all the bits of the node are frozen bits. Only one path is expanded; all the bit values of the node are zero, and the path-metric increment is:
ΔPM = Σ_{i: sv[i] < 0} |sv[i]|
2. Rate-1: all the bits of the node are information bits. Four paths are expanded. Let |sv[min1]| and |sv[min2]| be the smallest and second-smallest absolute values in the node's LLR sequence. Then:
Path 1: ΔPM = 0, bit sequence β;
Path 2: ΔPM = |sv[min1]|, bit sequence β with the bit at position min1 flipped;
Path 3: ΔPM = |sv[min2]|, bit sequence β with the bit at position min2 flipped;
Path 4: ΔPM = |sv[min1]| + |sv[min2]|, bit sequence β with the bits at positions min1 and min2 both flipped.
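For illustration only, the following Python sketch (not part of the original application) enumerates the four Rate-1 candidates exactly as listed above; the function names and the list-of-tuples return format are choices made for this example.

```python
# Illustrative sketch (not from the patent): Rate-1 path expansion.
# sv is the list of LLRs of the Rate-1 node; returns (delta_pm, bits) candidates.

def hard_decision(sv):
    # bit is 0 for a non-negative LLR, 1 for a negative LLR
    return [0 if llr >= 0 else 1 for llr in sv]

def expand_rate1(sv):
    beta = hard_decision(sv)
    # indices of the smallest and second-smallest |LLR|
    order = sorted(range(len(sv)), key=lambda i: abs(sv[i]))
    min1, min2 = order[0], order[1]
    flip = lambda bits, idxs: [b ^ 1 if i in idxs else b for i, b in enumerate(bits)]
    return [
        (0.0,                           beta),                        # path 1
        (abs(sv[min1]),                 flip(beta, {min1})),          # path 2
        (abs(sv[min2]),                 flip(beta, {min2})),          # path 3
        (abs(sv[min1]) + abs(sv[min2]), flip(beta, {min1, min2})),    # path 4
    ]
```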
3. SPC: the first bit of the node is a frozen bit and all the other bits are information bits.
Define the smallest, second-smallest, third-smallest and fourth-smallest absolute values in the node's LLR sequence as |sv[min1]|, |sv[min2]|, |sv[min3]| and |sv[min4]|, and let q be the parity-check result of the node's hard-decision bit sequence β. Eight paths are expanded; in each case the expanded bit sequence is obtained from β by flipping the least-reliable positions involved in the ΔPM term while keeping the single parity check satisfied:
Path 1: if q = 0, ΔPM = 0; if q = 1, ΔPM = |sv[min1]|;
Path 2: ΔPM = |sv[min1]| + |sv[min2]|;
Path 3: ΔPM = |sv[min1]| + |sv[min3]|;
Path 4: ΔPM = |sv[min1]| + |sv[min4]|;
Path 5: if q = 0, ΔPM = |sv[min2]| + |sv[min3]|; if q = 1, ΔPM = |sv[min1]| + |sv[min2]| + |sv[min3]|;
Path 6: if q = 0, ΔPM = |sv[min2]| + |sv[min4]|; if q = 1, ΔPM = |sv[min1]| + |sv[min2]| + |sv[min4]|;
Path 7: if q = 0, ΔPM = |sv[min3]| + |sv[min4]|; if q = 1, ΔPM = |sv[min1]| + |sv[min3]| + |sv[min4]|;
Path 8: ΔPM = |sv[min1]| + |sv[min2]| + |sv[min3]| + |sv[min4]|.
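As an illustration only, the following Python sketch (not from the patent) computes the eight SPC path-metric increments listed above; it deliberately omits the flipped bit patterns, which the original gives only as figures, and the function name is an assumption for this example.

```python
# Illustrative sketch (not from the patent): the eight SPC path-metric
# increments listed above. Assumes the node has at least four LLRs.

def spc_delta_pms(sv):
    a = sorted(abs(x) for x in sv)          # sorted |LLR| values
    m1, m2, m3, m4 = a[0], a[1], a[2], a[3]
    q = sum(1 for x in sv if x < 0) % 2     # parity of the hard-decision sequence
    base = m1 if q == 1 else 0.0            # extra penalty when the parity check fails
    return [
        base,                   # path 1
        m1 + m2,                # path 2
        m1 + m3,                # path 3
        m1 + m4,                # path 4
        base + m2 + m3,         # path 5
        base + m2 + m4,         # path 6
        base + m3 + m4,         # path 7
        m1 + m2 + m3 + m4,      # path 8
    ]
```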
4. REP: the last bit of the node is an information bit and all the other bits are frozen bits. Two paths are expanded:
Path 1: ΔPM = Σ_{i: sv[i] < 0} |sv[i]|, bit sequence all zeros;
Path 2: ΔPM = Σ_{i: sv[i] ≥ 0} |sv[i]|, bit sequence all ones.
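For reference only, a minimal Python sketch (not part of the original application) of the Rate-0 and REP increments as reconstructed above; the helper names are assumptions made for this example.

```python
# Illustrative sketch (not from the patent): path-metric increments for
# Rate-0 (single all-zero path) and REP (all-zero / all-one paths).

def rate0_delta_pm(sv):
    # penalty of forcing every bit to 0: magnitudes of the negative LLRs
    return sum(abs(x) for x in sv if x < 0)

def rep_delta_pms(sv):
    pm_all_zero = sum(abs(x) for x in sv if x < 0)    # path 1: all bits 0
    pm_all_one = sum(abs(x) for x in sv if x >= 0)    # path 2: all bits 1
    return [(pm_all_zero, 0), (pm_all_one, 1)]
```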
When one of these four node types is encountered during the decoding traversal, the node is not split further; instead, several paths are expanded according to the path metrics, so that the complete binary tree is reduced to an incomplete binary tree.
The FSCL algorithm has several disadvantages:
1. Because some leaf-node types need to expand multiple paths, a large amount of hardware resource overhead is incurred.
2. The calculation process is complex and involves multiple sorting operations, so it is difficult for a hardware implementation to meet the timing requirements;
3. The copy-between-paths problem is very complex, and implementing it on an FPGA introduces interconnections carrying a huge amount of data;
4. Because the number of LLRs decreases level by level, the memory applied for at the beginning is idle at later stages, so the memory utilization is low;
5. The device utilization is low. f and g are the two basic operation units of the polar decoding algorithm: f performs the sign-and-minimum (min-sum) operation and g performs the LLR summation operation.
f(a,b) ≈ sign(a)·sign(b)·min(|a|,|b|)
g(a,b,u) = (1 - 2u)·a + b, where u is the hard bit (partial sum) fed back from the left child.
In addition, a large number of absolute-value and summation operations exist in the polar decoding process; the traditional algorithm cannot reuse the existing f and g devices for these operations, which brings considerable extra overhead.
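For reference, a minimal software sketch of these two basic operations (not part of the original application) is given below; the g form is the standard polar-code formulation, since the original gives g only as a figure, and the function names are choices made for this example.

```python
import math

# Illustrative sketch (not from the patent): the two basic polar operations.

def f(a, b):
    # min-sum approximation: sign(a) * sign(b) * min(|a|, |b|)
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def g(a, b, u):
    # LLR summation with the partial-sum bit u from the left child
    return b + (1 - 2 * u) * a
```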
The above points make polar decoders perform unsatisfactorily in terms of occupied area and throughput rate.
Disclosure of Invention
The invention aims to provide a polar code decoding method, a polar code decoding apparatus and a communication device, so as to solve the problems of occupied area and throughput of polar decoders in the prior art.
In order to solve the above problem, an embodiment of the present invention provides a polar code decoding method, including:
splitting the binary tree describing the decoding process of the polar code into at least two subtrees, the splitting principle being that each subtree has one and only one right leaf node;
sequentially traversing each subtree layer by layer until the right leaf node of the subtree is found, performing one sorting of the path metrics (PM) at the left node on the same layer as the right leaf node, performing one soft-value sorting at the right leaf node to calculate the PM values of the candidate paths, and performing one more PM sorting for path screening;
wherein the path-screening result of the previous subtree is used as the input of the next subtree.
Traversing each subtree layer by layer until the right leaf node of the subtree is found includes:
configuring a memory for the subtree, where the size of the memory is N times the number of soft bits of the soft-value sequence to be decoded; before the traversal of each subtree starts, the values in the memory need to be initialized to the soft-value sequence to be decoded; N is an integer greater than or equal to 1;
each subtree is traversed layer by layer, starting from its root node;
if no left leaf node exists in the current traversal layer, the number of copies of the LLRs of the nodes of the current traversal layer in the memory doubles;
and if a left leaf node exists in the current traversal layer, performing the path expansion caused by the left leaf node according to its type, and determining, according to its type, the number of copies of the LLRs of the right node of the current traversal layer in the memory.
Wherein, if a left leaf node exists in the current traversal layer,
one copy of the LLRs of the node on the layer above the left leaf node, stored in the memory, is used to calculate the PM of the left leaf node, and the PM calculation result of the left leaf node is stored in the memory at the resource position of that copy;
and the remaining copies of the LLRs of the node on the layer above the left leaf node, other than the one used for calculating the PM, are used to calculate the LLRs corresponding to the path expansion caused by the left leaf node, according to its type.
Wherein performing the path expansion caused by the left leaf node according to the type of the left leaf node comprises:
if the type of the left leaf node is Rate-0, expanding one path;
and if the type of the left leaf node is REP, expanding two paths.
Determining the number of copies of the LLRs of the right node of the current traversal layer in the memory according to the type of the left leaf node comprises:
if the type of the left leaf node is Rate-0 and the number of copies in the memory of the LLRs of the node on the layer above is x, the number of copies in the memory of the LLRs of the right node on the layer of the left leaf node is 2x-1;
if the type of the left leaf node is REP and the number of copies in the memory of the LLRs of the node on the layer above is x, the number of copies in the memory of the LLRs of the right node on the layer of the left leaf node is x-1.
Wherein, if the remaining copies of the LLRs of the node on the layer above the left leaf node stored in the memory, other than the one used for calculating the PM, are not sufficient to calculate the LLRs corresponding to the path expansion caused by the left leaf node, the method further comprises:
splitting the left leaf node of the current traversal layer into two child nodes, both of which are leaf nodes, and continuing to traverse the layer where these child nodes are located; or,
increasing the size of the memory configured for the subtree and traversing the subtree layer by layer again.
Wherein the method further comprises:
and splitting again the nodes of the binary tree that have not yet been traversed, to obtain a plurality of subtrees.
Wherein, the f device used for the sign-and-minimum (absolute value) operation in the process of traversing the subtree is:
f(a,b) = sign(a)·sign(b)·min(|a|,|b|);
or,
f(a,b) ≈ [sign(a)·sign(b)·min(|a|,|b|)];
or,
f(a,b) ≈ min(|a|,|b|) (the magnitude-only form);
where a = [llr_0, llr_1, ..., llr_(N/2-1)], b = [llr_(N/2), llr_(N/2+1), ..., llr_(N-1)]; N denotes the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_(N-1)].
The LLR summation g device used in the process of traversing the subtree is:
g(a, b, β) = (1 - 2·β)·a + b, applied element-wise;
or,
a second form of g that accumulates the ΔPM value at a left leaf node, parameterized by the memory size L of the subtree and the g-device index idx;
where a = [llr_0, llr_1, ..., llr_(N/2-1)], b = [llr_(N/2), llr_(N/2+1), ..., llr_(N-1)]; N denotes the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_(N-1)]; L is the size of the memory of the subtree; idx is the index of the g device; and β denotes the hard-bit sequence.
An embodiment of the present invention further provides a communication device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the program implementing the steps of:
splitting the binary tree describing the decoding process of the polar code into at least two subtrees, the splitting principle being that each subtree has one and only one right leaf node;
sequentially traversing each subtree layer by layer until the right leaf node of the subtree is found, performing one sorting of the path metrics (PM) at the left node on the same layer as the right leaf node, performing one soft-value sorting at the right leaf node to calculate the PM values of the candidate paths, and performing one more PM sorting for path screening;
wherein the path-screening result of the previous subtree is used as the input of the next subtree.
Wherein the processor is further configured to:
configuring a memory for the subtree, where the size of the memory is N times the number of soft bits of the soft-value sequence to be decoded; before the traversal of each subtree starts, the values in the memory need to be initialized to the soft-value sequence to be decoded; N is an integer greater than or equal to 1;
each subtree is traversed layer by layer, starting from its root node;
if no left leaf node exists in the current traversal layer, the number of copies of the LLRs of the nodes of the current traversal layer in the memory doubles;
and if a left leaf node exists in the current traversal layer, performing the path expansion caused by the left leaf node according to its type, and determining, according to its type, the number of copies of the LLRs of the right node of the current traversal layer in the memory.
Wherein, if a left leaf node exists in the current traversal layer,
one copy of the LLRs of the node on the layer above the left leaf node, stored in the memory, is used to calculate the PM of the left leaf node, and the PM calculation result of the left leaf node is stored in the memory at the resource position of that copy;
and the remaining copies of the LLRs of the node on the layer above the left leaf node, other than the one used for calculating the PM, are used to calculate the LLRs corresponding to the path expansion caused by the left leaf node, according to its type.
Wherein the processor is further configured to:
if the type of the left leaf node is Rate-0, expanding one path;
and if the type of the left leaf node is REP, expanding two paths.
Wherein the processor is further configured to:
if the type of the left leaf node is Rate-0 and the number of copies in the memory of the LLRs of the node on the layer above is x, the number of copies in the memory of the LLRs of the right node on the layer of the left leaf node is 2x-1;
if the type of the left leaf node is REP and the number of copies in the memory of the LLRs of the node on the layer above is x, the number of copies in the memory of the LLRs of the right node on the layer of the left leaf node is x-1.
Wherein, if the remaining copies of the LLRs of the node on the layer above the left leaf node stored in the memory, other than the one used for calculating the PM, are not sufficient to calculate the LLRs corresponding to the path expansion caused by the left leaf node, the processor is further configured to:
split the left leaf node of the current traversal layer into two child nodes, both of which are leaf nodes, and continue to traverse the layer where these child nodes are located; or,
increase the size of the memory configured for the subtree and traverse the subtree layer by layer again.
Wherein the processor is further configured to:
and splitting again the nodes of the binary tree that have not yet been traversed, to obtain a plurality of subtrees.
Wherein, the f device used by the processor for the sign-and-minimum (absolute value) operation in the process of traversing the subtree is:
f(a,b) = sign(a)·sign(b)·min(|a|,|b|);
or,
f(a,b) ≈ [sign(a)·sign(b)·min(|a|,|b|)];
or,
f(a,b) ≈ min(|a|,|b|) (the magnitude-only form);
where a = [llr_0, llr_1, ..., llr_(N/2-1)], b = [llr_(N/2), llr_(N/2+1), ..., llr_(N-1)]; N denotes the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_(N-1)].
Wherein, the LLR summation g device used by the processor in the process of traversing the subtree is:
g(a, b, β) = (1 - 2·β)·a + b, applied element-wise;
or,
a second form of g that accumulates the ΔPM value at a left leaf node, parameterized by the memory size L of the subtree and the g-device index idx;
where a = [llr_0, llr_1, ..., llr_(N/2-1)], b = [llr_(N/2), llr_(N/2+1), ..., llr_(N-1)]; N denotes the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_(N-1)]; L is the size of the memory of the subtree; idx is the index of the g device; and β denotes the hard-bit sequence.
An embodiment of the present invention further provides a polar code decoding apparatus, including:
a splitting module, configured to split the binary tree describing the decoding process of the polar code into at least two subtrees, the splitting principle being that each subtree has one and only one right leaf node;
a traversal decoding module, configured to sequentially traverse each subtree layer by layer until the right leaf node of the subtree is found, perform one sorting of the path metrics (PM) at the left node on the same layer as the right leaf node, perform one soft-value sorting at the right leaf node to calculate the PM values of the candidate paths, and perform one more PM sorting for path screening;
wherein the path-screening result of the previous subtree is used as the input of the next subtree.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the polar code decoding method are implemented as described above.
The technical scheme of the invention at least has the following beneficial effects:
in the polar code decoding method, the polar code decoding apparatus and the communication device, the subtree is used as the basic processing unit, which reduces FPGA resource consumption and improves resource utilization; the polar code decoding method avoids the copy problem, greatly reduces resource occupation, effectively reduces the number of sorting and PM-update operations, noticeably reduces overhead, and improves decoding efficiency.
Drawings
FIG. 1 is a flowchart illustrating steps of a method for decoding a polar code according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a sub-tree in the decoding method of the polar code according to the embodiment of the present invention;
fig. 3 is a schematic diagram illustrating resource splitting in a polar code decoding method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating resource consumption in a polar code decoding method according to an embodiment of the present invention;
fig. 5 is a diagram illustrating an example of operations of g devices in a polar code decoding method according to an embodiment of the present invention;
FIG. 6 is a simplified circuit diagram of an f/g device in a polar code decoding method according to an embodiment of the present invention;
fig. 7 is a schematic diagram illustrating a memory and f devices in a polar code decoding method according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a communication device according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a polar code decoding apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, an embodiment of the present invention provides a polar code decoding method, including:
step 11, splitting the binary tree describing the decoding process of the polar code into at least two subtrees, the splitting principle being that each subtree has one and only one right leaf node;
step 12, sequentially traversing each subtree layer by layer until the right leaf node of the subtree is found, performing one sorting of the path metrics (PM) at the left node on the same layer as the right leaf node, performing one soft-value sorting at the right leaf node to calculate the PM values of the candidate paths, and performing one more PM sorting for path screening;
wherein the path-screening result of the previous subtree is used as the input of the next subtree.
In the embodiment of the invention, the subtree is used as the basic processing unit, so the resource demand does not need to match the whole binary tree, only the split subtree. First, the splitting of the binary tree is explained: the binary-tree structure is split so that each subtree has one and only one right leaf node; each structure obtained by the split is called a subtree of the binary tree. Taking (N=512, K=120) as an example, 17 subtrees with the structure shown in FIG. 2 can be split off. Table 1 lists the number of subtrees obtained by splitting for part of the (N, K) combinations.
TABLE 1
N=512; each row lists four (K, number of subtrees) pairs: K, subtrees, K, subtrees, K, subtrees, K, subtrees
36 6 65 10 95 14 125 17
37 6 66 10 96 14 126 17
38 6 67 10 97 13 127 17
39 7 68 11 98 13 128 17
40 6 69 11 99 14 129 17
41 6 70 11 100 15 130 17
42 6 71 12 101 13 131 17
43 7 72 12 102 13 132 16
44 8 73 12 103 13 133 17
45 7 74 12 104 13 134 18
46 7 75 11 105 13 135 18
47 7 76 11 106 13 136 18
48 7 77 11 107 14 137 18
49 8 78 11 108 14 138 18
50 8 79 12 109 14 139 18
51 8 80 12 110 15 140 19
52 8 81 13 111 15 141 17
53 8 82 14 112 14 142 17
54 8 83 14 113 14 143 17
55 8 84 14 114 15 144 18
56 9 85 14 115 15 145 18
57 8 86 15 116 15 146 18
58 9 87 15 117 16 147 18
59 9 88 12 118 17 148 17
60 10 89 12 119 17 149 18
61 10 90 13 120 17 150 18
62 10 91 14 121 15 151 19
63 10 92 14 122 15 152 19
64 10 93 14 123 15 153 19
65 10 94 14 124 16 154 19
Taking the subtree in FIG. 2 as an example, analysis shows that the traversal of this subtree involves the following operations:
the 7 PM-value sorting operations of the conventional implementation are abandoned, and only 1 PM-value sorting operation is performed, at node 373 in FIG. 2; at node 374 in FIG. 2, one soft-value sorting is performed to compute the PM values of the candidate paths, and one more PM-value sorting is performed for path screening.
Further, the embodiment of the present invention avoids the copy operation of the conventional implementation and replaces it with a resource-splitting implementation, which is described below; FIG. 3 is a schematic diagram of resource splitting.
optionally, step 12 includes:
configuring a memory for the subtree, where the size of the memory is N times the number of soft bits of the soft-value sequence to be decoded; before the traversal of each subtree starts, the values in the memory need to be initialized to the soft-value sequence to be decoded; N is an integer greater than or equal to 1;
each subtree is traversed layer by layer, starting from its root node;
if no left leaf node exists in the current traversal layer, the number of copies of the LLRs of the nodes of the current traversal layer in the memory doubles;
and if a left leaf node exists in the current traversal layer, performing the path expansion caused by the left leaf node according to its type, and determining, according to its type, the number of copies of the LLRs of the right node of the current traversal layer in the memory.
In order to keep the operations continuous, LLRs of different levels need to be stored, and the memory size required by the conventional implementation is:
(512+256+128+64+32+16+8+4+2+1)*Lmax = 1023*Lmax.
Since the present invention replaces copying with resource splitting, only 1024 memory units are needed, which is no longer strongly correlated with the number of paths.
In the embodiment of the invention, the size of the memory is N times the number of soft bits of the target sequence. For example, for the root node, the memory stores N copies of the LLRs of the root node, which can also be described as the root node having N resources in the memory. A "resource" in this scheme refers to one copy of the LLRs: if a node has N copies of its LLRs in the memory, the node is considered to have N resources.
If a left leaf node exists in the current traversal layer, paths are expanded according to the type of the left leaf node, and path sorting and selection are not performed for the time being. The increase or decrease of the corresponding copy resources is related to the node type; accordingly, the number of copies of the LLRs of the right node of the current traversal layer in the memory is determined according to the type of the left leaf node:
as shown in FIG. 4, if the type of the left leaf node is Rate-0 and the number of copies in the memory of the LLRs of the node on the layer above is x, the number of copies in the memory of the LLRs of the right node on the layer of the left leaf node is 2x-1;
as shown in FIG. 4, if the type of the left leaf node is REP and the number of copies in the memory of the LLRs of the node on the layer above is x, the number of copies in the memory of the LLRs of the right node on the layer of the left leaf node is x-1.
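For illustration only, the copy-count bookkeeping stated above can be sketched as follows (not part of the original application); the function name and the use of None / string labels for node types are assumptions made for this example, and the actual LLR computations are omitted.

```python
# Illustrative sketch (not from the patent): how the number of LLR copies
# held in the subtree memory evolves from one layer to the next.

def next_copy_count(copies, left_leaf_type):
    """copies: number of copies of the parent-layer LLRs currently in memory.
    left_leaf_type: None (no left leaf node), 'Rate-0' or 'REP'."""
    if left_leaf_type is None:
        return 2 * copies          # no left leaf: copy count doubles
    if left_leaf_type == 'Rate-0':
        return 2 * copies - 1      # per the Rate-0 rule above
    if left_leaf_type == 'REP':
        return copies - 1          # per the REP rule above
    raise ValueError(left_leaf_type)
```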
Preferably, if a left leaf node exists in the current traversal layer,
one copy of the LLRs of the node on the layer above the left leaf node, stored in the memory, is used to calculate the PM of the left leaf node, and the PM calculation result of the left leaf node is stored in the memory at the resource position of that copy;
and the remaining copies of the LLRs of the node on the layer above the left leaf node, other than the one used for calculating the PM, are used to calculate the LLRs corresponding to the path expansion caused by the left leaf node, according to its type.
Preferably, in the above embodiment of the present invention, performing the path expansion caused by the left leaf node according to its type includes:
if the type of the left leaf node is Rate-0, expanding one path;
and if the type of the left leaf node is REP, expanding two paths.
It should be noted that, if the remaining copies of the LLRs of the node on the layer above the left leaf node stored in the memory, other than the one used for calculating the PM, are not sufficient to calculate the LLRs corresponding to the path expansion caused by the left leaf node, the method further includes:
splitting the left leaf node of the current traversal layer into two child nodes, both of which are leaf nodes, and continuing to traverse the layer where these child nodes are located; or,
increasing the size of the memory configured for the subtree and traversing the subtree layer by layer again.
Further, the method further comprises:
and splitting again the nodes of the binary tree that have not yet been traversed, to obtain a plurality of subtrees.
The above mode of operation can continue until the right leaf node is encountered. After the right leaf node is encountered, the metric values of the different paths can be processed uniformly, and then the path sorting, screening and other steps are entered. Therefore, only 1 PM sorting + 1 soft-value sorting are required in the embodiment of the present invention.
Different scenarios are encountered during the traversal of the binary tree and therefore impose different requirements. In order to adapt to these different scenarios while using the existing devices to complete all the functions, the embodiment of the present invention redesigns the f devices and g devices.
Preferably, the f device used for the sign-and-minimum (absolute value) operation in the process of traversing the subtree is:
f(a,b) = sign(a)·sign(b)·min(|a|,|b|); this form is suitable for the basic operation scenario, such as the operation from node 0 to node 1 shown in FIG. 2.
Or,
f(a,b) ≈ [sign(a)·sign(b)·min(|a|,|b|)]; this form is suitable for the scenario from a non-leaf node to a left leaf node, such as the operation from node 1 to node 3 shown in FIG. 2.
Or,
f(a,b) ≈ min(|a|,|b|); this magnitude-only form applies to the case where the right leaf node takes the absolute value, such as the operation from node 186 to node 374 shown in FIG. 2.
Where a = [llr_0, llr_1, ..., llr_(N/2-1)], b = [llr_(N/2), llr_(N/2+1), ..., llr_(N-1)]; N denotes the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_(N-1)].
Preferably, the LLR summation g device used in the process of traversing the subtree is:
g(a, b, β) = (1 - 2·β)·a + b, applied element-wise; this form is suitable for the common calculation scenario, such as the operation from node 1 to node 4 shown in FIG. 2.
Or,
a second form of g that accumulates the ΔPM value at a left leaf node, parameterized by the memory size L of the subtree and the g-device index idx; this form is suitable for the scenario in which the left leaf node calculates the ΔPM value.
Where a = [llr_0, llr_1, ..., llr_(N/2-1)], b = [llr_(N/2), llr_(N/2+1), ..., llr_(N-1)]; N denotes the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_(N-1)]; L is the size of the memory of the subtree; idx is the index of the g device; and β denotes the hard-bit sequence.
In the scenario in which the left leaf node calculates the ΔPM value, an LLR sequence of length N yields the ΔPM result after passing through the sub-operations of this second g form. For example, assume a total of 8 soft bits to be added, 4 positive and 4 negative; denote the positive values x1, x2, x3, x4 and the negative values y1, y2, y3, y4. Then the additions of the positive values and of the negative values can each be completed after at most log2(8) operations, as shown in FIG. 5.
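For illustration only, the pairwise (tree) summation described above can be sketched as follows (not part of the original application); the numeric values are invented example data and the function name is an assumption.

```python
# Illustrative sketch (not from the patent): summing 8 soft bits (4 positive,
# 4 negative) with a pairwise reduction tree, finishing after log2(8) = 3 stages.

def tree_sum(values):
    layer = list(values)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(0.0)  # pad so that pairs always exist
        layer = [layer[i] + layer[i + 1] for i in range(0, len(layer), 2)]
    return layer[0]

positives = [1.2, 0.7, 2.5, 0.4]       # x1..x4 (example values)
negatives = [-0.9, -1.1, -0.3, -2.0]   # y1..y4 (example values)
delta_pm_parts = (tree_sum(positives), tree_sum(negatives))
```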
FIG. 6 is a simplified circuit diagram of the f/g function implementation, in which x and y are spliced and output: x is fixed at 1 bit, y is an N-bit unsigned positive number, and the output is (N+1) bits, with x occupying the most significant bit and y occupying the lower N bits.
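As an illustration only, the bit splicing just described can be sketched as follows (not part of the original application); the function names are assumptions made for this example.

```python
# Illustrative sketch (not from the patent): concatenating the 1-bit x with the
# N-bit unsigned y into an (N+1)-bit word, with x in the most significant bit.

def splice(x, y, n_bits):
    assert x in (0, 1) and 0 <= y < (1 << n_bits)
    return (x << n_bits) | y

def unsplice(word, n_bits):
    return (word >> n_bits) & 1, word & ((1 << n_bits) - 1)

# example: x=1, y=0b0110 with N=4 gives the 5-bit word 0b10110
assert splice(1, 0b0110, 4) == 0b10110
```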
In the embodiment of the invention, the input and the output of the f/g devices use the same on-chip memory, which saves hardware resources. The connection rule is shown in FIG. 7; for clarity, it is drawn as two figures.
Under the coordination of the control information, the output of each layer follows the same rule, cycling log N times. Table 2 below shows the rule; identical letters indicate that an input-output relationship exists.
TABLE 2
Index | Memory | Memory (period 1) | Memory (period 2) | Memory (period 3)
0 | a | a | a | a
1 | b | e | c | b
2 | c | b | e | c
3 | d | f | g | d
4 | e | c | b | e
5 | f | g | d | f
6 | g | d | f | g
7 | h | h | h | h
In summary, the embodiment of the present invention realizes complex functions by multiplexing devices with simple logic; the inputs and outputs can be cycled, which facilitates the design of the control logic; the copy problem is avoided, which effectively reduces resource usage and improves resource utilization; and the subtree-based centralized sorting effectively reduces the number of sorting and PM-update operations, noticeably reduces overhead, and improves decoding efficiency.
As shown in fig. 8, an embodiment of the present invention further provides a communication device, including: a memory 810, a processor 800 and a computer program stored on the memory 810 and executable on the processor 800, the processor 800 implementing the following steps when executing the program:
splitting the binary tree describing the decoding process of the polar code into at least two subtrees, the splitting principle being that each subtree has one and only one right leaf node;
sequentially traversing each subtree layer by layer until the right leaf node of the subtree is found, performing one sorting of the path metrics (PM) at the left node on the same layer as the right leaf node, performing one soft-value sorting at the right leaf node to calculate the PM values of the candidate paths, and performing one more PM sorting for path screening;
wherein the path-screening result of the previous subtree is used as the input of the next subtree.
Optionally, in the foregoing embodiment of the present invention, the processor 800 is further configured to:
configuring a memory for the subtree, where the size of the memory is N times the number of soft bits of the soft-value sequence to be decoded; before the traversal of each subtree starts, the values in the memory need to be initialized to the soft-value sequence to be decoded; N is an integer greater than or equal to 1;
each subtree is traversed layer by layer, starting from its root node;
if no left leaf node exists in the current traversal layer, the number of copies of the LLRs of the nodes of the current traversal layer in the memory doubles;
and if a left leaf node exists in the current traversal layer, performing the path expansion caused by the left leaf node according to its type, and determining, according to its type, the number of copies of the LLRs of the right node of the current traversal layer in the memory.
Optionally, in the above embodiment of the present invention, if a left leaf node exists in the current traversal layer,
one copy of the LLRs of the node on the layer above the left leaf node, stored in the memory, is used to calculate the PM of the left leaf node, and the PM calculation result of the left leaf node is stored in the memory at the resource position of that copy;
and the remaining copies of the LLRs of the node on the layer above the left leaf node, other than the one used for calculating the PM, are used to calculate the LLRs corresponding to the path expansion caused by the left leaf node, according to its type.
Optionally, in the foregoing embodiment of the present invention, the processor 800 is further configured to:
if the type of the left leaf node is Rate-0, expanding one path;
and if the type of the left leaf node is REP, expanding two paths.
Optionally, in the foregoing embodiment of the present invention, the processor 800 is further configured to:
if the type of the left leaf node is Rate-0 and the number of copies in the memory of the LLRs of the node on the layer above is x, the number of copies in the memory of the LLRs of the right node on the layer of the left leaf node is 2x-1;
if the type of the left leaf node is REP and the number of copies in the memory of the LLRs of the node on the layer above is x, the number of copies in the memory of the LLRs of the right node on the layer of the left leaf node is x-1.
Optionally, in the above embodiment of the present invention, if the remaining copies of the LLRs of the node on the layer above the left leaf node stored in the memory, other than the one used for calculating the PM, are not sufficient to calculate the LLRs corresponding to the path expansion caused by the left leaf node, the processor 800 is further configured to:
split the left leaf node of the current traversal layer into two child nodes, both of which are leaf nodes, and continue to traverse the layer where these child nodes are located; or,
increase the size of the memory configured for the subtree and traverse the subtree layer by layer again.
Optionally, in the foregoing embodiment of the present invention, the processor 800 is further configured to:
and splitting again the nodes of the binary tree that have not yet been traversed, to obtain a plurality of subtrees.
Optionally, in the foregoing embodiment of the present invention, the f device used by the processor 800 for the sign-and-minimum (absolute value) operation in the process of traversing the subtree is:
f(a,b) = sign(a)·sign(b)·min(|a|,|b|);
or,
f(a,b) ≈ [sign(a)·sign(b)·min(|a|,|b|)];
or,
f(a,b) ≈ min(|a|,|b|) (the magnitude-only form);
where a = [llr_0, llr_1, ..., llr_(N/2-1)], b = [llr_(N/2), llr_(N/2+1), ..., llr_(N-1)]; N denotes the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_(N-1)].
Optionally, in the foregoing embodiment of the present invention, the LLR summation g device used by the processor 800 in the process of traversing the subtree is:
g(a, b, β) = (1 - 2·β)·a + b, applied element-wise;
or,
a second form of g that accumulates the ΔPM value at a left leaf node, parameterized by the memory size L of the subtree and the g-device index idx;
where a = [llr_0, llr_1, ..., llr_(N/2-1)], b = [llr_(N/2), llr_(N/2+1), ..., llr_(N-1)]; N denotes the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_(N-1)]; L is the size of the memory of the subtree; idx is the index of the g device; and β denotes the hard-bit sequence.
In summary, the embodiment of the present invention realizes complex functions by multiplexing devices with simple logic; the inputs and outputs can be cycled, which facilitates the design of the control logic; the copy problem is avoided, which effectively reduces resource usage and improves resource utilization; and the subtree-based centralized sorting effectively reduces the number of sorting and PM-update operations, noticeably reduces overhead, and improves decoding efficiency.
It should be noted that, the communication device provided in the embodiments of the present invention is a communication device capable of executing the above-mentioned polar code decoding method, and all embodiments of the above-mentioned polar code decoding method are applicable to the communication device, and can achieve the same or similar beneficial effects.
As shown in fig. 9, an embodiment of the present invention further provides a polar code decoding apparatus, including:
a splitting module 91, configured to split the binary tree describing the decoding process of the polar code into at least two subtrees, the splitting principle being that each subtree has one and only one right leaf node;
a traversal decoding module 92, configured to sequentially traverse each subtree layer by layer until the right leaf node of the subtree is found, perform one sorting of the path metrics (PM) at the left node on the same layer as the right leaf node, perform one soft-value sorting at the right leaf node to calculate the PM values of the candidate paths, and perform one more PM sorting for path screening;
wherein the path-screening result of the previous subtree is used as the input of the next subtree.
Optionally, in the above embodiment of the present invention, the traversal decoding module includes:
a configuration submodule, configured to configure a memory for the subtree, where the size of the memory is N times the number of soft bits of the soft-value sequence to be decoded; before the traversal of each subtree starts, the values in the memory need to be initialized to the soft-value sequence to be decoded; N is an integer greater than or equal to 1;
a first traversal submodule, configured to traverse the nodes of each subtree layer by layer, starting from the root node;
a second traversal submodule, configured to, if no left leaf node exists in the current traversal layer, double the number of copies of the LLRs (log-likelihood ratios) of the nodes of the current traversal layer in the memory;
and a third traversal submodule, configured to, if a left leaf node exists in the current traversal layer, perform the path expansion caused by the left leaf node according to its type and determine, according to its type, the number of copies of the LLRs of the right node of the current traversal layer in the memory.
Optionally, in the above embodiment of the present invention, if a left leaf node exists in the current traversal layer,
one copy of the LLRs of the node on the layer above the left leaf node, stored in the memory, is used to calculate the PM of the left leaf node, and the PM calculation result of the left leaf node is stored in the memory at the resource position of that copy;
and the remaining copies of the LLRs of the node on the layer above the left leaf node, other than the one used for calculating the PM, are used to calculate the LLRs corresponding to the path expansion caused by the left leaf node, according to its type.
Optionally, in the above embodiment of the present invention, performing the path expansion caused by the left leaf node according to its type includes:
if the type of the left leaf node is Rate-0, expanding one path;
and if the type of the left leaf node is REP, expanding two paths.
Optionally, in the foregoing embodiment of the present invention, the third traversal submodule includes:
a first unit, configured to, if the type of the left leaf node is Rate-0 and the number of copies in the memory of the LLRs of the node on the layer above is x, set the number of copies in the memory of the LLRs of the right node on the layer of the left leaf node to 2x-1;
and a second unit, configured to, if the type of the left leaf node is REP and the number of copies in the memory of the LLRs of the node on the layer above is x, set the number of copies in the memory of the LLRs of the right node on the layer of the left leaf node to x-1.
Optionally, in the above embodiment of the present invention, if the remaining copies of the LLRs of the node on the layer above the left leaf node stored in the memory, other than the one used for calculating the PM, are not sufficient to calculate the LLRs corresponding to the path expansion caused by the left leaf node, the apparatus further includes:
a node splitting module, configured to split the left leaf node of the current traversal layer into two child nodes, both of which are leaf nodes, and continue to traverse the layer where these child nodes are located; or,
a memory increasing module, configured to increase the size of the memory configured for the subtree and traverse the subtree layer by layer again.
Optionally, in the above embodiment of the present invention, the apparatus further includes:
a secondary splitting module, configured to split again the nodes of the binary tree that have not yet been traversed, to obtain a plurality of subtrees.
Optionally, in the above embodiment of the present invention, the f device used for the sign-and-minimum (absolute value) operation in the process of traversing the subtree is:
f(a,b) = sign(a)·sign(b)·min(|a|,|b|);
or,
f(a,b) ≈ [sign(a)·sign(b)·min(|a|,|b|)];
or,
f(a,b) ≈ min(|a|,|b|) (the magnitude-only form);
where a = [llr_0, llr_1, ..., llr_(N/2-1)], b = [llr_(N/2), llr_(N/2+1), ..., llr_(N-1)]; N denotes the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_(N-1)].
Optionally, in the foregoing embodiment of the present invention, the LLR summation g device used in the process of traversing the subtree is:
g(a, b, β) = (1 - 2·β)·a + b, applied element-wise;
or,
a second form of g that accumulates the ΔPM value at a left leaf node, parameterized by the memory size L of the subtree and the g-device index idx;
where a = [llr_0, llr_1, ..., llr_(N/2-1)], b = [llr_(N/2), llr_(N/2+1), ..., llr_(N-1)]; N denotes the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_(N-1)]; L is the size of the memory of the subtree; idx is the index of the g device; and β denotes the hard-bit sequence.
In summary, the embodiment of the present invention realizes complex functions by multiplexing devices with simple logic; the inputs and outputs can be cycled, which facilitates the design of the control logic; the copy problem is avoided, which effectively reduces resource usage and improves resource utilization; and the subtree-based centralized sorting effectively reduces the number of sorting and PM-update operations, noticeably reduces overhead, and improves decoding efficiency.
It should be noted that the polar code decoding apparatus provided in the embodiments of the present invention is a polar code decoding apparatus capable of executing the above-mentioned polar code decoding method, and all embodiments of the above-mentioned polar code decoding method are applicable to the polar code decoding apparatus and can achieve the same or similar beneficial effects.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned embodiment of the polar code decoding method, and can achieve the same technical effect, and is not described herein again to avoid repetition. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (20)

1. A method for decoding a polar code, comprising:
splitting the binary tree describing the decoding process of the polar code into at least two subtrees, the splitting principle being that each subtree has one and only one right leaf node;
sequentially traversing each subtree layer by layer until the right leaf node of the subtree is found, performing one sorting of the path metrics (PM) at the left node on the same layer as the right leaf node, performing one soft-value sorting at the right leaf node to calculate the PM values of the candidate paths, and performing one more PM sorting for path screening;
wherein the path-screening result of the previous subtree is used as the input of the next subtree.
2. The method of claim 1, wherein traversing each subtree layer by layer until the right leaf node of the subtree is found comprises:
configuring a memory for the subtree, where the size of the memory is N times the number of soft bits of the soft-value sequence to be decoded, and before the traversal of each subtree starts, the values in the memory need to be initialized to the soft-value sequence to be decoded; N is an integer greater than or equal to 1;
each subtree is traversed layer by layer, starting from its root node;
if no left leaf node exists in the current traversal layer, the number of copies of the LLRs of the nodes of the current traversal layer in the memory doubles;
and if a left leaf node exists in the current traversal layer, performing the path expansion caused by the left leaf node according to its type, and determining, according to its type, the number of copies of the LLRs of the right node of the current traversal layer in the memory.
3. The method of claim 2, wherein, if a left leaf node exists in the current traversal layer,
one copy of the LLRs of the node on the layer above the left leaf node, stored in the memory, is used to calculate the PM of the left leaf node, and the PM calculation result of the left leaf node is stored in the memory at the resource position of that copy;
and the remaining copies of the LLRs of the node on the layer above the left leaf node, other than the one used for calculating the PM, are used to calculate the LLRs corresponding to the path expansion caused by the left leaf node, according to its type.
4. The method of claim 2, wherein performing the path expansion caused by the left leaf node according to the type of the left leaf node comprises:
if the type of the left leaf node is Rate-0, expanding one path;
and if the type of the left leaf node is REP, expanding two paths.
5. The method of claim 2, wherein determining the number of copies of the LLRs of the right node of the current traversal layer in the memory according to the type of the left leaf node comprises:
if the type of the left leaf node is Rate-0 and the number of copies in the memory of the LLRs of the node on the layer above is x, the number of copies in the memory of the LLRs of the right node on the layer of the left leaf node is 2x-1;
if the type of the left leaf node is REP and the number of copies in the memory of the LLRs of the node on the layer above is x, the number of copies in the memory of the LLRs of the right node on the layer of the left leaf node is x-1.
6. The method of claim 3, wherein, if the remaining copies of the LLRs of the node on the layer above the left leaf node stored in the memory, other than the one used for calculating the PM, are not sufficient to calculate the LLRs corresponding to the path expansion caused by the left leaf node, the method further comprises:
splitting the left leaf node of the current traversal layer into two child nodes, both of which are leaf nodes, and continuing to traverse the layer where these child nodes are located; or,
increasing the size of the memory configured for the subtree and traversing the subtree layer by layer again.
7. The method of claim 6, further comprising:
splitting again the nodes of the binary tree that have not yet been traversed, so as to obtain a plurality of subtrees.
8. The method of any one of claims 1-7, wherein the f operation (taking the minimum of absolute values) used in traversing the subtree is:
f(a,b)=sign(a)sign(b)min(|a|,|b|);
or,
f(a,b)≈[sign(a)sign(b)min(|a|,|b|)];
or,
[formula given as image FDA0001930117530000021 in the original publication; not reproduced in the text]
wherein a = [llr_0, llr_1, ..., llr_{N/2-1}]; b = [llr_{N/2}, llr_{N/2+1}, ..., llr_{N-1}]; N represents the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_{N-1}].
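The two textual alternatives of claim 8 are the min-sum form and its quantised variant. A sketch of the min-sum form, together with the exact LLR-domain f function that is often used in its place (the exact form is an assumption here, since the claim's third alternative is given only as an image), is:

    import numpy as np

    def f_min_sum(a, b):
        # f(a, b) = sign(a) * sign(b) * min(|a|, |b|)   (claim 8, first form)
        return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

    def f_exact(a, b):
        # Exact LLR-domain f function, 2*artanh(tanh(a/2)*tanh(b/2));
        # not taken from the claim's image formula, shown only for comparison.
        return 2.0 * np.arctanh(np.tanh(a / 2.0) * np.tanh(b / 2.0))

    a = np.array([1.0, -2.0])
    b = np.array([0.5, 3.0])
    print(f_min_sum(a, b))   # [ 0.5 -2. ]
    print(f_exact(a, b))     # exact values, approximated by the min-sum form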
9. The method of any one of claims 1-7, wherein the g operation (LLR summation) used in traversing the subtree is:
[formula given as image FDA0001930117530000031 in the original publication; not reproduced in the text]
or,
[formula given as image FDA0001930117530000032 in the original publication; not reproduced in the text]
wherein a = [llr_0, llr_1, ..., llr_{N/2-1}]; b = [llr_{N/2}, llr_{N/2+1}, ..., llr_{N-1}]; N represents the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_{N-1}]; L is the size of the memory of the subtree; idx is the index of the g operation;
[the quantity given as image FDA0001930117530000033 in the original publication] is a hard bit sequence.
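Claim 9's two alternatives are given only as images; the textbook g operation they build on is sketched below. The memory indexing via L and idx is omitted, which is an assumption of this sketch, not a statement about the claimed formula.

    import numpy as np

    def g_op(a, b, u_hard):
        # g(a, b, u) = (1 - 2*u) * a + b, where u is the hard-bit decision
        # sequence fed back from the left child; the claim's image formulas
        # additionally index the subtree memory with L and idx, omitted here.
        u = np.asarray(u_hard)
        return (1 - 2 * u) * a + b

    a = np.array([1.0, -2.0])
    b = np.array([0.5, 3.0])
    u = np.array([0, 1])
    print(g_op(a, b, u))   # [1.5, 5.0]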
10. A communication device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor; wherein the processor implements the following steps when executing the program:
splitting a binary tree describing a decoding process of the polarization code into at least two subtrees according to a splitting principle that each subtree has only one right leaf node;
traversing each subtree in turn, layer by layer, until the right leaf node of the subtree is found, sorting path metrics (PMs) at the left leaf node in the same layer as the right leaf node, sorting soft values at the right leaf node so as to calculate the PM values corresponding to the paths, and sorting the PM values again so as to screen the paths;
wherein the path-screening result of the previous subtree is used as the input of the next subtree.
11. The communications device of claim 10, wherein the processor is further configured to:
configuring a memory for the subtree, wherein the size of the memory is N times the number of soft bits of the soft value sequence to be decoded, before the traversal of each subtree starts the memory is initialized to the soft value sequence to be decoded, and N is an integer greater than or equal to 1;
for each subtree, traversing the nodes of the subtree layer by layer starting from the root node;
if no left leaf node exists in the current traversal layer, the number of copies in the memory of the LLR of the node of the current traversal layer is doubled;
and if a left leaf node exists in the current traversal layer, performing the path expansion caused by the left leaf node according to the type of the left leaf node, and determining, according to the type of the left leaf node, the number of copies in the memory of the LLR of the right node of the current traversal layer.
12. The communication device of claim 11, wherein, if a left leaf node exists in the current traversal layer,
one copy of the LLR, stored in the memory, of the node in the layer above the layer where the left leaf node is located is used for calculating the PM of the left leaf node, and the PM calculation result of the left leaf node is stored in the memory location occupied by that one LLR copy;
and the remaining copies, stored in the memory, of the LLR of the node in the layer above the layer where the left leaf node is located, other than the one copy used for calculating the PM, are used for calculating, according to the type of the left leaf node, the LLRs corresponding to the path expansion caused by the left leaf node.
13. The communications device of claim 11, wherein the processor is further configured to:
if the type of the left leaf node is rate0, expanding one path;
and if the type of the left leaf node is REP, expanding two paths.
14. The communications device of claim 11, wherein the processor is further configured to:
if the type of the left leaf node is rate0 and the number of copies in the memory of the LLR of the node in the layer above the layer where the left leaf node is located is x, the number of copies in the memory of the LLR of the right node in the layer where the left leaf node is located is 2x-1;
and if the type of the left leaf node is REP and the number of copies in the memory of the LLR of the node in the layer above the layer where the left leaf node is located is x, the number of copies in the memory of the LLR of the right node in the layer where the left leaf node is located is x-1.
15. The communications device of claim 12, wherein, if the remaining copies, stored in the memory, of the LLR of the node in the layer above the left leaf node, other than the one copy used for calculating the PM, are not sufficient for calculating the LLRs corresponding to the path expansion caused by the left leaf node, the processor is further configured to:
splitting the left leaf node of the current traversal layer into two child nodes, wherein the two child nodes obtained by the splitting are leaf nodes, and continuing the traversal at the layer where the child nodes are located; or
increasing the size of the memory configured for the subtree and traversing the subtree layer by layer again.
16. The communications device of claim 15, wherein the processor is further configured to:
splitting again the nodes of the binary tree that have not yet been traversed, so as to obtain a plurality of subtrees.
17. The communication device of any one of claims 10-16, wherein the f operation (taking the minimum of absolute values) used by the processor in traversing the subtree is:
f(a,b)=sign(a)sign(b)min(|a|,|b|);
or,
f(a,b)≈[sign(a)sign(b)min(|a|,|b|)];
or,
[formula given as image FDA0001930117530000051 in the original publication; not reproduced in the text]
wherein a = [llr_0, llr_1, ..., llr_{N/2-1}]; b = [llr_{N/2}, llr_{N/2+1}, ..., llr_{N-1}]; N represents the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_{N-1}].
18. The communications device of any one of claims 10-16, wherein the g operation (LLR summation) used by the processor in traversing the subtree is:
[formula given as image FDA0001930117530000052 in the original publication; not reproduced in the text]
or,
[formula given as image FDA0001930117530000053 in the original publication; not reproduced in the text]
wherein a = [llr_0, llr_1, ..., llr_{N/2-1}]; b = [llr_{N/2}, llr_{N/2+1}, ..., llr_{N-1}]; N represents the length of the LLR sequence of the current traversal layer, and the LLR sequence of the current traversal layer is [llr_0, llr_1, ..., llr_{N-1}]; L is the size of the memory of the subtree; idx is the index of the g operation;
[the quantity given as image FDA0001930117530000054 in the original publication] is a hard bit sequence.
19. A polar code decoding apparatus, comprising:
a splitting module, configured to split the binary tree describing the decoding process of the polarization code into at least two subtrees, wherein the splitting principle is that each subtree has only one right leaf node;
a traversal decoding module, configured to traverse each subtree in turn, layer by layer, until the right leaf node of the subtree is found, sort path metrics (PMs) at the left leaf node in the same layer as the right leaf node, sort soft values at the right leaf node so as to calculate the PM values corresponding to the paths, and sort the PM values again so as to screen the paths;
wherein the path-screening result of the previous subtree is used as the input of the next subtree.
20. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the polar code decoding method according to any one of claims 1 to 9.
CN201811636287.1A 2018-12-29 2018-12-29 Polarization code decoding method and device and communication equipment Active CN111384978B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811636287.1A CN111384978B (en) 2018-12-29 2018-12-29 Polarization code decoding method and device and communication equipment

Publications (2)

Publication Number Publication Date
CN111384978A true CN111384978A (en) 2020-07-07
CN111384978B CN111384978B (en) 2023-08-01

Family

ID=71221002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811636287.1A Active CN111384978B (en) 2018-12-29 2018-12-29 Polarization code decoding method and device and communication equipment

Country Status (1)

Country Link
CN (1) CN111384978B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170347095A1 (en) * 2016-05-25 2017-11-30 Arris Enterprises Llc Jvet quadtree plus binary tree (qtbt) structure with multiple asymmetrical partitioning
CN107733446A (en) * 2016-08-12 2018-02-23 华为技术有限公司 Interpretation method and equipment, decoder
CN106487479A (en) * 2016-09-27 2017-03-08 清华大学深圳研究生院 A kind of polarization code coding method that is adjudicated based on multidigit
CN106656205A (en) * 2016-09-30 2017-05-10 清华大学深圳研究生院 Polarization code decoding method and system capable of reducing memory consumption

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
冯博文 (FENG Bowen) et al.: "Simplified polar code decoding algorithm based on tree-graph pruning" *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI748739B (en) * 2020-11-10 2021-12-01 國立清華大學 Method and polar code decoder for determining to-be-flipped bit position

Also Published As

Publication number Publication date
CN111384978B (en) 2023-08-01

Similar Documents

Publication Publication Date Title
WO2018028366A1 (en) Decoding method and device, and decoder
Atallah et al. Constructing trees in parallel
US20230004809A1 (en) Method and Device for Model Compression of Neural Network
WO2018219195A1 (en) Decoding method and decoder
US11467905B1 (en) Stripe merging method and system based on erasure codes
US20090217125A1 (en) Low density parity check (ldpc) decoder
CN108833052B (en) Channel polarization decoding path metric value sorting method
CN112737596A (en) Dynamic Huffman coding method, device and equipment based on sorting network
CN112332856A (en) Layer decoding method and device of quasi-cyclic LDPC code
CN106656213A (en) Implementation method for low-complexity polarization code folding hardware framework based on k-segment decomposition
CN115642918A (en) Encoding optimization method, device and equipment of double-prototype-graph LDPC code and storage medium
CN118297144A (en) Logic optimization method, device, computer equipment and storage medium based on AIG replacement with reverse graph
CN110635809A (en) Design method of parallel polarization code BP decoder based on formula language
CN111384978B (en) Polarization code decoding method and device and communication equipment
CN107508775A (en) Interpretation method and device in a kind of Sparse Code multiple access system
Erez et al. Efficient network codes for cyclic networks
EP3829088A1 (en) Decoder, decoding method, and computer storage medium
CN116366538A (en) Path updating and equivalent path planning method and related device under dynamic network
CN113872610B (en) LDPC code neural network training and decoding method and system thereof
Nguyen et al. On graphs with finite-time consensus and their use in gradient tracking
Buhrman et al. Space-efficient routing tables for almost all networks and the incompressibility method
CN113487036B (en) Distributed training method and device of machine learning model, electronic equipment and medium
CN106209114B (en) Interpretation method and device
Vangala et al. Quantization of binary input dmc at optimal mutual information using constrained shortest path problem
KR101976315B1 (en) Method for constructing polar codes on binary symmetric channel and apparatus therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210528

Address after: 100085 1st floor, building 1, yard 5, Shangdi East Road, Haidian District, Beijing

Applicant after: DATANG MOBILE COMMUNICATIONS EQUIPMENT Co.,Ltd.

Address before: 100191 No. 40, Haidian District, Beijing, Xueyuan Road

Applicant before: CHINA ACADEMY OF TELECOMMUNICATIONS TECHNOLOGY

GR01 Patent grant