WO2019201233A1 - Decoding method and apparatus for polar codes - Google Patents

Decoding method and apparatus for polar codes

Info

Publication number
WO2019201233A1
WO2019201233A1 PCT/CN2019/082856 CN2019082856W WO2019201233A1 WO 2019201233 A1 WO2019201233 A1 WO 2019201233A1 CN 2019082856 W CN2019082856 W CN 2019082856W WO 2019201233 A1 WO2019201233 A1 WO 2019201233A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
path
decoding
data structure
bit
Prior art date
Application number
PCT/CN2019/082856
Other languages
English (en)
French (fr)
Inventor
牛凯
管笛
董超
王桂杰
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2019201233A1 publication Critical patent/WO2019201233A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/09Error detection only, e.g. using cyclic redundancy check [CRC] codes or single parity bit
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13Linear codes

Definitions

  • the present application relates to the field of channel decoding, and in particular, to a method and an apparatus for decoding a polar code.
  • Polar codes are a structured channel coding method proposed by E. Arikan in 2009 that is rigorously proven to achieve channel capacity.
  • SCL: successive cancellation list
  • The existing SCL decoding algorithm has high computational complexity when the list width is large, and the delay caused by path sorting is large; further improvement is needed to reduce the computational complexity and delay so as to meet the requirements of modern communication systems. The adaptive successive cancellation list (ADSCL) algorithm was therefore proposed, which can greatly reduce the computational complexity of decoding under high-SNR conditions.
  • ADSCL: adaptive successive cancellation list
  • the present application provides a method for decoding a polar code, which can reduce computational complexity without loss of decoding performance.
  • A method for decoding a polar code is provided, including: acquiring a first bit sequence to be decoded; when a selected first candidate decoding path does not pass the cyclic redundancy check (CRC), reading, from a first data structure and a second data structure, data required for calculating a second candidate decoding path, where the first data structure stores intermediate data required for performing a bit decision on each bit in the first bit sequence, and the second data structure stores location information of some nodes on the decoding tree corresponding to the first bit sequence, path metric values from the root node of the decoding tree to each of those nodes, and the decoding decision results of those nodes, the decoding tree being a full binary tree; calculating the second candidate decoding path on the decoding tree according to the data read from the first data structure and the second data structure; and, when the second candidate decoding path passes the CRC, using the bit estimation sequence corresponding to the second candidate decoding path on the decoding tree as the decoding result of the first bit sequence.
  • By using the first data structure and the second data structure to store all the intermediate data required for decoding, even if the decoding path is recalculated after a decoding failure, a large number of repeated calculations are not needed; under channel conditions of various signal-to-noise ratios (for example, medium-to-high SNR channel conditions or low-SNR channel conditions), the computational complexity can be reduced without loss of decoding performance.
  • The second data structure includes two priority queues; the location information and path metric values of the nodes are stored in the two priority queues, and the path metric values are arranged in ascending order in each priority queue, where a path metric value near the front of the queue is smaller than one near the back of the queue, and the location information of the nodes and their path metric values have a one-to-one mapping relationship.
  • The location information of any node in a priority queue includes the layer at which the node is located on the decoding tree, the expansion order of the node within that layer, and the expansion order of the node's parent node on the decoding tree.
  • Before the CRC is performed on the first candidate decoding path, the method further includes: calculating the first candidate decoding path on the decoding tree with a preset first path search width according to the data stored in the first data structure and the second data structure; and, when the first candidate decoding path does not pass the CRC, calculating the second candidate decoding path on the decoding tree according to the data read from the first data structure and the second data structure includes: calculating the second candidate decoding path on the decoding tree with a second path search width according to the data read from the first data structure and the second data structure, where the second path search width is twice the first path search width and is less than or equal to a preset maximum path search width.
  • Searching for the first candidate decoding path on the decoding tree with the preset first path search width includes: activating the first priority queue of the two priority queues and reading a first node from the first priority queue, where the first node is the head node of the first priority queue; determining, according to the location information of the first node, whether the first node is a leaf node on the decoding tree; and, when the first node is a leaf node on the decoding tree, outputting the bit estimation sequence from the root node of the decoding tree to the first node as the first candidate decoding sequence. The method further includes: when the first candidate decoding path passes the CRC, using the bit estimation sequence corresponding to the first candidate decoding path on the decoding tree as the decoding result of the first bit sequence.
  • Before the second candidate decoding path is searched for on the decoding tree with the second path search width when the first candidate decoding path does not pass the CRC, the method further includes: determining whether the number of visited leaf nodes is greater than the first search width; and, when the number of visited leaf nodes is greater than the first search width and does not exceed the preset maximum path search width, exchanging the activation states of the first priority queue and the second priority queue, where exchanging the activation states includes activating the second priority queue, setting the first priority queue to be inactive, and inserting all unread nodes of the first priority queue into the activated second priority queue according to their path metric values. Calculating the second candidate decoding path on the decoding tree with the second path search width then includes: reading a second node from the activated second priority queue, where the second node is the head node of the second priority queue, and calculating the second candidate decoding path on the decoding tree with the second path search width according to the data stored in the first data structure and the second data structure.
  • When the number of visited leaf nodes is less than or equal to the first path search width, calculating the second candidate decoding path on the decoding tree with the second path search width includes: continuing to read nodes from the first priority queue, and calculating the second candidate decoding path on the decoding tree with the second path search width according to the data stored in the first data structure and the second data structure.
  • When the first node is not a leaf node on the decoding tree, the method further includes: determining, according to the location information of the first node, the layer at which the first node is located on the decoding tree; and determining the storage location of the extended node of the first node in the second data structure according to the relationship among the number of visited nodes of the layer at which the first node is located, the first search width, and the maximum search width.
  • Determining the storage location of the extended node of the first node in the second data structure includes: if the number of visited nodes of the layer at which the first node is located on the decoding tree is less than or equal to the first search width, inserting the extended node into the first priority queue according to the path metric value of the extended node; if the number of visited nodes of that layer is greater than the first search width and less than or equal to the maximum search width, inserting the extended node into the second priority queue according to the path metric value of the extended node; and if the number of visited nodes of that layer is greater than the maximum search width, not storing the extended node.
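  • As a purely illustrative sketch (not the claimed implementation), the three-way storage rule above can be written as follows, where visited is the number of already-visited nodes in the layer of the extended node, L is the first (current) search width, L_max is the preset maximum search width, and queue_1 / queue_2 stand for the two priority queues, modeled here as lists of (path metric, node information) pairs kept in ascending order; all names are hypothetical.

```python
import bisect

def store_extended_node(extended_pm, extended_info, visited, L, L_max, queue_1, queue_2):
    if visited <= L:
        bisect.insort(queue_1, (extended_pm, extended_info))   # first priority queue, by path metric
    elif visited <= L_max:
        bisect.insort(queue_2, (extended_pm, extended_info))   # second priority queue, by path metric
    # otherwise the extended node is not stored (discarded)
```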
  • The method further includes: obtaining, from the first data structure, the intermediate data required for performing a bit decision on each bit in the first bit sequence; determining the decoding decision result of each bit in the first bit sequence according to that intermediate data and according to whether the subchannel carrying each bit in the first bit sequence is an information bit or a frozen bit; and storing the decoding decision result of each bit in the first bit sequence in the first priority queue or the second priority queue, where the decision result of each bit corresponds to the location information and path metric value of the corresponding node in the first or second priority queue.
  • The decoding decision result of each bit in the first bit sequence and the decoding decision result of the corresponding source-side node in the Trellis diagram are equivalent concepts.
  • It should be understood that the decoding decision result of a source-side node is saved at the corresponding position in the priority queue, together with the location information and path metric value of that source-side node.
  • different information is separately stored in the priority queue structure and the Trellis diagram.
  • the information in the priority queue directs the iterative calculation of the decoding in the Trellis diagram, and the decoded decision result is returned to the priority queue for preservation.
  • The intermediate data stored in the first data structure includes the decoding intermediate log likelihood ratio and the hard decision value of each node on all extended paths, as well as the extended path to which the decoding intermediate log likelihood ratio and hard decision value of each node belong.
  • a decoding apparatus for performing the method of the first aspect or any possible implementation of the first aspect.
  • the decoding device comprises means for performing the method of the first aspect or any of the possible implementations of the first aspect.
  • the above described functions of the decoding device may be implemented in part or in whole by software.
  • When all of the functions are implemented by software, the decoding device 600 may include a memory and a processor.
  • the memory is used to store a computer program, and the processor reads and runs the computer program from the memory to implement the decoding method of the polarization code of the present application.
  • When part or all of the decoding device 600 is implemented by software, the decoding device 600 includes a processor.
  • a memory for storing a computer program is located outside of the decoding device 600, and the processor is coupled to the memory through a circuit/wire for reading and executing a computer program stored in the memory.
  • the decoding apparatus includes: an input interface circuit for acquiring a first bit sequence to be decoded; a logic circuit for performing the decoding method in the above embodiment; and an output interface circuit for outputting The result of the decoding.
  • the decoding device may be a chip or a decoder.
  • the above memory and processor may be physically separate units or may be integrated.
  • the present application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method in the first aspect or any possible implementation of the first aspect.
  • the present application provides a chip (or chip system) including a memory and a processor, the memory being configured to store a computer program and the processor being configured to call and run the computer program from the memory, so that a communication device on which the chip is installed performs the method in the first aspect or any one of its possible implementations.
  • the communication device here can be a decoding end.
  • the decoding end may be a terminal device or a network device (refer to the specification) suitable for use in the communication system of the embodiment of the present application.
  • the present application provides a computer program product including computer program code which, when run on a computer, causes the computer to perform the method in the first aspect or any one of its possible implementations.
  • In the technical solution of the embodiments of this application, the first data structure and the second data structure are used to store all the intermediate data required for decoding, so that even when the decoding path is recalculated after a decoding failure, a large number of repeated calculations are not needed; under channel conditions of various signal-to-noise ratios (for example, medium-to-high SNR channel conditions or low-SNR channel conditions), the computational complexity can be reduced without loss of decoding performance.
  • FIG. 1 is a wireless communication system suitable for use in an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a decoding tree.
  • Figure 3 is a flow chart of the ADSCL decoding algorithm.
  • FIG. 4 is a flow chart of a decoding algorithm of a polarization code in an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a Trellis diagram.
  • FIG. 6 is a schematic structural diagram of a priority queue.
  • FIG. 7 is an overall flowchart of the ADPSCL decoding algorithm of the present application.
  • FIG. 8 is a schematic diagram of exchanging the activation states of two priority queues.
  • FIG. 10 is a schematic diagram of the interaction between a priority queue and the Trellis diagram.
  • FIG. 11 is a schematic diagram of the storage process of a priority queue.
  • FIG. 18 is a schematic block diagram of a decoding apparatus 600 according to an embodiment of the present application.
  • FIG. 19 is a schematic block diagram of a decoder 700 of an embodiment of the present application.
  • FIG. 1 is a wireless communication system suitable for use in an embodiment of the present application.
  • the wireless communication system can include at least one network device 101 in communication with one or more terminal devices (e.g., terminal device 102 and terminal device 103 shown in FIG. 1).
  • When the network device sends a signal, it is the encoding end; when the network device receives a signal, it is the decoding end.
  • The same applies to the terminal device: when the terminal device sends a signal, it is the encoding end, and when the terminal device receives a signal, it is the decoding end.
  • A terminal device may also be called a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or user equipment (UE), and may be a cellular telephone, a cordless telephone, a session initiation protocol (SIP) telephone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with a wireless communication function, a computing device, or another processing device connected to a wireless modem.
  • The network device in the communication system may be a base transceiver station (BTS) in a global system for mobile communications (GSM) or code division multiple access (CDMA) system, a NodeB (NB) in a wideband code division multiple access (WCDMA) system, an eNB or evolved NodeB (eNodeB) in long term evolution (LTE), a relay station or an access point, or a base station device in a future 5G network.
  • According to the encoding principle of polar codes, the construction of a polar code is essentially the problem of selecting polarized channels.
  • The selection of polarized channels is actually based on optimizing SC decoding performance.
  • However, the polarized channels are not independent of each other but have a dependency relationship: a polarized channel with a larger channel index depends on all other polarized channels with smaller indices.
  • Based on this dependency between polarized channels, when the SC decoding algorithm makes a decoding decision (or bit decision) on each bit, it assumes that the results of the decoding decisions of all previous steps are correct.
  • Moreover, it is under this decoding algorithm that polar codes have been proven to achieve channel capacity.
  • FIG. 2 is a schematic structural diagram of a decoding tree.
  • Layer V0 of the decoding tree contains only the root node.
  • Each node on the decoding tree T is connected to its successor nodes by two edges labeled 0 and 1, respectively.
  • The sequence corresponding to a node is defined as the sequence of edge labels that must be traversed from the root node to reach that node.
  • The path formed from the root node to any node corresponds to a path metric (PM).
  • PM path metric
  • To improve reliability, the SCL decoding algorithm has been proposed.
  • In the SCL decoding algorithm, the number of candidate paths allowed to be reserved at each layer of the decoding tree is increased: instead of keeping only the single best path for the next expansion, as in the SC decoding algorithm, at most L best paths are selected and kept for the next expansion. The number of candidate paths each layer is allowed to reserve is called the search width L, where L ≥ 1 and L is an integer.
  • When the SCL algorithm performs decoding, it still starts from the root node of the decoding tree and searches paths layer by layer towards the leaf nodes. Unlike SC, after the path expansion of each layer is completed, the L paths with the smallest PM are selected as candidate paths and saved in a list, waiting for the expansion of the next layer.
  • the SC decoding algorithm is depth-first, and it is required to quickly reach the leaf node from the root node.
  • the SCL decoding algorithm is breadth-first, first expanding, then pruning, and finally reaching the leaf node.
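  • As a minimal sketch (under the assumption that candidates is a list of (path metric, path) pairs produced by expanding one layer), the breadth-first "expand, then prune" step of SCL described above keeps only the L candidates with the smallest path metrics for the next layer:

```python
def scl_prune(candidates, L):
    candidates.sort(key=lambda c: c[0])   # sort by path metric, ascending
    return candidates[:L]                 # the L best paths wait for the next layer's expansion
```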
  • ADSCL: adaptive successive cancellation list
  • FIG. 3 is a flow chart of the ADSCL decoding algorithm.
  • the ADSCL algorithm is supplemented by a cyclic redundancy check (CRC).
  • CRC cyclic redundancy check
  • If the CRC fails, the path list width is doubled and decoding is performed again, until decoding succeeds or the path list width exceeds the maximum path list width, in which case decoding fails.
  • L max in FIG. 3 denotes the preset maximum path list width, and L denotes the current path list width.
  • The current path list width is compared with the maximum path list width: if L ≥ L max, decoding ends; otherwise, the current path list width is doubled and the procedure returns to step (2).
  • Under high-SNR channel conditions, the ADSCL decoding algorithm is likely to decode successfully with a very small path list width, which can greatly reduce the computational complexity of SCL decoding.
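  • For illustration only, the ADSCL outer loop described above (the baseline algorithm, not the improvement proposed in this application) can be sketched as follows; scl_decode and crc_check are assumed helper functions, and L_max is the preset maximum path list width:

```python
def adscl_decode(llrs, L_max, scl_decode, crc_check):
    L = 1
    while True:
        best_path = scl_decode(llrs, L)    # SCL decoding with the current path list width L
        if crc_check(best_path):
            return best_path               # CRC passed: decoding succeeds
        if L >= L_max:
            return None                    # maximum width reached: decoding fails
        L *= 2                             # double the path list width and decode again from scratch
```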
  • If the CRC passes, re-decoding is not needed.
  • Otherwise, each time re-decoding is performed, the intermediate information and/or data of the paths that have already been calculated are calculated again, so that a large number of repeated calculations occur.
  • In the worst case, the computational complexity may even exceed that of the SCL decoding algorithm.
  • In other words, the existing ADSCL decoding algorithm reduces complexity only under high-SNR channel conditions, where the current path list width can decode successfully.
  • Under poor channel conditions, the ADSCL decoding algorithm requires hardware resources equivalent to the maximum path list width, and its computational complexity and delay may even exceed those of the traditional SCL decoding algorithm.
  • In view of this, the present application proposes a decoding algorithm for polar codes, which avoids the large number of repeated calculations incurred when the ADSCL algorithm recalculates the decoding path, and thus reduces computational complexity.
  • The ADPSCL decoding algorithm proposed below can be executed by the decoding end. For example, when a terminal device communicates with a network device, the terminal device, acting as the decoding end, needs to decode the received sequence to be decoded.
  • FIG. 4 is a flow chart of a decoding algorithm of a polarization code in an embodiment of the present application.
  • the first data structure stores intermediate data required for performing bit decision on each bit in the first bit sequence.
  • the second data structure stores location information of a part of nodes on the coding tree corresponding to the first bit sequence, a path metric value of the root node on the coding tree to the part of the node, and a decoding decision result of the part of the node.
  • the first candidate decoding path mentioned herein may be any candidate decoding path calculated by the decoding end in the process of decoding the first bit sequence.
  • the process of calculating an arbitrary candidate decoding path can be referred to the detailed flow of the ADPSCL decoding algorithm introduced below.
  • the first data structure and the second data structure are respectively described in detail below.
  • The first data structure can be a Trellis diagram; the composition of the Trellis diagram is described below in conjunction with FIG. 5.
  • FIG. 5 is a schematic structural diagram of a Trellis diagram.
  • the basic building blocks of the Trellis diagram are the Trellis nodes (such as nodes 1, 2, ..., 9 in Figure 5).
  • LLR: log likelihood ratio
  • the calculation of the LLR value is a recursive process.
  • the leftmost column in the Trellis diagram is defined as a source layer, and the nodes of the source layer are referred to as source side nodes.
  • In this example, the third and fourth bits are frozen bits and the rest are information bits.
  • To obtain the estimated value of the first bit decision, the LLR value of node 1 needs to be known.
  • the LLR value of node 1 is calculated based on the LLR values of node 5 and node 7.
  • the LLR value of node 5 is calculated by the LLR values of node 9 and node 10.
  • the LLR value of node 7 is calculated by the LLR values of node 11 and node 12.
  • After the iterative calculation is completed, the decoding decision result of the source-side node (a node in the first column in FIG. 5) can be obtained, and this decision result is returned to the priority queue for storage.
  • The decoding decision result of a source-side node corresponds to the location information and metric value recorded for the corresponding node in the priority queue.
  • the calculation of the LLR value can be divided into two cases according to the parity of the value of i.
  • the calculation is performed using the F function when i is an odd number, and the calculation is performed using the G function when i is an even number.
  • the F function and the G function are well-known concepts in the iterative calculation of LLR values in the SC decoding algorithm, and are not described in detail herein.
  • the partial sum value is the result of the operation of the G function.
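  • The F and G functions are referred to above but not written out; the forms below are the ones commonly used in LLR-based SC decoding (with F in its min-sum approximation) and are given only for illustration, where a and b are intermediate LLR values and u is the partial-sum bit used by the G function:

```python
import math

def f_function(a, b):
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def g_function(a, b, u):
    return b + (1 - 2 * u) * a   # the partial-sum bit u selects the sign of a
```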
  • The intermediate data recorded by each Trellis node in the Trellis diagram includes the intermediate values and hard decision values at the corresponding positions of all extended paths, together with the extended paths to which those intermediate values and hard decision values belong.
  • the intermediate value mentioned here is the LLR value in the iterative process described above.
  • each node on the non-source side records an intermediate value information group, and each intermediate value information group includes an intermediate decoded log likelihood ratio (ie, an LLR value) of a corresponding position of all the extended paths.
  • the hard decision value can also be derived from the LLR value. Therefore, in practical applications, the intermediate value information group may only include the LLR value and the extended path corresponding to the LLR values, and details are not described herein.
  • Each Trellis node records at most L sets of intermediate value information, requiring a total space complexity of O(L·N·log2N).
  • When re-decoding is required, the needed LLR values and partial sum values have already been recorded in the Trellis diagram and can be read directly without recalculation.
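  • A minimal sketch (with hypothetical names) of the per-node record described above: each Trellis node keeps, for every extended path, the intermediate LLR and the hard-decision value, so that re-decoding can read them back instead of recomputing them:

```python
class TrellisNode:
    def __init__(self):
        self.per_path = {}                 # path index -> (intermediate LLR, hard-decision value)

    def record(self, path_index, llr, hard_value):
        self.per_path[path_index] = (llr, hard_value)

    def read(self, path_index):
        return self.per_path[path_index]   # read directly, no recalculation needed
```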
  • the second data structure will be described below with reference to FIG.
  • The second data structure includes two priority queues. If the length of the first bit sequence to be decoded is denoted N and the path list width is denoted L, each priority queue records at most N·L nodes' location information and path metric values, where the location information and the path metric value of each node are in one-to-one correspondence.
  • Note that the location information and path metric values of all nodes on the decoding tree are not stored in a priority queue. If the current path search width is set to L, each layer of the decoding tree selects only L nodes to expand to the next layer; therefore, for a bit sequence of length N, the location information and path metric values of at most N·L nodes are stored in each priority queue.
  • The process of storing path metric values in a priority queue can be seen in Table 1 below.
  • The path metric value of a node referred to herein means the path metric value from the root node of the decoding tree to that node.
  • FIG. 6 is a schematic structural diagram of a priority queue.
  • In each priority queue, the path metric values are arranged in ascending order; that is, a path metric value near the front of the queue is smaller than one near the back of the queue, or, from the head of the queue to its tail, the path metric value of each node is smaller than that of the node after it.
  • The location information of a node includes the layer at which the node is located on the decoding tree, the expansion order of the node within that layer, and the expansion order of the node's parent node on the decoding tree.
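  • A minimal sketch (not the claimed implementation; the field layout is assumed) of such a priority queue: entries are kept in ascending order of path metric, and each entry carries the location information triple (layer, expansion order within the layer, parent's expansion order) together with the decoding decision result, if any:

```python
import bisect

class PriorityQueue:
    def __init__(self):
        self._entries = []                        # list of (pm, location_info, decision)

    def insert(self, pm, location_info, decision=None):
        keys = [entry[0] for entry in self._entries]
        idx = bisect.bisect_right(keys, pm)
        self._entries.insert(idx, (pm, location_info, decision))   # keeps ascending PM order

    def pop_front(self):
        return self._entries.pop(0)               # the head node has the smallest path metric

    def __len__(self):
        return len(self._entries)
```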
  • the decoding tree is a full binary tree.
  • the concept of the parent node and the leaf node is a well-known concept in the data structure of the binary tree of the computer, which is not described in the embodiment of the present application.
  • For example, node A is located at the second layer of the decoding tree, and the parent node of node A is denoted node B (it can also be said that node A is a successor node of node B). The other successor node of node B is denoted node C, and node D is another node on the same layer as node B. Assume that, during path expansion, when layer 1 is expanded to layer 2, node B is expanded first and node D second, and when layer 2 is expanded to layer 3, node C is expanded first and node A second. Then the location information of node A includes the information that node A is at the second layer of the decoding tree, that the expansion order of node A within the second layer is 2, and that the expansion order of node A's parent node within the first layer is 1.
  • The second data structure requires a total space complexity of O(N·L).
  • the ADPSCL decoding algorithm proposed in the present application relates to the interaction of the first data structure and the second data structure, which will be described in detail below with reference to the embodiments.
  • Since the first data structure stores the intermediate data required for the bit decision of each bit in the bit sequence to be decoded, and the second data structure stores the location information and metric values of some nodes on the decoding tree, if a selected candidate decoding path does not pass the CRC during decoding of the first bit sequence and the decoding path needs to be recalculated, the required data can be read directly from the first data structure and the second data structure, avoiding the computation-intensive and time-consuming repeated calculations of the prior art.
  • a second candidate decoding path is calculated based on data read from the first data structure and the second data structure.
  • A path selected before the CRC is referred to as a candidate decoding path; only after a candidate decoding path passes the CRC, indicating that decoding has succeeded, is that candidate decoding path determined to be the decoding path.
  • If the second candidate decoding path passes the CRC, the bit estimation sequence corresponding to the second candidate decoding path on the decoding tree is used as the decoding result of the first bit sequence.
  • The bit estimation sequence refers to the output estimation result of the first bit sequence after the bit decision of every bit in the first bit sequence to be decoded has been completed.
  • It is easy to understand that this estimation result is itself a bit sequence, and it is therefore called a bit estimation sequence.
  • the bit estimation sequence corresponding to the root node to the node F is [0 0 1 1].
  • In step 340, if the calculated second candidate decoding path passes the CRC, decoding has succeeded, and the bit estimation sequence corresponding to the second candidate decoding path on the decoding tree is output as the decoding result of the first bit sequence to be decoded. If the second candidate decoding path does not pass the CRC, it means that the second candidate decoding path calculated after the failure of the first candidate decoding path has also failed; in this case, the decoding end needs to recalculate a candidate decoding path, and the calculation process is the same as that of the second candidate decoding path.
  • the bit estimation sequence [0 0 1 1] in step 340 is output, that is, the decoding result of the first bit sequence.
  • the technical solution of the embodiment of the present application by using the first data structure and the second data structure to store the intermediate data required for decoding, does not require a large number of repeated calculations even if the decoding path is recalculated in the case of one decoding failure.
  • the computational complexity can be reduced without loss of decoding performance.
  • FIG. 7 is an overall flowchart of an ADPSCL decoding algorithm of the present application.
  • For ease of description, the activated priority queue is referred to below as the first priority queue, and the priority queue that is not activated is referred to as the second priority queue.
  • ADPSCL decoding is the adaptive priority successive cancellation list (ADPSCL) decoding algorithm proposed in this application.
  • the ADPSCL decoding algorithm will be described in detail below.
  • step 403 a candidate decoding path is determined.
  • the candidate decoding path passes the CRC, it indicates that the decoding is successful when the path search width is L, the decoding result is output, and the decoding is ended.
  • steps 405, 406 are performed.
  • For example, if the current path search width is 1, the path search width L is set to 2 at this point; similarly, if the current path search width is 4, it is set to 8.
  • Exchanging the activation states of the two priority queues means that the activated first priority queue is set to be inactive and the inactive second priority queue is activated. While the activation states are exchanged, the metric information of the unread nodes in the first priority queue needs to be inserted, in order, into the second priority queue.
  • FIG. 8 is a schematic diagram of exchanging the activation states of two priority queues.
  • Since the path metric values in both the first priority queue and the second priority queue are in ascending order, after the path metric values from the first priority queue are inserted in order into the second priority queue, the path metric values stored in the second priority queue are still in ascending order; likewise, the path metric values of the remaining nodes in the first priority queue remain in ascending order.
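  • The activation-state exchange described above can be sketched as follows (illustrative only, reusing the PriorityQueue sketch given earlier): every node not yet read from the previously active queue is inserted into the newly activated queue by its path metric value, so both queues remain in ascending order:

```python
def exchange_activation(active_queue, inactive_queue):
    while len(active_queue) > 0:
        pm, location_info, decision = active_queue.pop_front()
        inactive_queue.insert(pm, location_info, decision)
    return inactive_queue, active_queue    # (newly activated queue, now-inactive queue)
```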
  • It should be noted that there is no fixed order between step 405 and step 406; the two processes of "doubling the current path search width" and "exchanging the activation states of the two priority queues" are numbered separately only for ease of description, and they may also be combined into one step in the flowchart, which is not limited here.
  • After step 406, the process returns to step 402 to re-determine the relationship between the current path search width L and the maximum path search width L max.
  • the subsequent process is the same as the above steps 402-406, and details are not described herein again.
  • FIG. 9 is a detailed flowchart of the ADPSCL decoding algorithm of the present application.
  • For ease of description, the activated priority queue is referred to below as the first priority queue, the priority queue that is not activated is referred to as the second priority queue, and the head node of the first priority queue is referred to as the first node.
  • step 505 is performed.
  • the candidate decoding path can be directly obtained and CRC can be performed thereon.
  • the decoder performs steps 507-508.
  • step 506 if the candidate decoding path does not pass the CRC, the decoding fails.
  • the decoder performs step 509 and subsequent steps.
  • If the number of visited leaf nodes exceeds the current path search width, step 510 and subsequent steps are performed.
  • It should be noted that when the process returns from step 511 to step 503, the activated priority queue referred to in step 503 is the second priority queue, because in step 510 the activation states of the first priority queue and the second priority queue are exchanged. Accordingly, when the process returns from step 511 to step 503, the head node of the second priority queue (hereinafter referred to as the second node) is read.
  • In step 504, if the first node is not a leaf node on the decoding tree, step 512 and subsequent steps are performed.
  • When the node of the next source layer is calculated, path expansion is performed: if the node corresponds to an information bit, it expands into two nodes, and if it corresponds to a frozen bit, it expands into one node (the nodes produced by expansion are referred to below as extended nodes).
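  • A sketch of this expansion step is shown below, using one common LLR-based path-metric update; the exact metric formula is not spelled out in the text, so that part is an assumption, as is the convention that frozen bits are set to 0. An information bit expands into two extended nodes (u = 0 and u = 1), while a frozen bit expands into a single extended node:

```python
def expand_node(parent_pm, llr, is_frozen_bit):
    def child_pm(u):
        hard_decision = 0 if llr >= 0 else 1
        return parent_pm if u == hard_decision else parent_pm + abs(llr)   # penalize the unlikely branch
    if is_frozen_bit:
        return [(0, child_pm(0))]                      # one extended node
    return [(0, child_pm(0)), (1, child_pm(1))]        # two extended nodes
```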
  • step 512 the interaction between the priority queue and the Trellis diagram is involved.
  • the interaction between the priority queue and the Trellis diagram is described below in conjunction with FIG.
  • FIG. 10 is a schematic diagram of interaction between a priority queue and a Trellis diagram.
  • Specifically, the head node of the activated priority queue is always read first; then, according to the location information of that node stored in the priority queue, the corresponding intermediate LLR values are iteratively calculated and the partial sum values are updated. This iterative process yields one extended node (for a frozen bit) or two extended nodes (for an information bit).
  • The number of visited nodes of the layer at which the extended node is located on the decoding tree (denoted Z below) is counted, and according to the relationship among this number, the current path search width L, and the preset maximum path search width L max, it is determined whether the extended node is stored in a priority queue or discarded.
  • the activated priority queue is the priority queue 1 as shown in FIG.
  • the priority queue stores location information of some nodes on the coding tree and path metrics of the root node to the partial nodes. Therefore, storing the extended nodes in the priority queue 1 in order refers to inserting the extension into the priority queue 1 according to the size of the path metric of the extended node, and also recording the location information of the extended node.
  • the priority queue that is not activated is the priority level queue 2 as shown in FIG.
  • In other words, decoding is carried out through the interaction of two data structures: the two priority queues and the Trellis diagram.
  • In this way, the computational complexity is greatly reduced without loss of decoding performance, and under high-SNR channel conditions the delay and computational complexity are close to those of the traditional SC algorithm.
  • Because the same metric calculation method as the SCL algorithm is adopted, when the signal-to-noise ratio is high it is easier to read the priority-queue nodes corresponding to the correct path and to keep expanding them, thereby greatly reducing the computational complexity of the ADPSCL algorithm.
  • In addition, the ADPSCL algorithm almost always expands the decoding in the correct direction, so the delay and computational complexity of the SC algorithm can be approached; that is, the computational complexity of the ADPSCL algorithm does not exceed that of the SCL algorithm even at low SNR. This is different from the ADSCL algorithm, because the ADPSCL algorithm avoids repeated calculations. Moreover, the ADPSCL decoding algorithm can guarantee the performance of the traditional ADSCL decoding algorithm. Furthermore, the storage structure of the ADPSCL Trellis diagram and priority queues ensures that it can be realized with a space complexity of O(L·N·log2N), so the ADPSCL algorithm is implementable. The data in the Trellis diagram only needs to be read and calculated, and there is no operation such as the path copying of the traditional SCL or ADSCL algorithms, so the maintenance overhead of the ADPSCL algorithm is also small.
  • Figure 11 is a schematic diagram of a stored procedure in a priority queue.
  • the circle in Figure 11 represents the node, and the number in the circle represents the path metric corresponding to this node.
  • The storage process can be as shown in Table 1 below.
  • the layer where the extended node is located refers to the layer where the extended node is located on the decoding tree.
  • the number of visited nodes at the layer where the extended node is located is denoted as Z below.
  • the decoder performs step 514.
  • the decoder performs step 515.
  • step 516 is performed.
  • step 517 is performed.
  • the bit decision result of the node can be obtained according to the intermediate data recorded in the Trellis diagram.
  • The head node is read from the activated priority queue again and again and bit decisions are made until an entire candidate decoding path has been selected, at which point the bit decision result of every bit in the first bit sequence to be decoded is completely known.
  • Aiming at the problems that the traditional SCL algorithm has high computational complexity and large delay, and that the ADSCL algorithm performs many repeated calculations so that its computational complexity is even higher than that of the traditional SCL algorithm under low-SNR channel conditions, the technical solution of this application proposes an adaptive priority successive cancellation list (ADPSCL) algorithm.
  • Based on the traditional LLR-based SCL algorithm with search width L, a depth-first search avoids continuing along impossible paths and saves unnecessary computation; combined with the ADSCL algorithm, decoding starts from a very small search width and continues until decoding succeeds or a preset maximum path search width is exceeded. At the same time, two data structures, the two priority queues and the Trellis diagram, are used to store the intermediate data of the decoding process, so that even if decoding fails and the decoding path has to be recalculated, a large number of repeated calculations are not needed, which reduces the computational complexity.
  • A comparison of the decoding performance between the decoding method of this application (referred to as the ADPSCL decoding algorithm) and the conventional SCL decoding algorithm is given below.
  • FIGS. 12 to 15 compare the computational complexity of the ADPSCL algorithm, the PSCL algorithm, the traditional ADSCL algorithm, and the SCL algorithm when the code length is 256, 512, 1024, and 2048, respectively.
  • the abscissa is a different SNR condition (in decibels, ie dB), and the ordinate reflects the average complexity of the algorithm by the number of multiply-add operations.
  • the higher the signal-to-noise ratio the lower the complexity of the ADPSCL decoding algorithm and the more obvious the advantage.
  • the computational complexity of the ADPSCL algorithm approaches the SCL algorithm due to the avoidance of double counting at low SNR.
  • In contrast, the traditional ADSCL algorithm requires repeated calculations, and its complexity is even higher than that of the traditional SCL algorithm under low-SNR conditions.
  • the computational complexity of the ADPSCL algorithm approaches the SC algorithm at high SNR.
  • the computational complexity of the ADPSCL algorithm is not higher than the traditional ADSCL algorithm at any signal-to-noise ratio.
  • Under the same search width, the shorter the code length, the greater the reduction in complexity.
  • the computational complexity of the ADPSCL algorithm can be reduced by at least 20% compared to the traditional SCL algorithm.
  • the computational complexity of the ADPSCL algorithm can be reduced by more than 50%.
  • the computational complexity of the ADPSCL algorithm can be reduced by more than 90%.
  • the ADPSCL algorithm has lower computational complexity under each SNR. This is especially noticeable at low signal to noise ratios.
  • the ADPSCL algorithm can significantly reduce the computational complexity, and the decoding performance is not lost. Therefore, the ADPSCL algorithm is an efficient and improved algorithm based on the traditional SCL and ADSCL algorithms.
  • FIG. 16 and FIG. 17 are comparisons of decoding performance at the same complexity.
  • the left-to-right circles respectively indicate the bit error rate of the SCL algorithm when the path list width is 16, 8, 4 and 2, respectively, under the signal-to-noise ratio corresponding to the circle.
  • the ADPSCL algorithm has a significant performance advantage over the SCL algorithm under the same complexity, and this advantage will increase as the signal-to-noise ratio increases and the code length increases.
  • the PSCL algorithm and the SCL algorithm have the same performance without any loss in the case where the search width of the PSCL is the same as the list size of the SCL.
  • the maximum path list search width of ADPSCL is the same as the maximum path list search width of ADSCL, and the performance is completely consistent without any loss.
  • The complexity of ADPSCL is always lower than that of ADSCL. Since the computational complexity of both the ADPSCL algorithm and the ADSCL algorithm decreases as the signal-to-noise ratio increases, and the decoding performance of the two algorithms is always the same at the same SNR, the complexity of ADPSCL is always lower than that of the ADSCL algorithm and the two complexity curves do not intersect. Therefore, FIG. 16 and FIG. 17 illustrate the performance advantage of the ADPSCL algorithm by comparing the performance of ADPSCL with that of the traditional SCL algorithm at the same complexity.
  • FIG. 18 is a schematic block diagram of a decoding apparatus 600 according to an embodiment of the present application.
  • the decoding device 600 mainly includes a first communication unit 610, a processing unit 620, and a second communication unit 630.
  • a first communication unit 610 configured to acquire a first bit sequence to be decoded
  • The processing unit 620 is configured to: when the selected first candidate decoding path does not pass the cyclic redundancy check (CRC), read, from the first data structure and the second data structure, the data required for calculating the second candidate decoding path, where the first data structure stores the intermediate data required for performing a bit decision on each bit in the first bit sequence, and the second data structure stores the location information of some nodes on the decoding tree corresponding to the first bit sequence, the path metric values from the root node of the decoding tree to each of those nodes, and the decoding decision results of those nodes, the decoding tree being a full binary tree; calculate the second candidate decoding path on the decoding tree according to the data read from the first data structure and the second data structure; and, when the second candidate decoding path passes the CRC, use the bit estimation sequence corresponding to the second candidate decoding path on the decoding tree as the decoding result of the first bit sequence;
  • the second communication unit 630 is configured to output the decoding result.
  • the first communication unit 610 and the second communication unit 630 may be different or may be the same communication unit.
  • the above described functions of the decoding device 600 may be implemented in part or in whole by software.
  • When all of the functions are implemented by software, the decoding device 600 may include a memory and a processor.
  • the memory is used to store a computer program, and the processor reads and runs the computer program from the memory to implement the decoding method of the polarization code of the present application.
  • When part or all of the decoding device 600 is implemented by software, the decoding device 600 includes a processor.
  • a memory for storing a computer program is located outside of the decoding device 600, and the processor is coupled to the memory through a circuit/wire for reading and executing a computer program stored in the memory.
  • the decoding apparatus 600 when part or all of the above functions of the decoding apparatus 600 are implemented by hardware, the decoding apparatus 600 includes: an input interface circuit for acquiring a first bit sequence to be decoded; and a logic circuit for The decoding method in the above embodiment is executed; the output interface circuit is configured to output a decoding result.
  • the decoding device may be a chip or an integrated circuit.
  • decoding device 600 can be a decoder or chip.
  • FIG. 19 is a schematic structural diagram of a decoder 700 according to an embodiment of the present application.
  • decoder 700 includes one or more processors 701, one or more memories 702, and one or more communication interfaces 703.
  • the communication interface 703 is configured to acquire a first bit sequence to be decoded
  • the memory 702 is used to store a computer program
  • The processor 701 is configured to call and run the computer program from the memory 702, so that the decoder 700 performs the decoding method of the embodiments of this application and completes decoding of the first bit sequence.
  • the communication interface 703 is further configured to output a decoding result of the first bit sequence.
  • the communication interface that receives the first bit sequence to be decoded may be different from the communication interface that outputs the decoding result.
  • the decoding device 600 shown in Fig. 18 can be implemented by the decoder 700 shown in Fig. 19.
  • the first communication unit 610 and the second communication unit 630 may be implemented by the communication interface 703 in FIG. 19, and the processing unit 620 may be implemented by the processor 701 or the like.
  • the memory and processor may be integrated together or physically separate from each other.
  • the present application provides a computer readable storage medium having stored therein computer instructions that, when executed on a computer, cause the computer to perform a corresponding flow in the decoding method of the embodiments of the present application .
  • the present application also provides a computer program product comprising computer program code for causing a computer to execute a corresponding flow in a decoding method of an embodiment of the present application when the computer program code is run on a computer.
  • the present application also provides a chip (or chip system) including a memory and a processor for storing a computer program, the processor for calling and running the computer program from the memory, such that the communication device on which the chip is mounted performs Corresponding flow in the decoding method of the embodiment of the present application.
  • the present application also provides a communication device including the above described decoder 700.
  • The processor may be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, a microprocessor, or one or more integrated circuits for controlling the execution of the programs of this application.
  • the processor can include a digital signal processor device, a microprocessor device, an analog to digital converter, a digital to analog converter, and the like.
  • the processor can distribute the control and signal processing functions of the mobile device among the devices according to their respective functions.
  • the processor can include functionality to operate one or more software programs, which can be stored in memory.
  • the functions of the processor may be implemented by hardware or by software executing corresponding software.
  • the hardware or software includes one or more units corresponding to the functions described above.
  • The memory may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions; it may also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • EEPROM electrically erasable programmable read-only memory
  • CD-ROM compact disc read-only memory
  • disc storage including a compact disc, a laser disc, a compact disc, a digital versatile disc, a Blu-ray disc, etc.
  • the functions may be stored in a computer readable storage medium if implemented in the form of a software functional unit and sold or used as a standalone product.
  • Based on such an understanding, the part of the technical solution of this application that is essential or that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of this application.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Error Detection And Correction (AREA)

Abstract

A decoding method for a polar code, the method including: acquiring a first bit sequence to be decoded (310); when a selected first candidate decoding path does not pass the CRC, reading, from a first data structure and a second data structure, the data required for calculating a second candidate decoding path, where the first data structure stores intermediate data required for performing a bit decision on each bit in the first bit sequence, and the second data structure stores location information of some nodes on the decoding tree corresponding to the first bit sequence, their path metric values, and the decoding decision results of those nodes (320), the decoding tree being a full binary tree; calculating the second candidate decoding path on the decoding tree according to the data read from the first data structure and the second data structure (330); when the second candidate decoding path passes the CRC, using the bit estimation sequence corresponding to the second candidate decoding path on the decoding tree as the decoding result of the first bit sequence (340); and outputting the decoding result (350).

Description

Decoding Method and Apparatus for Polar Codes
This application claims priority to Chinese Patent Application No. 201810344057.1, filed with the Chinese Patent Office on April 17, 2018 and entitled "Decoding Method and Apparatus for Polar Codes", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of channel decoding, and in particular, to a decoding method and apparatus for polar codes.
Background
Polar codes are a structured channel coding method proposed by E. Arikan in 2009 that has been rigorously proven to achieve channel capacity. To improve the reliability of polar codes in practical communication systems, the successive cancellation list (SCL) decoding algorithm is usually adopted. The existing SCL decoding algorithm has high computational complexity when the list width is large, and the delay caused by path sorting is large; further improvement is needed to reduce the computational complexity and delay so as to meet the requirements of modern communication systems. The adaptive successive cancellation list (ADSCL) algorithm was therefore proposed, which can greatly reduce the computational complexity of decoding under high-SNR conditions. However, when the ADSCL algorithm performs decoding again after a decoding failure, it carries out a large number of repeated calculations; this is especially obvious under low-SNR channel conditions, and the computational complexity is high.
It can be seen that, although decoding algorithms for polar codes have made considerable progress in recent years, most studies reduce the computational complexity of the original decoding algorithm at the cost of a loss in decoding performance, or outperform the traditional algorithms only under certain conditions and are therefore subject to limitations in practical use. It is therefore particularly important to design a decoding algorithm that keeps the performance loss within an acceptable range while also reducing the computational complexity.
Summary
This application provides a decoding method for a polar code, which can reduce computational complexity without loss of decoding performance.
In a first aspect, a decoding method for a polar code is provided, including: acquiring a first bit sequence to be decoded; when a selected first candidate decoding path does not pass a cyclic redundancy check (CRC), reading, from a first data structure and a second data structure, data required for calculating a second candidate decoding path, where the first data structure stores intermediate data required for performing a bit decision on each bit in the first bit sequence, and the second data structure stores location information of some nodes on a decoding tree corresponding to the first bit sequence, path metric values from the root node of the decoding tree to each of those nodes, and decoding decision results of those nodes, the decoding tree being a full binary tree; calculating the second candidate decoding path on the decoding tree according to the data read from the first data structure and the second data structure; when the second candidate decoding path passes the CRC, using the bit estimation sequence corresponding to the second candidate decoding path on the decoding tree as the decoding result of the first bit sequence; and outputting the decoding result.
In the technical solution of the embodiments of this application, the first data structure and the second data structure are used to store all the intermediate data required for decoding, so that even when the decoding path is recalculated after a decoding failure, a large number of repeated calculations are not needed; under channel conditions of various signal-to-noise ratios (for example, medium-to-high SNR channel conditions or low-SNR channel conditions), the computational complexity can be reduced without loss of decoding performance.
结合第一方面,在第一方面的某些实现方式中,第二数据结构包括两个优先级队列,该部分节点的位置信息和路径度量值信息存储在该两个优先级队列中,该部分节点的路径度量值在每个优选级队列中升序排列,其中,靠近队列前端的路径度量值小于靠近队列后端的路径度量值,该至少一个节点的位置信息与该至少一个路径度量值之间具有一一映射关系。
结合第一方面,在第一方面的某些实现方式中,优先级队列中任意一个节点的位置信息包括该节点在译码树上所处的层、该节点在所处的层的扩展次序和该节点的父节点在译码树上的扩展次序。
结合第一方面,在第一方面的某些实现方式中,在对第一候选译码路径进行CRC之前,该方法还包括:根据第一数据结构和第二数据结构中存储的数据,以预先设置的第一路径搜索宽度在译码树上计算第一候选译码路径;以及,在第一候选译码路径未通过CRC的情况下,根据从第一数据结构和第二数据结构中读取的数据,在译码树上计算第二候选译码路径,包括:根据第一数据结构和第二数据结构中读取的数据,以第二路径搜索宽度在译码树上计算第二候选译码路径,其中,第二路径搜索宽度是第一路径搜索宽度的两倍,且第二路径搜索宽度小于或等于预先设置的最大路径搜索宽度。
结合第一方面,在第一方面的某些实现方式中,根据第一数据结构和第二数据结构中存储的数据,以预先设置的第一路径搜索宽度在译码树上搜索第一候选译码路径,包括:激活该两个优先级队列中的第一优先级队列,并从第一优先级队列中读取第一节点,第一节点为第一优先级队列中的首节点;根据第一节点的位置信息,确定第一节点是否为译码树上的叶子节点;在第一节点为译码树上的叶子节点的情况下,输出译码树的根节点到第一节点之间的比特估计序列,作为第一候选译码序列;以及,该方法还包括:在第一候选译码路径通过CRC的情况下,将第一候选译码路径在译码树上对应的比特估计序列作为第一比特序列的译码结果。
结合第一方面,在第一方面的某些实现方式中,在第一候选译码路径未通过CRC的情况下,根据第一数据结构和第二数据结构中存储的数据,以第二路径搜索宽度在译码树上搜索第二候选译码路径之前,该方法还包括:确定已访问的叶子节点是否大于第一搜索宽度;在已访问的叶子节点大于第一搜索宽度且未超过预先设置的最大路径搜索宽度的情况下,交换第一优先级队列和第二优先级队列的激活状态,其中,交换第一优先级队列和第二优选级队列的激活状态包括将第二优先级队列激活,将第一优先级队列置为非激活,并将第一优先级队列中未读取的全部节点按照路径度量值插入激活的第二优先级队列中;以及,以第二路径搜索宽度在译码树上计算第二候选译码路径,包括:从激活的第二优先级队列中读取第二节点,并根据第一数据结构和第二数据结构中存储的数据,以第二路径搜索宽度在译码树上计算第二候选译码路径,其中,第二节点为第二优先级队列中的首节点。
结合第一方面,在第一方面的某些实现方式中,在已访问的叶子节点小于或等于第一路径搜索宽度的情况下,以第二路径搜索宽度在译码树上计算第二候选译码路径,包括:继续从第一优先级队列中读取节点,并根据第一数据结构和第二数据结构中存储的数据, 以第二路径搜索宽度在译码树上计算第二候选译码路径。
结合第一方面,在第一方面的某些实现方式中,在第一节点非译码树上的叶子节点的情况下,该方法还包括:根据第一节点的位置信息,确定第一节点在译码树上所处的层;根据第一节点在译码树上所处的层的已访问节点、第一搜索宽度和最大搜索宽度之间的大小关系,确定第一节点的扩展节点在第二数据结构中的存储位置。
结合第一方面，在第一方面的某些实现方式中，确定第一节点的扩展节点在第二数据结构中的存储位置，包括：若第一节点在译码树上所处的层的已访问节点数小于或等于第一搜索宽度，则根据扩展节点的路径度量值将扩展节点插入第一优先级队列中；若第一节点在译码树上所处的层的已访问节点数大于第一搜索宽度，且小于或等于最大搜索宽度，则根据扩展节点的路径度量值将扩展节点插入第二优先级队列中；若第一节点在译码树上所处的层的已访问节点数大于最大搜索宽度，则不存储该扩展节点。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:从第一数据结构中获取对第一比特序列中的每个比特进行比特判决所需的中间数据;根据对第一比特序列中的每个比特进行比特判决所需的中间数据,以及承载第一比特序列中每个比特的子信道为信息位或冻结位,确定第一比特序列中每个比特的译码判决结果;将第一比特序列中每个比特的译码判决结果保存在第一优先级队列或第二优先级队列中,其中,每个比特的判决结果与每个比特在第一优先级队列或第二优先级队列中对应节点的位置信息和路径度量值对应。
这里,第一比特序列中每个比特的译码判决结果也即Trellis图中信源侧节点的译码判决结果,两者是等价的概念。
应理解,信源侧节点的译码判决结果是保存在优先级队列中的对应位置,与该信源侧节点的位置信息和路径度量值存储在一起。
在本申请实施例中,通过在优先级队列结构和Trellis图中分别存储不同的信息。优先级队列中的信息指导Trellis图中译码的迭代计算,而将译码判决结果又返回优先级队列中保存。
结合第一方面,在第一方面的某些实现方式中,第一数据结构中存储的中间数据包括所有扩展路径中每个节点的译码中间对数似然比和硬判决值,以及每个节点的译码中间对数似然比和硬判决值所属的扩展路径。
第二方面,提供了一种译码装置,用于执行第一方面或第一方面的任意可能的实现方式中的方法。具体地,该译码装置包括执行第一方面或第一方面的任意可能的实现方式中的方法的单元。
在一个可能的设计中，译码装置的上述功能可以部分或全部通过软件实现。当全部通过软件实现时，译码装置600可以包括存储器和处理器。其中，存储器用于存储计算机程序，处理器从存储器中读取并运行该计算机程序，以实现本申请极化码的译码方法。
在一个可能的设计中,译码装置600的部分或全部通过软件实现时,译码装置600包括处理器。用于存储计算机程序的存储器位于译码装置600之外,处理器通过电路/电线与存储器连接,用于读取并执行所述存储器中存储的计算机程序。
在一个可能的设计中,译码装置600的上述功能的部分或全部通过硬件实现。当全部通过硬件实现时,译码装置包括:输入接口电路,用于获取待译码的第一比特序列;逻辑 电路,用于执行上述实施例中的译码方法;输出接口电路,用于输出译码结果。
可选地,该译码装置可以为芯片或译码器。
可选地,上述存储器和处理器可以是物理上互相独立的单元,或者也可以集成在一起。
第三方面,本申请提供一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当指令在计算机上运行时,使得计算机执行上述第一方面或第一方面的任意可能的实现方式中的方法。
第四方面,本申请提供一种芯片(或者说,芯片系统),包括存储器和处理器,存储器用于存储计算机程序,处理器用于从存储器中调用并运行该计算机程序,使得安装有该芯片的通信设备执行上述第一方面及其任意一种可能的实现方式中的方法。
这里的通信设备可以为译码端。例如,译码端可以是适用于本申请实施例的通信系统中的终端设备或网络设备(可以参见说明书)。
第五方面,本申请提供一种计算机程序产品,该计算机程序产品包括计算机程序代码,当计算机程序代码在计算机上运行时,使得计算机执行上述第一方面及其任意一种可能的实现方式中的方法。
本申请实施例的技术方案,通过采用第一数据结构和第二数据结构存储译码所需的全部中间数据,即使在一次译码失败的情况下重新计算译码路径,也不需要大量的重复计算,在各种信噪比的信道条件(例如,中高信噪比的信道条件或低信噪比的信道条件)下都可以在译码性能不受损失的情况下,降低计算复杂度。
附图说明
图1为适用于本申请实施例的无线通信系统。
图2是译码树的结构示意图。
图3是ADSCL译码算法的流程图。
图4是本申请实施例的极化码的译码算法的流程图。
图5是Trellis图的结构示意图。
图6是优先级队列的结构示意图。
图7是本申请的ADPSCL译码算法的整体流程图。
图8是交换两个优先级队列的激活状态的示意图。
图9是本申请的ADPSCL译码算法的详细流程图。
图10是优先级队列和Trellis图的交互示意图。
图11是一个优先级队列的存储过程示意图。
图12是码长N=256的ADPSCL算法与现有算法的复杂度对比图。
图13是码长N=512的ADPSCL算法与现有算法的复杂度对比图。
图14是码长N=1024的ADPSCL算法与现有算法的复杂度对比图。
图15是码长N=2048的ADPSCL算法与现有算法的复杂度对比图。
图16是码率为0.5,码长N=256,路径搜索宽度L=32,采用长度为8的CRC校验时的ADPSCL算法和传统SCL算法的译码性能对比图。
图17是码率为0.5，码长N=512，路径搜索宽度L=32，采用长度为8的CRC校验时的ADPSCL算法和传统SCL算法的译码性能对比图。
图18是本申请实施例的译码装置600的示意性框图。
图19是本申请实施例的译码器700的示意性框图。
具体实施方式
下面将结合附图,对本申请中的技术方案进行描述。
图1为适用于本申请实施例的无线通信系统。该无线通信系统中可以包括至少一个网络设备101,该网络设备与一个或多个终端设备(例如,图1中所示的终端设备102和终端设备103)进行通信。当网络设备发送信号时,其为编码端,当网络设备接收信号时,其为译码端。终端设备也是一样的,当终端设备发送信号时,其为编码端,当终端设备接收信号时,其为译码端。
终端设备也可以称为用户单元、用户站、移动站、移动台、远方站、远程终端、移动设备、用户终端、终端、无线通信设备、用户代理、用户装置或用户设备(user equipment,UE)、蜂窝电话、无绳电话、会话启动协议(session initiation protocol,SIP)电话、无线本地环路(wireless local loop,WLL)站、个人数字处理(personal digital assistant,PDA)、具有无线通信功能的手持设备、计算设备或连接到无线调制解调器的其它处理设备。此外,该通信系统中的网络设备可以是全球移动通讯(global system of mobile communication,GSM)或码分多址(code division multiple access,CDMA)中的基站(base transceiver station,BTS),也可以是宽带码分多址(wideband code division multiple access,WCDMA)中的基站(NodeB,NB),还可以是长期演进(long term evolution,LTE)中的eNB或演进型基站(evolutional Node B,eNodeB),或者中继站或接入点,或者未来5G网络中的基站设备等。
为了便于理解,首先对本申请涉及的相关技术和概念作简单介绍。
根据极化码(polar codes)的编码原理可以知道,极化码的构造就是极化信道的选择问题。而极化信道的选择实际上是按照最优化SC译码性能为标准的。但是由于各个极化信道之间并不是相互独立的,而是具有依赖关系的,也即信道序号大的极化信道依赖于所有其它序号小的极化信道。基于极化信道之间的这个依赖关系,SC译码算法对各个比特进行译码判决(或者称为比特判决)时,是基于之前所有步骤的译码判决的结果都是正确的假设条件。并且,正是在这种译码算法下,极化码被证明了是信道容量可达的。
译码树：根据极化码在SC译码算法下各个比特判决之间的依赖关系，能够构造一棵译码树T=(ε,V)，其中，ε和V分别表示码树中的边和节点集合。参见图2，图2是译码树的结构示意图。如图2所示，在译码树上，一个节点的深度定义为译码树的根节点到该节点的最短路径长度。可以看到，对于一个码长等于N的极化码，译码树上的节点组成的集合能够按照深度d划分为N+1个子集，记作V_d，其中，d=0,1,…,N。容易理解，V_0仅包含根节点。除了译码树上的叶子节点(即d=N时)，译码树T上的每一个节点υ分别通过两条分别标记0和1的边与后继节点相连。某一个节点υ所对应的序列û_1^d(即û_1,û_2,…,û_d)定义为从根节点开始到达该节点υ所需要经过的各个边的标记序列。另外，在译码树中，从根节点到任意一个节点所形成的路径，都对应一个路径度量值(path metric，PM)。值得注意的是，译码树的结构仅与码长N有关。极化码的译码树实际上是一个满二叉树。因此，极化码的译码过程也就是在满二叉树上寻找合适的路径。如图2中所示，以码长N=4为例，在每个节点处选择PM值最小的路径向下扩展，最终确定出译码序列û_1^4。
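A minimal sketch, not taken from the patent, of how a decoding-tree node can carry the quantities described above: its depth, the 0/1 label of the edge from its parent, and the path metric of the root-to-node path; the node's bit sequence is recovered by walking back to the root. All names are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TreeNode:
    depth: int                      # distance from the root (0 for the root node)
    bit: Optional[int] = None       # 0/1 label of the edge from the parent, None for the root
    parent: Optional["TreeNode"] = None
    pm: float = 0.0                 # path metric of the path root -> this node

    def bit_sequence(self):
        """Return the edge labels on the path from the root to this node."""
        bits, node = [], self
        while node.parent is not None:
            bits.append(node.bit)
            node = node.parent
        return bits[::-1]

# Example: the path root -> 0 -> 1 corresponds to the bit prefix [0, 1].
root = TreeNode(depth=0)
child = TreeNode(depth=1, bit=0, parent=root, pm=0.4)
grandchild = TreeNode(depth=2, bit=1, parent=child, pm=1.1)
assert grandchild.bit_sequence() == [0, 1]
```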
而极化码在码长趋于无穷时,信道极化才会完全。在有限码长下,由于信道极化并不完全,依然会存在一些信息比特无法被正确译码。当前面的i-1个信息比特的译码发生错误之后,由于SC译码算法在对后面的信息比特译码时,需要用于之前的信息比特的估计值,这就会导致较为严重的错误传递。换句话说,SC译码是一种贪婪算法,在码树的每一层仅仅搜索到最优路径就进行下一层,所以无法对错误进行修改。
为此，针对SC译码算法的缺点，人们提出了SCL译码算法。在译码树的每一层增加允许保留的候选路径数量，从SC译码算法的每一层仅允许选择“最优的一条路径进行下一步扩展”改为“最大允许选择L条最好的路径进行下一步扩展”。并且，将每一层允许保留的候选路径数量称为搜索宽度L，L≥1且L为整数。与SC算法一样，SCL算法在进行译码时，依然从译码树的根节点开始，逐层依次向叶子节点层进行路径搜索。与SC不同的是，完成每一层的路径扩展后，选择PM最小的L条路径作为候选路径，保存在一个列表中，等待进行下一层的扩展。
经过上述的说明,可以知道SC译码算法是深度优先的,要求从根节点快速到达叶子节点。而SCL译码算法是广度优先的,先扩展、再剪枝,最终达到叶子节点。
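A hedged sketch of the breadth-first expand-then-prune step described above. The path-metric update shown is the widely used LLR-based approximation (add |LLR| when the decision disagrees with the sign of the LLR); the patent does not spell out its metric formula, so treat `pm_update` as illustrative rather than the patent's own rule.

```python
def pm_update(pm, llr, bit):
    """Common LLR-based penalty: no penalty if the bit agrees with the sign
    of the LLR, otherwise add |llr| (smaller PM = better path)."""
    agrees = (bit == 0 and llr >= 0) or (bit == 1 and llr < 0)
    return pm if agrees else pm + abs(llr)

def scl_expand_and_prune(paths, llrs, L, is_frozen):
    """One level of SCL: extend every surviving path by the next bit,
    then keep the L paths with the smallest metrics.
    `paths` is a list of (pm, bits); `llrs[i]` is the LLR seen by path i."""
    candidates = []
    for (pm, bits), llr in zip(paths, llrs):
        choices = [0] if is_frozen else [0, 1]   # frozen bits are fixed to the known value 0
        for b in choices:
            candidates.append((pm_update(pm, llr, b), bits + [b]))
    candidates.sort(key=lambda c: c[0])          # the sorting step whose latency the text discusses
    return candidates[:L]

# Example: two surviving paths, one information bit, keep the best L = 2.
print(scl_expand_and_prune([(0.0, [0]), (0.7, [1])], [1.3, -0.4], L=2, is_frozen=False))
```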
从SCL译码算法的译码过程容易想到,在中高信噪比的信道条件下,用较低的路径列表宽度就可以完成正确译码,因此不需要进行太多的路径扩展,可以降低计算复杂度。但是在低信噪比的信道条件下,SCL需要较大的路径列表宽度来保证译码性能。而较大的路径列表宽度依然会带来较大的计算复杂度。
因此,进一步地,现有技术中有人提出了一种自适应SCL(adaptive successive cancellation list,ADSCL)算法。
参见图3,图3是ADSCL译码算法的流程图。如图3所示,ADSCL算法辅以循环冗余校验(cyclic redundancy check,CRC)。首先设置一个最大路径列表宽度。开始译码时,先从路径列表宽度为1进行译码。若译码结果能通过CRC,说明正确译码,此时直接输出译码结果。否则将路径列表宽度加倍(即路径列表宽度变为2)重新译码,直至正确译码或路径列表宽度超过最大路径列表宽度,译码失败。图3中的L max表示预先设置的最大路径列表宽度,L为当前路径列表宽度。ADSCL的详细流程图可以描述为:
(1)初始化。
预先设置一个最大路径列表宽度L max,并将当前路径列表宽度设置为L。
(2)进行当前路径列表宽度为L的SCL译码。
(3)对译码结果进行CRC。
若通过CRC,则直接输出译码结果,结束译码。
若未通过CRC,则比较当前路径列表宽度和最大路径列表宽度。如果满足L≥L max,则结束译码。否则将当前路径列表宽度加倍,并返回步骤(2)。
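Steps (1)–(3) above can be summarised by the outer loop below; `scl_decode` and `crc_check` stand for an SCL decoder run at list width L and the CRC test. They are assumed interfaces for illustration, not functions defined in the patent.

```python
def adscl_decode(received_llrs, L_max, scl_decode, crc_check):
    """ADSCL outer loop: start at list width 1, double the width after each
    CRC failure, stop at success or once the width would exceed L_max."""
    L = 1
    while L <= L_max:
        candidates = scl_decode(received_llrs, L)   # re-runs SCL from scratch (the repeated work ADPSCL avoids)
        for bits in candidates:
            if crc_check(bits):
                return bits                          # decoding succeeded
        L *= 2
    return None                                      # decoding failure
```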
从ADSCL的译码过程可以看出，在中高信噪比的信道条件下，ADSCL译码算法很可能在一个极低的路径列表宽度下成功译码，从而可以极大降低SCL译码算法的计算复杂度。然而，在未通过CRC需要重新进行译码的情况下，仍要计算之前已经计算过的路径中间信息和/或数据，从而会出现大量的重复计算。尤其在低信噪比的信道条件下，在路径列表宽度为L max的情况下，最差情况下的计算复杂度甚至会超过SCL译码算法。
通过对SCL译码算法和ADSCL译码算法的介绍可以知道,尽管SCL译码算法辅以CRC可以极大程度的改善译码性能。但是由于需要同时进行路径列表宽度L条路径的计算,当K较大时,计算复杂度很高。此外,每次路径列表扩展后都需要从扩展后的路径中按照路径度量值进行排序来选择最为优选的L条候选路径(也称为幸存路径),这个排序过程会引起较大的时延,这在未来通信系统中(例如,5G)低时延甚至超低时延要求的应用场景下,例如,高可靠低时延(ultra reliable and low latency communication,URLLC),几乎是不能容忍的。另外,在现有的降低复杂度(相对于SCL译码算法)的ADSCL译码算法中,在信噪比较高的信道条件下,可以在当前路径列表宽度较低时就可以成功译码,极大减小计算复杂度和时延。但是,ADSCL译码算法需要等同于最大路径列表宽度的硬件资源,并且在信道条件恶劣的情况下,计算复杂度和时延甚至超过了传统的SCL译码算法。
由此可见,尽管近年来极化码的SCL译码算法在二进制加性高斯白噪声信道(binary additive white Gaussian noise,B-AWGN)下的研究虽然已经取得了长足的进展,但是大多数的研究是在降低原译码算法的计算复杂度的同时造成译码性能的损失。或者,只能在一定条件下优于传统算法,在实际使用时受到一定的限制。因此,如何设计一种能够保证性能损失在可以接受的范围内,同时又能显著降低计算复杂度的译码算法显得尤为重要。
为此，本申请提出一种极化码的译码算法，可以避免ADSCL算法中重新计算译码路径时的大量计算，降低了计算复杂度。
下面结合图4至图17,对本申请提出的极化码的译码算法进行详细介绍。
以下本申请提出的ADPSCL译码算法可以由译码端执行。例如,终端设备在与网络设备进行通信时,终端设备作为译码端需要对接收到的待译码序列进行译码。
图4是本申请实施例的极化码的译码算法的流程图。
310、获取待译码的第一比特序列。
320、在选取的第一候选译码路径未通过CRC的情况下，从第一数据结构和第二数据结构中读取计算第二候选译码路径所需的数据。
其中,第一数据结构中存储有对第一比特序列中的每个比特进行比特判决所需的中间数据。第二数据结构中存储有第一比特序列对应的译码树上的部分节点的位置信息、译码树上的根节点到该部分节点的路径度量值以及该部分节点的译码判决结果。
这里所说的第一候选译码路径可以是译码端在对第一比特序列进行译码过程中,计算的任意一条候选译码路径。在本申请实施例中,计算任意一条候选译码路径的过程可以参见下文介绍的ADPSCL译码算法的详细流程。
下面对第一数据结构和第二数据结构分别作详细说明。
(一)第一数据结构
第一数据结构可以是一个trellis图,下面结合图5介绍Trellis图的组成形式。
图5是Trellis图的结构示意图。参见图5，Trellis图的基本组成单元为Trellis节点(如图5中的节点1,2,…,9)。在对待译码的比特序列进行译码时，是根据当前译码的信息位u_i的对数似然比(log likelihood ratio，LLR)值来判断该信息位u_i为0或是为1。而u_i的LLR值的计算是一个递归过程。
图5以码长N=4为例，对LLR值的计算过程作简单说明。如图5所示，Trellis图中最左边的一列定义为信源层，信源层的节点称为信源侧节点。假定信息向量u_1^4=(u_1,u_2,u_3,u_4)中的第3位和第4位为冻结位，其余为信息位。在进行译码时，首先要判决u_1的估计值，那么需要知道节点1的LLR值。而节点1的LLR值，是根据节点5和节点7的LLR值计算得到的。而节点5的LLR值由节点9和节点10的LLR值计算得到，节点7的LLR值由节点11和节点12的LLR值计算得到。其中，节点9,10,11和12的LLR值由译码器接收的极化信道的输出y1,y2,y3和y4计算得到。由此可以知道，从节点9,10,11和12开始，从右向左进行Trellis节点的LLR值的迭代计算，可以计算出最左边节点1,2,3和4的LLR值。最后，根据节点1,2,3和4的LLR值，分别判决出节点1,2,3和4对应的位u_i(i=1,2,3,4)的判决结果。同时，在进行比特判决时，还需要知道当前判决的位u_i是否为信息位。如果u_i为信息位，则根据LLR值判决为0或1。如果是冻结位，则直接判决为冻结比特的已知取值(通常为0)。这样，就可以得到待译码的比特序列(对应本申请中的第一比特序列)中所有比特的判决结果。
在本申请实施例中，根据图5所示的Trellis图，可以得到信源侧节点(图5中第一列的节点)的译码判决结果，并将信源侧节点的译码判决结果返回保存在优先级队列中。信源侧节点的译码判决结果与优先级队列中对应节点记录的位置信息和路径度量值是对应的。
在上面进行迭代计算的过程中,根据i的取值的奇偶性,可以将LLR值的计算分为两种情况。当i为奇数时利用F函数进行计算,当i为偶数时利用G函数进行计算。F函数和G函数是SC译码算法中进行LLR值迭代计算时的公知概念,这里不作详述。
需要说明的是,在i为偶数的情况下,在完成比特判决之后,需要向下一层更新部分和数值。部分和数值即为G函数的运算结果。
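For reference, a common formulation of the F function (min-sum approximation), the G function and the partial-sum update mentioned above is sketched below; this is standard SC-decoding background material, not code taken from the patent.

```python
import math

def f_func(a, b):
    """F node (odd index): LLR of the upper branch, min-sum approximation."""
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def g_func(a, b, u):
    """G node (even index): LLR of the lower branch given the partial sum u of the earlier decision."""
    return b + (1 - 2 * u) * a

def partial_sums(u1, u2):
    """Partial-sum update propagated to the next stage after two bit decisions."""
    return (u1 ^ u2, u2)

# N = 2 illustration: channel LLRs (a, b); decode u1 with F, then u2 with G.
a, b = 1.2, -0.5
u1 = 0 if f_func(a, b) >= 0 else 1
u2 = 0 if g_func(a, b, u1) >= 0 else 1
print(u1, u2, partial_sums(u1, u2))
```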
由此,在本申请实施例中,Trellis图中的每个Trellis节点记录的中间数据包括所有扩展路径对应位置的中间值和硬判决值以及这些中间值和硬判决值对应所属的扩展路径。这里所说的中间值即是上面介绍的迭代过程中的LLR值。在图5中,非信源侧的每个节点记录有一个中间值信息组,每个中间值信息组中包括所有扩展路径的对应位置的中间译码对数似然比(即LLR值)和硬判决值以及这些中间译码对数似然比和硬判决值所属的扩展路径。当然,硬判决值也可以由LLR值推导得到,因此,在实际应用中,中间值信息组也可以只包括LLR值和这些LLR值对应的扩展路径,不再赘述。
需要指出的是,每个Trellis节点记录的中间值信息组的组数不确定,它是由ADPSCL算法的路径扩展情况而定的。如果假定路径列表宽度为L,那么每个Trellis节点最多记录L组中间值信息组,共需要O(L·N·log 2N)的空间复杂度。
通过将路径扩展过程中LLR值的迭代和部分和数值都记录在Trellis图中,不再需要像传统的SCL算法那样,在译码失败后重新计算译码路径时,再次计算这些LLR值和部分和数值,因此避免了重复计算。
由此可以理解的是,译码器在对第一比特序列进行译码的过程中,如果某一个候选译码路径未通过CRC,需要重新进行译码,那么重新译码时所需的LLR值和部分和数值已经记录在Trellis图中,可以直接读取,不再需要重新计算。
下面结合图6对第二数据结构进行说明。
(二)第二数据结构。
第二数据结构包括两个优先级队列,如果将待译码的第一比特序列的长度记作N,将路径列表宽度记作L,则每个优先级队列中最多记录(N·L)个节点的位置信息和路径度 量值。其中,每个节点的位置信息和路径度量值是一一映射的。
根据前文对译码树的介绍，可以知道，一个码长为N的比特序列对应的译码树上应该共有(2^1+2^2+…+2^N)个节点(不包括根节点)。而一个优先级队列中并不是存储了译码树上所有节点的位置信息和路径度量值。若当前的路径搜索宽度设置为L，则译码树的每一层只选择L个节点向下一层扩展。对于N长的比特序列，则每个优先级队列中最多存储(N·L)个节点的位置信息和路径度量值。一个优先级队列中路径度量值的存储过程可以参见下文的表1。
本文中所说的一个节点的路径度量值,是指从译码树的根节点到该节点的路径度量值。
参见图6，图6是优先级队列的结构示意图。在每个优先级队列中，路径度量值是升序排列的。也即，靠近队列前端的路径度量值小于靠近队列后端的路径度量值，或者说，从队列的首节点到队列末节点的方向，前一个节点的路径度量值小于后一个节点的路径度量值。
需要注意的是,一个节点的位置信息包括该节点在译码树上所处的层、该节点所处的层的扩展次序和该节点的父节点在译码树上的扩展次序的信息。
上文已经介绍过,译码树是一个满二叉树。而父节点和叶子节点的概念在计算机的二叉树这种数据结构中是公知的概念,本申请实施例中不作赘述。
继续参见图2,节点A位于译码树的第二层,节点A的父节点记作节点B。也可以说,节点A是节点B的一个后继节点。节点B的另一个后继节点记作节点C。和节点B处于同一层的还有一个节点D。假定在路径扩展过程中,第1层向第2层扩展时,优先扩展节点B,其次扩展节点D。而从第2层向第3层扩展时,优先扩展节点C,其次扩展节点A。那么,节点A的位置信息就包括了节点A处于译码树的第2层,节点A在第2层的扩展次序为2,节点A的父节点在第1层的扩展次序为1这些信息。
第二数据结构共需要O(N·L)的空间复杂度。
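A sketch of what one entry of the second data structure could look like under the description above: the node's layer, its expansion order within that layer, its parent's expansion order, the path metric used for ordering, and the bit decision stored alongside it. The field names are illustrative assumptions, not names used by the patent.

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class QueueEntry:
    pm: float                                   # path metric; queue entries are kept in ascending PM order
    layer: int = field(compare=False)           # layer of the node on the decoding tree
    order_in_layer: int = field(compare=False)  # expansion order of the node within its layer
    parent_order: int = field(compare=False)    # expansion order of the parent node
    bit_decision: int = field(compare=False, default=0)  # decoded bit stored with the entry

e1 = QueueEntry(0.9, layer=2, order_in_layer=1, parent_order=1)
e2 = QueueEntry(0.3, layer=2, order_in_layer=2, parent_order=1, bit_decision=1)
assert e2 < e1   # entries compare by path metric only, so e2 sits nearer the queue head
```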
在本申请提出的ADPSCL译码算法涉及第一数据结构和第二数据结构的交互,下文会结合实施例作详细说明。
通过对第一数据结构和第二数据结构的介绍,可以知道第一数据结构中存储有对待译码的比特序列中的每个比特进行比特判决所需的中间数据,第二数据结构中存储有译码树上的一部分节点的位置信息和度量值。由此,如果在对第一比特序列进行译码的过程中,选择的某一候选译码路径未通过CRC,需要重新计算译码路径,那么可以直接从第一数据结构和第二数据结构中读取所需的数据,避免现有技术中的重复计算导致的计算量大且耗时的问题。
330、根据从第一数据结构和第二数据结构中读取的数据,在译码树上计算第二候选译码路径。
根据从第一数据结构和第二数据结构中读取的数据,计算第二候选译码路径。
这里需要说明是,本申请实施例中,将进行CRC之前选择的路径都称作候选译码路径,只有在某一个候选译码路径通过CRC,表明译码成功之后,才将这个通过CRC的候选译码路径确定为译码路径。
340、在第二候选译码路径通过CRC的情况下,将第二候选译码路径在译码树上对应的比特估计序列作为第一比特序列的译码结果。
如果计算的第二候选译码路径通过CRC,则将第二候选译码路径在译码树上对应的比特估计序列作为第一比特序列的译码结果。
比特估计序列是指对待译码的第一比特序列中的每个比特的比特判决完成之后,输出的对第一比特序列的估计结果,容易理解,这个估计结果是一个比特序列,因此,称作比特估计序列。
以图2为例,假定计算的第二候选译码路径为根节点到节点F对应的路径,则根节点到节点F对应的比特估计序列为[0 0 1 1]。
可以理解的是,在步骤340中,如果计算的第二候选译码路径通过了CRC,则表明译码成功,此时将第二候选译码路径在译码树上对应的比特估计序列输出,即是待译码的第一比特序列的译码结果。那么,如果第二候选译码路径未通过CRC,表明在计算的第一候选译码路径失败的情况下,计算的第二候选译码路径仍然失败。此种情况下,译码端需要重新计算候选译码路径,计算的过程与计算第二候选译码路径的过程是相同的。
还应理解的是,本申请提出的ADPSCL译码算法是在现有的ADSCL算法的基础上,引入了第一数据结构和第二数据结构。因此,与上文图3中描述的ADSCL算法相同,在译码开始是,是以路径搜索宽度L=1开始计算。在译码失败的情况下,将路径搜索宽度加倍,重新计算译码路径。在L>1的情况下(例如,L=2或4),在译码树的每一层会同时保留L条路径作为幸存路径向下一层进行扩展(具体过程可以参考现有技术)。由此可以理解的是,如果上文的第一候选译码路径是L=1时选择的路径,则第一候选译码路径未通过CRC之后,会以L=2计算候选路径,则最终计算的候选路径会有2条。如果第一候选译码路径是L>1时选择的路径,则第一候选译码路径未通过CRC之后,以2L作为路径搜索宽度,最终计算的候选路径会有2L条。即这里所说的第二候选译码路径可以是多条。
350、输出译码结果。
输出步骤340中的比特估计序列[0 0 1 1],即是第一比特序列的译码结果。
本申请实施例的技术方案,通过采用第一数据结构和第二数据结构存储译码所需的中间数据,即使在一次译码失败的情况下重新计算译码路径,也不需要大量的重复计算,在各种信噪比的信道条件(例如,中高信噪比的信道条件或低信噪比的信道条件)下都可以在译码性能不受损失的情况下,降低了计算复杂度。
下面结合图7,对本申请的ADPSCL译码算法的整体流程进行说明。
参见图7,图7是本申请的ADPSCL译码算法的整体流程图。
401、对译码器进行初始化。
对译码器进行初始化包括预先设置最大路径搜索宽度L max,设置当前的路径搜索宽度L=1,激活两个优先级队列中的一个,并在被激活的优先级队列中存入一个空节点。
为了便于描述，以下将被激活的优先级队列称作第一优先级队列，将未被激活的优先级队列称作第二优先级队列。
402、判断L是否超过L max
若L>L max,则表明译码失败,结束译码。
若L<L max,或者L=L max,则执行步骤403。
403、以路径搜索宽度为L进行ADPSCL译码。
ADPSCL译码即是本申请提出的自适应优先级串行抵消列表(adaptive priority  successive cancellation list,ADPSCL)译码算法,下文会对ADPSCL译码算法作详细说明。
可以理解,通过步骤403,会确定一个候选译码路径。
404、判断候选译码路径是否通过CRC。
若候选译码路径通过CRC,表明在路径搜索宽度为L时译码成功,输出译码结果,结束译码。
若候选译码路径未通过CRC,则执行步骤405、406。
405、将路径搜索宽度加倍。
例如,第一轮译码时的L设置为1,则此时的路径搜索宽度L将设置为2。又例如,若译码失败的这一轮的路径搜索宽度为4,则此时的路径搜索宽度L将设置为8。
406、将两个优先级队列的激活状态互换。
两个优先级队列的激活状态互换,是指将被激活的第一优先级队列设置为未激活,而将未被激活的第二优先级队列激活。在激活状态互换的同时,还需要将第一优先级队列中未读取的节点的度量值信息按序全部插入第二优先级队列中。
参见图8,图8是交换两个优先级队列的激活状态的示意图。
应理解，第一优先级队列和第二优先级队列中的路径度量值都是升序排列的，在将第一优先级队列中的路径度量值按序插入第二优先级队列中后，第二优先级队列中存储的路径度量值依然是升序排列的。同时，第一优先级队列中剩余的已经被访问的节点的路径度量值也是升序排列的。
还应理解,步骤405和步骤406之间并没有先后顺序,这里仅是将“当前的路径搜索宽度加倍”和“两个优先级队列的激活状态互换”这两个过程分别编号进行说明,也可以在流程图中合并为一个步骤,这里并不限定。
从图7中可以看出,步骤406之后返回到步骤402,重新判断当前的路径搜索宽度L与最大路径搜索宽度L max的大小关系。后续的过程与上述步骤402-406是相同的,这里不再赘述。
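A sketch, under an assumed min-heap layout keyed by path metric, of the activation swap in step 406: the other queue becomes the active one and every unread entry of the previously active queue is merged into it so that ascending-PM order is preserved.

```python
import heapq

def swap_active_queues(unread_from_active, inactive):
    """Doubling step: the inactive queue becomes active, and every unread entry
    of the previously active queue is merged into it, keeping ascending path-metric
    order (min-heap keyed by PM)."""
    for entry in unread_from_active:
        heapq.heappush(inactive, entry)
    return inactive                    # this queue is now the active one

q1_unread = [(0.9, "node-b"), (0.2, "node-a")]   # unread entries left in the old active queue
q2 = [(0.5, "node-c")]                           # entries already stored in the other queue
heapq.heapify(q2)
active = swap_active_queues(q1_unread, q2)
print(heapq.heappop(active))                     # (0.2, 'node-a') is read first
```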
下面结合图9,对本申请的ADPSCL译码算法的详细流程进行说明。
参见图9,图9是本申请的ADPSCL译码算法的详细流程图。
501、译码开始。
502、将当前的路径搜索宽度L设置为1。
503、读取被激活的优先级队列的首节点。
为了避免混淆,以下将被激活的优先级队列称作第一优先级队列,将未被激活的优先级队列称作第二优先级队列,并将第一优先级队列的首节点称作第一节点。
504、判断第一节点是否为译码树的叶子节点。
如果第一节点是译码树的叶子节点,则执行步骤505。
应理解,如果读取的第一节点为译码树的叶子节点,表明待译码的比特序列的最后一个比特已经完成比特判决(或者说估计),整个候选译码路径已经选出了。此时,可以直接获取候选译码路径,并对其进行CRC即可。
505、获取候选译码路径。
506、判断获取的候选译码路径是否可以通过CRC。
如果候选译码路径通过CRC,表明译码成功,译码器执行步骤507-508。
507、输出译码结果。
508、结束译码。
在步骤506中,如果候选译码路径未通过CRC,表明译码失败。译码器执行步骤509以及后续步骤。
509、判断已访问的叶子节点的数目是否超过了当前的路径搜索宽度L。
如果已访问的叶子节点的数目超过了当前的路径搜索宽度,执行步骤510以及后续步骤。
510、将路径搜索宽度加倍,并交换优先级队列的激活状态。
关于交换第一优先级队列和第二优先级队列的激活状态的说明可以参见上文，这里不再赘述。
511、判断加倍后的路径搜索宽度L是否大于预先设置的最大路径搜索宽度L max
如果L>L max,表明译码失败,结束译码。
如果L=L max或L<L max,返回执行步骤503。
需要注意的是,由步骤511返回执行步骤503时,步骤503中所说的被激活的优先级队列是指第二优先级队列。因为在步骤510中,第一优先级队列和第二优先级队列的激活状态进行了互换。
因此,由步骤511返回执行步骤503时,读取的是第二优先级队列中的首节点(以下称作第二节点)。
以上从步骤510-511,对读取的第一优先级队列中的首节点为叶子节点的情况进行了说明。下面再来说明步骤504中,如果读取的第一节点不是译码树上的叶子节点的情况。
步骤504中,如果第一节点不是译码树上的叶子节点,则执行步骤512以及后续步骤。
512、根据第一优先级队列中存储的第一节点的位置信息,指导Trellis图中的LLR值的迭代计算以及部分和数值的更新。
当计算到下一个信源层的节点时进行路径扩展。若对应信息位,则扩展为2个节点。若对应冻结位,则扩展为1个节点(以下将扩展的节点称作扩展节点)。
在步骤512中,涉及到优先级队列和Trellis图的交互,下面结合图8对优先级队列和Trellis图之间的交互过程进行说明。
参见图10,图10是优先级队列和Trellis图的交互示意图。
在每一个译码循环中,总是首先读出被激活的优先级队列的首节点,之后,根据优先级队列中存储的首节点的位置信息,指导Trellis图中进行相应的中间LLR值的迭代计算以及部分和数值的更新。这个迭代过程会得到一个扩展节点(对应冻结位)或两个扩展节点(对应信息位)。得到扩展节点后,统计译码树上扩展节点所在层的已访问的节点的数量(以下记作Z),再根据扩展节点所在层的已访问的节点的数量与当前的路径搜索宽度L以及预先设置的最大路径搜索宽度L max之间的大小关系,确定将扩展节点存储在优先级队列中还是丢弃。
以下将这几种情况分别进行说明。
(1)若Z≤L,则将扩展节点按序存入被激活的优先级队列中。
被激活的优先级队列如图10中所示的优先级队列1。
应理解,上文已经介绍过,优先级队列中存储有译码树上部分节点的位置信息以及根 节点到该部分节点的路径度量值。因此,将扩展节点按序存入优先级队列1,是指按照扩展节点的路径度量值的大小,将扩展插入到优先级队列1中,同时还需要记录扩展节点的位置信息。
(2)若L<Z≤L max,则将扩展节点按序存入未被激活的优先级队列中。
未被激活的优先级队列如图10中所示的优先级队列2。
将扩展节点存入优先级队列2中的过程与上述情况(1)类似,不再详述。
(3)若Z>L max,则将扩展节点丢弃。
通过两个优先级队列和Trellis图两种数据结构交互进行译码，相对于ADSCL算法，保证了在译码性能无损失的条件下，极大降低了计算复杂度，且在高信噪比的信道条件下，时延和计算复杂度接近于传统SC算法。对于ADPSCL算法，其采用与SCL算法一样的度量值计算方式，当信噪比较高时，更容易读取正确路径对应的优先级节点并不断进行扩展，从而极大降低ADPSCL算法的计算复杂度。在信噪比足够高时，ADPSCL算法几乎按照正确的延伸方向进行译码，因此可以达到SC算法的时延和计算复杂度；而在低信噪比时，ADPSCL算法的计算复杂度也不会超过SCL算法，这与ADSCL算法不同，因为ADPSCL算法避免了重复计算。另外，ADPSCL译码算法可以保证与传统ADSCL译码算法性能完全一致。此外，ADPSCL的Trellis图和优先级队列共同作用的存储结构保证了在O(L·N·log2N)的空间复杂度下即能实现，因此ADPSCL算法具有可实现性。而Trellis图中的数据只需要进行读取和计算存储，并没有传统SCL或ADSCL算法的路径复制等操作，因此，ADPSCL算法的维护开销也很小。
下面结合图11,对本申请实施例中优先级队列中的存储过程作示例说明。
参见图11,图11是一个优先级队列中存储过程的示意图。
图11中的圆圈表示节点,圆圈中的数字表示这个节点对应的路径度量值。存储的过程可以如下表1所示。
表1
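The ordered-insertion process that Table 1 and Figure 11 illustrate can be sketched as follows: each newly stored node is inserted at the position that keeps the queue sorted by ascending path metric, so the head of the queue always holds the smallest PM. The metric values below are made up for illustration only.

```python
import bisect

queue = []                           # ascending path-metric order; the front is the head node
for pm in [3.0, 1.5, 4.2, 0.8]:      # illustrative PM values of successively stored nodes
    bisect.insort(queue, pm)         # insert at the position that preserves ascending order
    print(queue)                     # queue state after each insertion
# Final state: [0.8, 1.5, 3.0, 4.2]; reading always starts from the head (0.8).
```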
513、判断扩展节点所在层的已访问节点的数量是否超过当前的路径搜索宽度。
这里,扩展节点所在层是指扩展节点在译码树上所在的层。
以下将扩展节点所在层的已访问节点的数量记作Z。
若Z未超过当前的路径搜索宽度L(即Z≤L),译码器执行步骤514。
若Z超过了当前的路径搜索宽度L(即Z>L),译码器执行步骤515。
514、将扩展节点按照路径度量值的大小插入第一优先级队列中,返回执行步骤503。
515、判断扩展节点所在层的已访问节点数是否超过预先设置的最大路径搜索宽度。
若Z未超过预先设置的最大路径搜索宽度L max(即Z≤L max),则执行步骤516。
若Z超过预先设置的最大路径搜索宽度L max(即Z>L max),则执行步骤517。
516、将扩展节点按照度量值的大小插入第二优先级队列中,返回执行步骤503。
517、丢弃扩展节点,返回执行步骤503。
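Steps 513–517 amount to a small routing decision for each expansion node. The sketch below follows the text's notation and is illustrative only, with `z` the number of visited nodes on the expansion node's layer, `L` the current path search width and `L_max` the preset maximum.

```python
def route_expansion_node(z, L, L_max):
    """Decide where an expansion node is stored, as in steps 513-517 above."""
    if z <= L:
        return "active queue"      # step 514: insert into the activated priority queue, in PM order
    if z <= L_max:
        return "inactive queue"    # step 516: insert into the non-activated priority queue
    return "discard"               # step 517: the expansion node is dropped

assert route_expansion_node(2, L=4, L_max=32) == "active queue"
assert route_expansion_node(6, L=4, L_max=32) == "inactive queue"
assert route_expansion_node(40, L=4, L_max=32) == "discard"
```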
在上面的过程中，可以理解的是，每从激活的优先级队列中读取一个节点，根据Trellis图中记录的中间数据，可以得到该节点的比特判决结果。不断地从激活的优先级队列中读取首节点，并进行比特判决，直到整条候选译码路径选择出来以后，待译码的第一比特序列中每个比特的比特判决结果就全部获知了。
本申请的技术方案,针对传统的SCL算法的计算复杂度高、计算量大、时延大,ADSCL算法重复计算多、在低信噪比的信道条件时计算复杂度甚至超过传统SCL算法的问题,提出一种自适应优先级串行抵消列表(adaptive priority successive cancellation list,ADPSCL)算法。在传统的基于LLR的搜索宽度为L的SCL算法基础上进行深度优先搜索,避免了不可能路径的继续延伸,省去了不必要的计算量,并结合ADSCL算法,从一个极低的搜索宽度开始译码,直到译码成功或超过一个预设的最大路径搜索宽度为止。同时,采用两个优先级队列和Trellis图两种数据结构分别存储译码过程的中间数据,即使译码失败需要重新计算译码路径,也不需要大量的重复计算,降低了计算复杂度。
以上对本申请实施例的极化码的译码方法作了详细说明。
下面给出本申请的译码方法(称作ADPSCL译码算法)与传统SCL译码算法的译码性能对比。
图12是码长N=256的ADPSCL算法与现有算法的复杂度对比图。
图13是码长N=512的ADPSCL算法与现有算法的复杂度对比图。
图14是码长N=1024的ADPSCL算法与现有算法的复杂度对比图。
图15是码长N=2048的ADPSCL算法与现有算法的复杂度对比图。
以上12-15分别对比了码长为256,512,1024和2048时,ADPSCL、PSCL算法、传统ADSCL、SCL算法的计算复杂度。在以上各图中,横坐标为不同信噪比条件(单位为分贝,即dB),纵坐标通过乘加操作数量反映算法的平均复杂度。从图12-15中可以看出,信噪比越高,ADPSCL译码算法的复杂度越低,优势越明显。低信噪比时由于避免了重复计算,ADPSCL算法计算复杂度趋近于SCL算法。而传统ADSCL算法由于需要重复计算,在低信噪比条件下复杂度甚至高于传统SCL算法,而高信噪比时ADPSCL算法的计算复杂度趋近于SC算法。此外,在任意信噪比下,ADPSCL算法的计算复杂度都不高于传统的ADSCL算法。
从以上各图中看出,相同搜索宽度,码长越短,复杂度降低程度越大。对于中短码长的极化码,使用常用的路径搜索宽度配置下,相比传统SCL算法,ADPSCL算法的计算复杂度至少能降低20%。且大部分译码器工作条件下,ADPSCL算法的计算复杂度可以降低50%以上。在高信噪比下,ADPSCL算法的计算复杂度降低能够达到90%以上。相比传统ADSCL算法,ADPSCL算法在各信噪比下的计算复杂度都更低。在低信噪比下尤其 明显。总体来说,ADPSCL算法可以显著降低计算复杂度,并且译码性能不会损失,因此ADPSCL算法是一种在传统SCL和ADSCL算法基础上的降低复杂度的高效改进算法。
图16是码率为0.5,码长N=256,路径搜索宽度L=32,采用长度为8的CRC校验时的ADPSCL算法和传统SCL算法的译码性能对比图。
图17是码率为0.5，码长N=512，路径搜索宽度L=32，采用长度为8的CRC校验时的ADPSCL算法和传统SCL算法的译码性能对比图。
需要说明的是,图16和图17是相同复杂度下的译码性能的比较。图16和图17中,从左到右的圆圈分别表示该圆圈对应的信噪比下,SCL算法在路径列表宽度大小分别为16,8,4和2时误码率。可以看出,在复杂度相同的情况下,ADPSCL算法比SCL算法有显著的性能优势,并且这种优势随着信噪比的升高和码长的增长还会加大。
值得注意的是,PSCL算法与SCL算法,在PSCL的搜索宽度与SCL的list大小相同的情况下,性能完全一致,没有任何损失。同样,ADPSCL的最大路径列表搜索宽度与ADSCL的最大路径列表搜索宽度相同的情况下,性能完全一致,没有任何损失。而ADPSCL的复杂度总是低于ADSCL的复杂度。由于ADPSCL算法和ADSCL算法的计算复杂度都是随着信噪比的升高而减小(即是浮动的),且ADPSCL算法与ADSCL算法在相同信噪比下译码性能总是相同,复杂度总是低于ADSCL算法,复杂度没有交叉点。因此,本文的图16和图17通过ADPSCL与传统SCL算法在同一杂度下的性能对比来说明ADPSCL算法的性能优势。
下面对本申请实施例的译码装置进行说明。
图18是本申请实施例的译码装置600的示意性框图。译码装置600主要包括第一通信单元610、处理单元620和第二通信单元630。
第一通信单元610,用于获取待译码的第一比特序列;
处理单元620,用于在选取的第一候选译码路径未通过循环冗余校验CRC的情况下,从第一数据结构和第二数据结构中读取计算第二候选译码路径所需的数据,其中,第一数据结构中存储有对第一比特序列中的每个比特进行比特判决所需的中间数据,第二数据结构中存储有第一比特序列对应的译码树上的部分节点的位置信息、译码树上的根节点到该部分节点中每个节点的路径度量值以及该部分节点的译码判决结果,译码树为一个满二叉树;根据从第一数据结构和第二数据结构中读取的数据,在译码树上计算第二候选译码路径;在第二候选译码路径通过CRC的情况下,将第二候选译码路径在译码树上对应的比特估计序列作为第一比特序列的译码结果;
第二通信单元630,用于输出该译码结果。
以上第一通信单元610和第二通信单元630可以不同,也可以是同一个通信单元。
本申请实施例的译码装置600中的各单元和上述其它操作或功能分别为了实现本申请实施例的极化码的译码算法的相应流程。为了简洁,此处不再赘述。
在一个可能的设计中，译码装置600的上述功能可以部分或全部通过软件实现。当全部通过软件实现时，译码装置600可以包括存储器和处理器。其中，存储器用于存储计算机程序，处理器从存储器中读取并运行该计算机程序，以实现本申请极化码的译码方法。
在一个可能的设计中,译码装置600的部分或全部通过软件实现时,译码装置600包括处理器。用于存储计算机程序的存储器位于译码装置600之外,处理器通过电路/电线 与存储器连接,用于读取并执行所述存储器中存储的计算机程序。
在一个可能的设计中,译码装置600的上述功能的部分或全部通过硬件实现时,译码装置600包括:输入接口电路,用于获取待译码的第一比特序列;逻辑电路,用于执行上述实施例中的译码方法;输出接口电路,用于输出译码结果。
可选的,所述译码装置可以是芯片或者集成电路。
可选地,译码装置600可以是一个译码器或芯片。
图19为本申请实施例的译码器700的示意性结构图。如图19所示,译码器700包括:一个或多个处理器701,一个或多个存储器702和一个或多个通信接口703。通信接口703用于获取待译码的第一比特序列,存储器702用于存储计算机程序,处理器701用于从存储器702中调用并运行该计算机程序,使得译码器700执行本申请实施例的译码方法,完成对第一比特序列的译码。进一步地,通信接口703还用于输出第一比特序列的译码结果。其中,接收待译码的第一比特序列的通信接口可以与输出译码结果的通信接口不同。
图18中所示的译码装置600可以通过图19中所示的译码器700实现。例如,第一通信单元610和第二通信单元630可以由图19中的通信接口703实现,处理单元620可以由处理器701实现等。
可选地,存储器和处理器可以集成在一起,也可以物理上相互单独的单元。
此外,本申请提供一种计算机可读存储介质,该计算机可读存储介质中存储有计算机指令,当该计算机指令在计算机上运行时,使得计算机执行本申请实施例的译码方法中的相应流程。
本申请还提供一种计算机程序产品,该计算机程序产品包括计算机程序代码,当该计算机程序代码在计算机上运行时,使得计算机执行本申请实施例的译码方法中的相应流程。
本申请还提供一种芯片(或者，芯片系统)，包括存储器和处理器，存储器用于存储计算机程序，处理器用于从存储器中调用并运行该计算机程序，使得安装有该芯片的通信设备执行本申请实施例的译码方法中的相应流程。
本申请还提供一种通信设备,包括上述译码器700。
以上实施例中,处理器可以为中央处理器(central processing unit,CPU)、通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现成可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件、微处理器或一个或多个用于控制本申请方案程序执行的集成电路等。例如,处理器可以包括数字信号处理器设备、微处理器设备、模数转换器、数模转换器等。处理器可以根据这些设备各自的功能而在这些设备之间分配移动设备的控制和信号处理的功能。此外,处理器可以包括操作一个或多个软件程序的功能,软件程序可以存储在存储器中。处理器的所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的单元。
存储器可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备。也可以是电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、只读光盘(compact disc read-only  memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (25)

  1. 一种极化码的译码方法,其特征在于,包括:
    获取待译码的第一比特序列;
    在选取的第一候选译码路径未通过循环冗余校验CRC的情况下,从第一数据结构和第二数据结构中读取计算第二候选译码路径所需的数据,其中,所述第一数据结构中存储有对所述第一比特序列中的每个比特进行比特判决所需的中间数据,所述第二数据结构中存储有所述第一比特序列对应的译码树上的部分节点的位置信息、所述译码树上的根节点到所述部分节点中每个节点的路径度量值以及所述部分节点的译码判决结果,所述译码树为一个满二叉树;
    根据从所述第一数据结构和所述第二数据结构中读取的数据,在所述译码树上计算所述第二候选译码路径;
    在所述第二候选译码路径通过所述CRC的情况下,将所述第二候选译码路径在所述译码树上对应的比特估计序列作为所述第一比特序列的译码结果;
    输出所述译码结果。
  2. 根据权利要求1所述的方法,其特征在于,所述第二数据结构包括两个优先级队列,所述部分节点的位置信息和所述路径度量值信息存储在所述两个优先级队列中,所述部分节点的路径度量值在每个优选级队列中升序排列,其中,靠近队列前端的路径度量值小于靠近队列后端的路径度量值,所述至少一个节点的位置信息与所述至少一个路径度量值之间具有一一映射关系。
  3. 根据权利要求2所述的方法,其特征在于,优先级队列中任意一个节点的位置信息包括所述节点在所述译码树上所处的层、所述节点在所处的层的扩展次序和所述节点的父节点在所述译码树上的扩展次序。
  4. 根据权利要求2或3所述的方法,其特征在于,在对所述第一候选译码路径进行CRC之前,所述方法还包括:
    根据所述第一数据结构和第二数据结构中存储的数据,以预先设置的第一路径搜索宽度在所述译码树上计算所述第一候选译码路径;
    以及,在所述第一候选译码路径未通过所述CRC的情况下,所述根据从所述第一数据结构和所述第二数据结构中读取的数据,在所述译码树上计算所述第二候选译码路径,包括:
    根据所述第一数据结构和第二数据结构中读取的数据,以第二路径搜索宽度在所述译码树上计算所述第二候选译码路径,其中,所述第二路径搜索宽度是所述第一路径搜索宽度的两倍,且所述第二路径搜索宽度小于或等于预先设置的最大路径搜索宽度。
  5. 根据权利要求4所述的方法,其特征在于,所述根据所述第一数据结构和第二数据结构中存储的数据,以预先设置的第一路径搜索宽度在所述译码树上搜索第一候选译码路径,包括:
    激活所述两个优先级队列中的第一优先级队列,并从所述第一优先级队列中读取第一节点,所述第一节点为所述第一优先级队列中的首节点;
    根据所述第一节点的位置信息,确定所述第一节点是否为所述译码树上的叶子节点;
    在所述第一节点为所述译码树上的叶子节点的情况下,输出所述译码树的根节点到所述第一节点之间的比特估计序列,作为所述第一候选译码序列;
    以及,所述方法还包括:
    在所述第一候选译码路径通过所述CRC的情况下,将所述第一候选译码路径在译码树上对应的比特估计序列作为所述第一比特序列的译码结果。
  6. 根据权利要求4或5所述的方法,其特征在于,在所述第一候选译码路径未通过所述CRC的情况下,根据所述第一数据结构和第二数据结构中存储的数据,以第二路径搜索宽度在所述译码树上搜索第二候选译码路径之前,所述方法还包括:
    确定已访问的叶子节点是否大于第一搜索宽度;
    在已访问的叶子节点大于所述第一搜索宽度且未超过所述预先设置的最大路径搜索宽度的情况下,交换所述第一优先级队列和第二优先级队列的激活状态,其中,交换所述第一优先级队列和所述第二优选级队列的激活状态包括将所述第二优先级队列激活,将所述第一优先级队列置为非激活,并将所述第一优先级队列中未读取的全部节点按照路径度量值插入激活的所述第二优先级队列中;
    以及,所述以第二路径搜索宽度在所述译码树上计算所述第二候选译码路径,包括:
    从激活的所述第二优先级队列中读取第二节点,并根据所述第一数据结构和第二数据结构中存储的数据,以第二路径搜索宽度在所述译码树上计算所述第二候选译码路径,其中,所述第二节点为所述第二优先级队列中的首节点。
  7. 根据权利要求6所述的方法,其特征在于,在已访问的叶子节点小于或等于所述第一路径搜索宽度的情况下,所述以第二路径搜索宽度在所述译码树上计算所述第二候选译码路径,包括:
    继续从所述第一优先级队列中读取节点,并根据所述第一数据结构和第二数据结构中存储的数据,以所述第二路径搜索宽度在所述译码树上计算所述第二候选译码路径。
  8. 根据权利要求5-7中任一项所述的方法,其特征在于,在所述第一节点非所述译码树上的叶子节点的情况下,所述方法还包括:
    根据所述第一节点的位置信息,确定所述第一节点在所述译码树上所处的层;
    根据所述第一节点在所述译码树上所处的层的已访问节点、所述第一搜索宽度和所述最大搜索宽度之间的大小关系,确定所述第一节点的扩展节点在所述第二数据结构中的存储位置。
  9. 根据权利要求8所述的方法,其特征在于,所述确定所述第一节点的扩展节点在所述第二数据结构中的存储位置,包括:
    若所述第一节点在所述译码树上所处的层的已访问节点数小于或等于所述第一搜索宽度,则根据所述扩展节点的路径度量值将所述扩展节点插入所述第一优选级队列中;
    若所述第一节点在所述译码树上所处的层的已访问节点数大于所述第一搜索宽度,且小于或等于所述最大搜索宽度,则根据所述扩展节点的路径度量值将所述扩展节点插入所述第二优先级队列中;
    若所述第一节点在所述码树上所处的层的已访问节点大于所述最大搜索宽度,则不存储所述扩展节点。
  10. 根据权利要求4-8中任一项所述的方法,其特征在于,所述方法还包括:
    从所述第一数据结构中获取对所述第一比特序列中的每个比特进行比特判决所需的中间数据;
    根据对所述第一比特序列中的每个比特进行比特判决所需的中间数据，以及承载所述第一比特序列中每个比特的子信道为信息位或冻结位，确定所述第一比特序列中每个比特的译码判决结果；
    将所述第一比特序列中每个比特的译码判决结果保存在所述第一优先级队列或第二优先级队列中,其中,所述每个比特的译码判决结果与所述每个比特在所述第一优先级队列或第二优先级队列中对应的节点的位置信息和路径度量值对应。
  11. 根据权利要求1-10中任一项所述的方法,其特征在于,所述第一数据结构中存储的所述中间数据包括所有扩展路径中每个节点的译码中间对数似然比和硬判决值,以及所述每个节点的译码中间对数似然比和硬判决值所属的扩展路径。
  12. 一种译码装置,其特征在于,包括:
    第一通信单元,用于接收待译码的第一比特序列;
    处理单元,用于在选取的第一候选译码路径未通过循环冗余校验CRC的情况下,从第一数据结构和第二数据结构中读取计算第二候选译码路径所需的数据,其中,所述第一数据结构中存储有对所述第一比特序列中的每个比特进行比特判决所需的中间数据,所述第二数据结构中存储有所述第一比特序列对应的译码树上的部分节点的位置信息、所述译码树上的根节点到所述部分节点中每个节点的路径度量值以及所述部分节点的译码判决结果,所述译码树为一个满二叉树;
    所述处理单元,还用于从所述第一数据结构和所述第二数据结构中读取的数据,在所述译码树上计算所述第二候选译码路径;
    所述处理单元，还用于在所述第二候选译码路径通过所述CRC的情况下，将所述第二候选译码路径在所述译码树上对应的比特估计序列作为所述第一比特序列的译码结果；
    第二通信单元,用于输出所述处理单元确定的所述译码结果。
  13. 根据权利要求12所述的译码装置,其特征在于,所述第二数据结构包括两个优先级队列,所述部分节点的位置信息和所述路径度量值信息存储在所述两个优先级队列中,所述部分节点的路径度量值在每个优选级队列中升序排列,其中,靠近队列前端的路径度量值小于靠近队列后端的路径度量值,所述至少一个节点的位置信息与所述至少一个路径度量值之间具有一一映射关系。
  14. 根据权利要求13所述的译码装置,其特征在于,优先级队列中任意一个节点的位置信息包括所述节点在所述译码树上所处的层、所述节点在所处的层的扩展次序和所述节点的父节点在所述译码树上的扩展次序。
  15. 根据权利要求13或14所述的译码装置,其特征在于,所述处理单元在对所述第一译码路径进行CRC之前,所述处理单元还用于:
    根据所述第一数据结构和第二数据结构中存储的数据,以预先设置的第一路径搜索宽度在所述译码树上计算所述第一候选译码路径;
    以及,所述处理单元具体用于根据所述第一数据结构和第二数据结构中读取的数据,以第二路径搜索宽度在所述译码树上计算所述第二候选译码路径,其中,所述第二路径搜 索宽度是所述第一路径搜索宽度的两倍,且所述第二路径搜索宽度小于或等于预先设置的最大路径搜索宽度。
  16. 根据权利要求15所述的译码装置,其特征在于,所述处理单元具体用于:
    激活所述两个优先级队列中的第一优先级队列,并从所述第一优先级队列中读取第一节点,所述第一节点为所述第一优先级队列中的首节点;
    根据所述第一节点的位置信息,确定所述第一节点是否为所述译码树上的叶子节点;
    在所述第一节点为所述译码树上的叶子节点的情况下,输出所述译码树的根节点到所述第一节点之间的比特估计序列,作为所述第一候选译码序列;
    以及,所述处理单元还用于:
    在所述第一候选译码路径通过所述CRC的情况下,将所述第一候选译码路径在译码树上对应的比特估计序列作为所述第一比特序列的译码结果。
  17. 根据权利要求15或16所述的译码装置,其特征在于,所述处理单元还用于:
    根据所述第一数据结构和第二数据结构中存储的数据,以第二路径搜索宽度在所述译码树上搜索第二候选译码路径之前,确定已访问的叶子节点是否大于第一搜索宽度;
    在已访问的叶子节点大于所述第一搜索宽度且未超过所述预先设置的最大路径搜索宽度的情况下,交换所述第一优先级队列和第二优先级队列的激活状态,其中,交换所述第一优先级队列和所述第二优选级队列的激活状态包括将所述第二优先级队列激活,将所述第一优先级队列置为非激活,并将所述第一优先级队列中未读取的全部节点按照路径度量值插入激活的所述第二优先级队列中;
    以及,所述处理单元具体用于:
    从激活的所述第二优先级队列中读取第二节点,并根据所述第一数据结构和第二数据结构中存储的数据,以第二路径搜索宽度在所述译码树上计算所述第二候选译码路径,其中,所述第二节点为所述第二优先级队列中的首节点。
  18. 根据权利要求17所述的译码装置,其特征在于,所述处理单元具体用于:
    在已访问的叶子节点小于或等于所述第一路径搜索宽度的情况下,继续从所述第一优先级队列中读取节点,并根据所述第一数据结构和第二数据结构中存储的数据,以所述第二路径搜索宽度在所述译码树上计算所述第二候选译码路径。
  19. 根据权利要求16-18中任一项所述的译码装置,其特征在于,所述处理单元还用于:
    在所述第一节点非所述译码树上的叶子节点的情况下,根据所述第一节点的位置信息,确定所述第一节点在所述译码树上所处的层;
    根据所述第一节点在所述译码树上所处的层的已访问节点、所述第一搜索宽度和所述最大搜索宽度之间的大小关系,确定所述第一节点的扩展节点在所述第二数据结构中的存储位置。
  20. 根据权利要求19所述的译码装置,其特征在于,所述处理单元具体用于:
    若所述第一节点在所述译码树上所处的层的已访问节点数小于或等于所述第一搜索宽度,则根据所述扩展节点的路径度量值将所述扩展节点插入所述第一优选级队列中;
    若所述第一节点在所述译码树上所处的层的已访问节点数大于所述第一搜索宽度,且小于或等于所述最大搜索宽度,则根据所述扩展节点的路径度量值将所述扩展节点插入所 述第二优先级队列中;
    若所述第一节点在所述码树上所处的层的已访问节点大于所述最大搜索宽度,则不存储所述扩展节点。
  21. 根据权利要求15-19中任一项所述的译码装置,其特征在于,所述处理单元具体用于:
    从所述第一数据结构中获取对所述第一比特序列中的每个比特进行比特判决所需的中间数据;
    根据对所述第一比特序列中的每个比特进行比特判决所需的中间数据，以及承载所述第一比特序列中每个比特的子信道为信息位或冻结位，确定所述第一比特序列中每个比特的译码判决结果；
    所述译码装置还包括:
    存储单元,用于将所述第一比特序列中每个比特的译码判决结果保存在所述第一优先级队列或第二优先级队列中,其中,所述每个比特的译码判决结果与所述每个比特在所述第一优先级队列或第二优先级队列中对应的节点的位置信息和路径度量值对应。
  22. 根据权利要求12-21中任一项所述的译码装置,其特征在于,所述第一数据结构中存储的所述中间数据包括所有扩展路径中每个节点的译码中间对数似然比和硬判决值,以及所述每个节点的译码中间对数似然比和硬判决值所属的扩展路径。
  23. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有计算机指令,当所述计算机指令在计算机上运行时,使得计算机执行如权利要求1-11中任一项所述的方法。
  24. 一种计算机程序产品,其特征在于,所述计算机程序产品包括计算机程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行如权利要求1-11中任一项所述的方法。
  25. 一种芯片,其特征在于,包括存储器和处理器,所述存储器用于存储计算机程序,所述处理器用于从所述存储器中调用并运行所述计算机程序,使得安装有所述芯片的通信设备执行如权利要求1-11中任一项所述的方法。
PCT/CN2019/082856 2018-04-17 2019-04-16 极化码的译码方法和装置 WO2019201233A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810344057.1 2018-04-17
CN201810344057.1A CN110391817B (zh) 2018-04-17 2018-04-17 极化码的译码方法和装置

Publications (1)

Publication Number Publication Date
WO2019201233A1 true WO2019201233A1 (zh) 2019-10-24

Family

ID=68239078

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082856 WO2019201233A1 (zh) 2018-04-17 2019-04-16 极化码的译码方法和装置

Country Status (2)

Country Link
CN (1) CN110391817B (zh)
WO (1) WO2019201233A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111200481B (zh) * 2019-12-18 2020-12-29 清华大学 Polar码译码过程中提高计算单元通用性的方法
CN111181573B (zh) * 2020-03-09 2023-08-18 北京华力创通科技股份有限公司 数据译码方法、装置及电子设备
CN113630126B (zh) * 2020-05-07 2023-11-14 大唐移动通信设备有限公司 一种极化码译码处理方法、装置及设备
CN113131950B (zh) * 2021-04-23 2024-02-13 南京大学 一种极化码的自适应连续消除优先译码方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140365842A1 (en) * 2012-01-20 2014-12-11 Huawei Technologies Co., Ltd. Decoding method and decoding device for polar code cascaded with cyclic redundancy check
CN106506009A (zh) * 2016-10-31 2017-03-15 中国石油大学(华东) 一种极化码的译码方法
CN106877884A (zh) * 2017-02-01 2017-06-20 东南大学 一种减少译码路径分裂的极化码译码方法
US20170353193A1 (en) * 2016-06-01 2017-12-07 Samsung Electronics Co., Ltd. Apparatus and method for encoding with cyclic redundancy check and polar code
CN104143991B (zh) * 2013-05-06 2018-02-06 华为技术有限公司 极性Polar码的译码方法和装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105978577B (zh) * 2016-05-03 2019-11-01 西安电子科技大学 一种基于比特翻转的串行列表译码方法
CN107819545B (zh) * 2016-09-12 2020-02-14 华为技术有限公司 极化码的重传方法及装置
CN106849960B (zh) * 2017-01-19 2019-11-12 东南大学 基于极化码的分段crc校验堆栈译码方法及架构

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140365842A1 (en) * 2012-01-20 2014-12-11 Huawei Technologies Co., Ltd. Decoding method and decoding device for polar code cascaded with cyclic redundancy check
CN104143991B (zh) * 2013-05-06 2018-02-06 华为技术有限公司 极性Polar码的译码方法和装置
US20170353193A1 (en) * 2016-06-01 2017-12-07 Samsung Electronics Co., Ltd. Apparatus and method for encoding with cyclic redundancy check and polar code
CN106506009A (zh) * 2016-10-31 2017-03-15 中国石油大学(华东) 一种极化码的译码方法
CN106877884A (zh) * 2017-02-01 2017-06-20 东南大学 一种减少译码路径分裂的极化码译码方法

Also Published As

Publication number Publication date
CN110391817A (zh) 2019-10-29
CN110391817B (zh) 2021-02-09

Similar Documents

Publication Publication Date Title
WO2019201233A1 (zh) 极化码的译码方法和装置
JP4038518B2 (ja) 低密度パリティ検査コードを効率的に復号する方法及び装置
US10425107B2 (en) Partial sum computation for polar code decoding
WO2014173133A1 (zh) 极性码的译码方法和译码装置
US8433004B2 (en) Low-latency viterbi survivor memory architecture and method using register exchange, trace-back, and trace-forward
KR20080098391A (ko) 양방향 슬라이딩 윈도우 아키텍처를 갖는 map 디코더
WO2018171401A1 (zh) 一种信息处理方法、装置及设备
US8589758B2 (en) Method and system for cyclic redundancy check
CN110635808B (zh) 极化码译码方法和译码装置
CN110730007B (zh) 极化码sscl译码路径分裂方法、存储介质和处理器
TWI748739B (zh) 決定待翻轉比特位置的方法及極化碼解碼器
CN111224676B (zh) 一种自适应串行抵消列表极化码译码方法及系统
RU2739582C1 (ru) Устройство и способ кодирования
US20050071726A1 (en) Arrangement and method for iterative decoding
CN110324111B (zh) 一种译码方法及设备
WO2018064924A1 (zh) 基于软输出维特比译码算法sova的译码方法和装置
CN112187409B (zh) 译码方法和装置、终端、芯片及存储介质
KR102158312B1 (ko) Sc-파노 복호 장치 및 이를 이용한 sc-파노 복호 방법
EP2362549B1 (en) Low-latency viterbi survivor memory architecture and method using register exchange, trace-back, and trace-forward
CA2730991C (en) Method and system for cyclic redundancy check
CN102291198A (zh) 信道译码方法和装置
CN106533453B (zh) 一种译码方法及译码器
CN112703687B (zh) 信道编码方法及装置
Song et al. Efficient adaptive successive cancellation list decoders for polar codes
US9866240B2 (en) Map algorithm-based turbo decoding method and apparatus, and computer storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19788135

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19788135

Country of ref document: EP

Kind code of ref document: A1