CN110798228A - Polar code flip decoding method and system based on deep learning - Google Patents

Polar code flip decoding method and system based on deep learning

Info

Publication number
CN110798228A
CN110798228A (application CN201911042171.XA)
Authority
CN
China
Prior art keywords
decoding
unit
neural network
deep learning
bit
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN201911042171.XA
Other languages
Chinese (zh)
Inventor
李丽
宋文清
傅玉祥
何书专
Current Assignee (listed assignees may be inaccurate)
Nanjing Ningqi Intelligent Computing Chip Research Institute Co Ltd
Original Assignee
Nanjing Ningqi Intelligent Computing Chip Research Institute Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Nanjing Ningqi Intelligent Computing Chip Research Institute Co Ltd
Priority to CN201911042171.XA
Publication of CN110798228A

Classifications

    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00: Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03: Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05: Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13: Linear codes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00: Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03: Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05: Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11: Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102: Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1105: Decoding
    • H03M13/1111: Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms
    • H03M13/1125: Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms using different domains for check node and bit node processing, wherein the different domains include probabilities, likelihood ratios, likelihood differences, log-likelihood ratios or log-likelihood difference pairs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Error Detection And Correction (AREA)

Abstract

The invention discloses a deep-learning-based polar code flip decoding method and system, belonging to the field of wireless communication. First, a neural network unit is constructed and trained; the soft information output by a decoding unit is then input into the neural network unit to obtain likely decoding-error positions. These positions are fed back to the decoding unit, which flips the decoding result at the error positions and then restarts decoding until a decoding result is obtained. The system comprises a neural network unit and a decoding unit connected to it; the decoding unit comprises a belief propagation unit, a check unit, and a flipping unit. The invention overcomes the high complexity and unsatisfactory performance of traditional flip decoding algorithms, reduces decoding complexity, and meets the requirements of different channel environments and communication-system configurations.

Description

Polar code flip decoding method and system based on deep learning
Technical Field
The invention relates to the technical field of wireless communication, and in particular to a deep-learning-based polar code flip decoding method and system.
Background
Channel coding has been under development for more than 70 years. Polar codes have attracted increasing attention as the first codes proven to achieve the capacity of the symmetric binary-input discrete memoryless channel (B-DMC). In the 5G standardization process, polar codes were selected as the forward error correction (FEC) code for the enhanced mobile broadband (eMBB) control channel owing to their strong performance. To meet the low-latency and high-throughput requirements of 5G, it is urgent and necessary to develop an efficient polar decoder with high parallelism and low complexity.
Currently, the most widely used decoding algorithms are the successive cancellation (SC) and belief propagation (BP) algorithms. Although SC decoding has advantages in complexity, it is limited by its bit-by-bit decoding strategy and therefore cannot achieve satisfactory finite-length error-correction performance. By making more attempts during decoding, the successive cancellation list (SCL) and successive cancellation stack (SCS) algorithms were proposed as improvements of SC decoding. Simulation results show that both SC-based improvements can approach the performance of a maximum likelihood (ML) decoder.
BP decoders are better suited to low-latency, high-throughput applications than SC decoders because they process the log-likelihood ratios (LLRs) of the received codeword in parallel. However, as with conventional SC decoding, the performance of BP decoding needs improvement. Inspired by the SC bit-flipping (SCF) algorithm, a critical-set-based BP bit-flipping (BPF) algorithm was proposed to identify possibly erroneous bits and flip them. The existing BP approach has the following problem: existing BPF work identifies unreliable bits effectively, but its decoding latency is longer than that of a conventional BP decoder, so an optimal flipping strategy that accurately identifies unreliable bits with fewer BP flip attempts remains an open question.
In recent years, deep learning (DL) has brought performance improvements in many fields thanks to its strong ability to solve complex tasks, and it has also been applied to polar decoding. Conventional deep-learning-based decoders achieve near-optimal bit error rate (BER) performance by learning from large numbers of codewords and fitting the decoding algorithm directly with a neural network. However, because training complexity grows exponentially, such decoders are only suitable for short codewords. High training complexity and the difficulty of fitting long codes are thus the two main obstacles to applying deep learning as a polar-decoding performance-improvement scheme.
The Chinese patent application CN201811423884.6, published June 4, 2019, discloses a bit-flipping-based polar code belief propagation decoding method in the technical field of channel coding for wireless communication. The ω-order critical set proposed there is obtained by transforming the existing critical-set concept and addresses the exponential growth in the number of tentative decodings in existing CS-based methods. When the result of a conventional BP decoder fails the CRC, that method constructs CS-ω and flips information bits in CS-ω (bit flipping there means setting the a-priori log-likelihood ratio of the flipped bit to infinity), so that errors of the conventional BP decoder can be corrected and the block-error-rate performance of the BP decoder improved. However, that critical set is based on the rate-1 nodes of the code tree and exploits only the properties of the polar code, not the output information of the BP decoder, so its performance is unsatisfactory, particularly at low signal-to-noise ratio.
Based on the above analysis, existing polar code decoding methods are insufficient for practical applications.
Disclosure of Invention
1. Technical problem to be solved
Aiming at the high computational complexity of existing BPF decoding methods, the invention provides a deep-learning-based polar code flip decoding method and system that improve decoding performance, reduce decoding complexity, and better meet the application requirements of communication systems.
2. Technical scheme
The purpose of the invention is realized by the following technical scheme.
The invention relates to a deep-learning-based polar code flip decoding method: first a neural network unit is constructed and trained; the soft information output by a decoding unit is then input into the trained neural network unit to obtain likely decoding-error positions; these positions are fed back to the decoding unit, which flips the decoding result at those positions and restarts decoding according to the flipped-bit information until a decoding result is obtained.
Further, the method comprises the following steps:
step one, constructing a neural network unit,
constructing a neural network unit and training the neural network unit, wherein the neural network unit comprises an input layer, a hidden layer and an output layer;
step two, the decoding unit is initialized,
the two kinds of log-likelihood-ratio information required in the decoding process, \(L_{i,j}^{t}\) and \(R_{i,j}^{t}\), are initialized, wherein i denotes the ith stage of the decoding unit, j denotes the jth node, i ∈ (1, …, n+1), j ∈ (1, …, N), n = log2 N, N represents the code length, and t represents the iteration count of the decoding unit;
step three, decoding,
the decoding unit performs the decoding operation; the log-likelihood-ratio information is updated continuously during iteration until the maximum number of iterations is reached and decoding ends; the decoded soft information is then passed to a check unit in the decoding unit for checking, and if it fails the check, the neural network unit is activated to predict the likely-erroneous bit positions;
step four, prediction,
the decoded soft information LLRi is input into the neural network unit to obtain the likely-erroneous bit positions Zi, which are then input into the decoding unit; wherein i ∈ (1, …, Q), Q being the maximum size of the set of erroneous bit positions;
step five, flip decoding,
according to the likely-erroneous bit positions Zi, a flipping unit in the decoding unit sets the R value corresponding to Zi to positive or negative infinity in turn and decoding continues until the check passes; if all bits corresponding to Z have been flipped without passing the check, steps four and five are repeated until the maximum number of flipped bits w is reached.
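The overall loop of steps two to five can be sketched as follows. This is a minimal Python sketch in which `bp_decode`, `crc_pass`, and `predict_error_positions` are placeholder callables standing in for the BP unit, the check unit, and the trained neural network unit; the names and the commit-one-flip-then-re-predict fallback are assumptions, not taken from the patent.

```python
def flip_decode(llr_channel, predict_error_positions, bp_decode, crc_pass, w_max):
    """Steps two to five: BP decode, check, predict likely errors, flip, retry."""
    flipped = []                                 # bit positions whose R value is pinned
    soft = bp_decode(llr_channel, flipped)       # step three: plain BP pass
    while not crc_pass(soft) and len(flipped) < w_max:
        candidates = predict_error_positions(soft)        # step four: NN prediction
        for z in candidates:                              # step five: try each flip
            trial = bp_decode(llr_channel, flipped + [z])
            if crc_pass(trial):
                return trial, flipped + [z]
        flipped.append(candidates[0])            # commit the most likely flip, re-predict
        soft = bp_decode(llr_channel, flipped)
    return soft, flipped
```

A toy run with stub units shows the loop terminating as soon as the check passes.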
Furthermore, in step two, the initialization formula of the decoding unit is as follows:

\[
R_{1,j}^{1} =
\begin{cases}
0, & j \in \mathcal{A} \\
+\infty, & j \notin \mathcal{A}
\end{cases}
\qquad
L_{n+1,j}^{1} = \ln\frac{\Pr(y_j \mid x_j = 0)}{\Pr(y_j \mid x_j = 1)}
\]

The remaining soft information is set to 0.
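Under the usual polar-BP conventions, this initialization can be sketched as follows; the array layout (stages as rows) and the function name are assumptions for illustration.

```python
import numpy as np

def init_bp(llr_channel, info_set):
    """Initialize the L and R message arrays for polar BP decoding."""
    N = len(llr_channel)
    n = int(np.log2(N))
    L = np.zeros((n + 1, N))            # right-to-left messages L[i, j]
    R = np.zeros((n + 1, N))            # left-to-right messages R[i, j]
    L[n] = llr_channel                  # rightmost stage: received channel LLRs
    frozen = np.setdiff1d(np.arange(N), info_set)
    R[0, frozen] = np.inf               # frozen bits are known, so R is +infinity
    return L, R                         # all remaining soft information stays 0
```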
Further, the update formulas for the log-likelihood-ratio information \(L_{i,j}^{t}\) and \(R_{i,j}^{t}\) in step three are as follows:

\[
\begin{aligned}
L_{i,j}^{t} &= g\big(L_{i+1,j}^{t},\; L_{i+1,j'}^{t} + R_{i,j'}^{t}\big) \\
L_{i,j'}^{t} &= g\big(R_{i,j}^{t},\; L_{i+1,j}^{t}\big) + L_{i+1,j'}^{t} \\
R_{i+1,j}^{t} &= g\big(R_{i,j}^{t},\; L_{i+1,j'}^{t} + R_{i,j'}^{t}\big) \\
R_{i+1,j'}^{t} &= g\big(R_{i,j}^{t},\; L_{i+1,j}^{t}\big) + R_{i,j'}^{t}
\end{aligned}
\]

where j' denotes the partner node of j in the stage-i butterfly and g performs the box-plus (⊞) operation:

\[ g(x, y) = \ln\frac{1 + e^{x+y}}{e^{x} + e^{y}} \]
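The box-plus operation g can be sketched numerically as follows. The min-sum variant is a common hardware approximation added here for context; the patent only defines the exact form.

```python
import numpy as np

def box_plus(x, y):
    """Exact box-plus: g(x, y) = ln((1 + e^(x+y)) / (e^x + e^y)), computed stably."""
    return np.logaddexp(0.0, x + y) - np.logaddexp(x, y)

def box_plus_min_sum(x, y):
    """Min-sum approximation: sign(x) * sign(y) * min(|x|, |y|)."""
    return np.sign(x) * np.sign(y) * np.minimum(np.abs(x), np.abs(y))
```

The `logaddexp` form avoids overflow for large LLR magnitudes, which matters once R values are pinned to infinity during flipping.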
Further, the likely-erroneous bit positions Zi in step four are calculated according to the following formula:

\[ z_i = \sigma\Big(\sum_{j} W_{ij} x_j + b_j\Big) \]

where \(W_{ij}\) are the weights, \(b_j\) the biases, and \(x_j\) the input values of each layer.
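A minimal sketch of this prediction step, assuming a single hidden layer with the sizes given in the text (input K, hidden 2N, output K); the weight matrices here are illustrative placeholders, not trained values.

```python
import numpy as np

def predict_flip_probs(llr, W1, b1, W2, b2):
    """One forward pass z = sigmoid(W x + b); returns bits sorted by error probability."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    h = sigmoid(W1 @ llr + b1)          # hidden layer (2N nodes)
    z = sigmoid(W2 @ h + b2)            # output layer: per-bit error probability
    return np.argsort(z)[::-1], z       # most suspicious bit positions first
```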
Furthermore, the number of input-layer nodes of the neural network unit equals the information bit length K and the input vector is the decoded soft information LLRi; the number of hidden-layer nodes is 2 × N, where N is the code length, and the number of output-layer nodes is K.
Further, the neural network is trained using the TensorFlow platform, with a maximum number of iterations Tepoch of 100.
A deep-learning-based polar code flip decoding system using the above deep-learning-based polar code flip decoding method, the system comprising a neural network unit and a decoding unit, the neural network unit being connected to the decoding unit; the neural network unit is used to calculate the likely-erroneous bit positions, and the decoding unit is used to decode the input path information and perform the bit-flipping operation.
Furthermore, the decoding unit comprises a belief propagation unit, a check unit, and a flipping unit; the belief propagation unit is connected to the check unit and the flipping unit. Data from the neural network unit is sent to the flipping unit in the decoding unit, processed by the flipping unit, and passed to the belief propagation unit; the belief propagation unit sends data to the check unit for checking, and data is returned to the neural network unit through the belief propagation unit.
Further, the neural network element includes an input layer, a hidden layer, and an output layer.
The invention realizes a deep-learning-based polar code flip decoding method: a neural network unit is first constructed and trained; the soft information output by a decoding unit is input into the neural network unit to obtain likely decoding-error positions; these are fed back to the decoding unit, which flips the decoding result at the error positions and restarts decoding until a decoding result is obtained. The invention overcomes the high complexity and unsatisfactory performance of traditional flip decoding algorithms, reduces decoding complexity, and meets the requirements of different channel environments and communication-system configurations.
3. Advantageous effects
Compared with the prior art, the invention has the advantages that:
(1) the invention realizes a deep-learning-based polar code flip decoding method that obtains the likely-erroneous bit positions through a neural network and flips the erroneous decoding positions, so the method is applicable to a wide range of signal-to-noise-ratio scenarios and has strong applicability;
(2) in the deep-learning-based polar code flip decoding method, error positions are predicted by the neural network from the output of the BP decoder; because the neural network is trained to high accuracy, the set of bits to be flipped is smaller than in a traditional BP flipping algorithm, which reduces the number of flips per decoding and thus the time and space complexity of the decoder;
(3) the invention realizes a deep-learning-based polar code flip decoding system that, by configuring a neural network unit and a decoding unit, combines deep learning with the traditional polar belief propagation decoder and executes a flipping strategy by predicting erroneous decoding bits, improving decoding performance while early termination of erroneous paths reduces decoding complexity.
In conclusion, the invention can effectively improve the communication performance and the data processing capability of the system and reduce the decoding complexity, and has good practical application value.
Drawings
FIG. 1 is a schematic diagram of a decoding system according to the present invention;
FIG. 2 is a block diagram of a decoding system according to an embodiment of the present invention;
FIG. 3 is a diagram comparing the performance of a decoding system according to an embodiment of the present invention with conventional SC and SCL decoders.
Detailed Description
The invention is described in detail below with reference to the drawings and specific examples.
Example 1
As shown in fig. 2, a deep-learning-based polar code flip decoding system includes a decoding unit and a neural network unit connected to each other. The neural network unit predicts the error-position indices in the decoding process, and the decoding unit decodes the path information to be decoded and performs the bit-flipping operation when a decoding error occurs; specifically, the neural network unit calculates the error-position indices from the received decoded soft information.
The neural network unit of the system comprises an input layer, a hidden layer, and an output layer. The number of input-layer nodes equals the information bit length K, and the input vector is the soft information output by the decoding unit, i.e., the left-end log-likelihood ratios LLRi; the number of hidden-layer nodes is 2 × K, and the number of output-layer nodes is K-2. As shown in fig. 2, the decoding unit includes a belief propagation unit, a check unit, and a flipping unit; the belief propagation unit is connected to the check unit and the flipping unit. Data from the neural network unit is sent to the flipping unit in the decoding unit, processed by the flipping unit, and passed to the belief propagation unit; the belief propagation unit sends data to the check unit for checking, and data is returned to the neural network unit through the belief propagation unit.
In the decoding unit, the belief propagation unit reads the information to be decoded, propagates the paths to be decoded in parallel, and computes the corresponding log-likelihood ratios, then passes the computed soft information to the check unit. The check unit converts the soft information into a decoded codeword by hard decision and performs a CRC check; if the check fails, the soft information is passed to the neural network unit. The flipping unit receives the error-position index set output by the neural network unit, performs the bit-flipping operation, and passes the flipped R information to the belief propagation unit for continued decoding. By executing a flipping strategy based on predicted erroneous decoding bits, the deep-learning-based polar code flip decoding system improves decoding performance, while early termination of erroneous paths reduces decoding complexity.
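The check unit's hard decision and CRC check can be sketched as follows. The 4-bit CRC polynomial used here is purely illustrative; the patent does not specify which CRC is used.

```python
import numpy as np

def hard_decision(llr):
    """Map soft LLRs to a decoded codeword: LLR >= 0 -> bit 0, LLR < 0 -> bit 1."""
    return (np.asarray(llr) < 0).astype(int)

def crc_check(bits, poly=0b1011):
    """Shift-register CRC remainder check over GF(2); True if the remainder is 0."""
    reg = 0
    width = poly.bit_length() - 1
    for b in bits:
        reg = (reg << 1) | int(b)
        if reg >> width:                # reduce whenever the register overflows
            reg ^= poly
    return reg == 0
```

`crc_check` expects the message with its CRC bits already appended, which is the situation after hard decision on a CRC-aided polar codeword.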
In the deep-learning-based polar code flip decoding method, the neural network is constructed and trained, and the soft information output by the decoder is then input into the trained neural network to obtain likely decoding-error positions; the trained network is applicable to a wide range of signal-to-noise-ratio scenarios, which improves the applicability of the method. The system feeds the likely decoding-error positions back to the decoder, which flips the decoding result at those positions and restarts decoding according to the flipped-bit information. Compared with the traditional BP flipping algorithm, the smaller set of bits to be flipped reduces the number of flips per decoding and thus the time and space complexity of the decoder. During decoding, if the check passes, decoding stops and the result is output; otherwise decoding continues until the maximum number of flips is reached.
It should be noted that the received information is yi of length N, the number of information bits is K, and the set of information bit indices is A. The two types of log-likelihood-ratio information propagated during decoding are \(L_{i,j}^{t}\) and \(R_{i,j}^{t}\), denoting right-to-left and left-to-right information respectively, where i denotes the ith stage of the decoder, j the jth node, i ∈ (1, …, n+1), j ∈ (1, …, N), n = log2 N, and t denotes the iteration count of the decoder; the soft information output by the decoder is LLRi.
The decoding method comprises the following specific steps:
step one, constructing a neural network,
a neural network is constructed and trained; the neural network unit comprises an input layer, a hidden layer, and an output layer. The number of input-layer nodes equals the information bit length K, the input vector is the soft information LLRi output by the decoder, the number of hidden-layer nodes is 2 × K, and the number of output-layer nodes is K-2. In this embodiment, the TensorFlow platform is used to train the neural network: the training set consists of 240000 groups of codewords generated under different signal-to-noise ratios, with a batch size of 120, a learning rate of 0.001, and a maximum number of iterations Tepoch of 100.
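A sketch of how one (channel LLR, label) sample of such a training set could be generated. The SNR-to-noise mapping, the label (here the transmitted information bits, from which error-position labels would be derived by comparison with the BP output, not shown), and all names are assumptions for illustration.

```python
import numpy as np

def polar_encode(u):
    """Polar transform x = u * F^(log2 N) over GF(2) via the butterfly recursion."""
    x = np.array(u, dtype=int)
    step = 1
    while step < x.size:
        for i in range(0, x.size, 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

def make_training_pair(N, info_set, snr_db, rng):
    """One training sample: channel LLRs plus the transmitted information bits."""
    u = np.zeros(N, dtype=int)
    u[info_set] = rng.integers(0, 2, size=len(info_set))
    x = polar_encode(u)
    sigma = 10.0 ** (-snr_db / 20.0)                    # assumed SNR-to-noise mapping
    y = (1 - 2 * x) + sigma * rng.standard_normal(N)    # BPSK over AWGN
    llr = 2.0 * y / sigma ** 2                          # channel log-likelihood ratios
    return llr, u[info_set]
```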
Step two, the decoding unit is initialized,
the two kinds of log-likelihood-ratio information required in the decoding process, \(L_{i,j}^{t}\) and \(R_{i,j}^{t}\), are initialized according to the information bits and the received information, where i denotes the ith stage of the decoding unit, j the jth node, i ∈ (1, …, n+1), j ∈ (1, …, N), n = log2 N, N represents the code length, and t represents the iteration count of the decoding unit.
The specific initialization is:

\[
R_{1,j}^{1} =
\begin{cases}
0, & j \in \mathcal{A} \\
+\infty, & j \notin \mathcal{A}
\end{cases}
\qquad
L_{n+1,j}^{1} = \ln\frac{\Pr(y_j \mid x_j = 0)}{\Pr(y_j \mid x_j = 1)}
\]

The remaining soft information is set to 0.
Step three, decoding the code,
the decoding unit performs the decoding operation, and the log-likelihood-ratio information is updated continuously during iteration until the maximum number of iterations Tmax is reached and decoding ends. The decoded soft information LLRi is then passed to the check unit for CRC checking; if it fails the CRC check, the neural network unit is activated to predict the likely-erroneous bit positions.
The log-likelihood-ratio information \(L_{i,j}^{t}\) and \(R_{i,j}^{t}\) is updated as follows:

\[
\begin{aligned}
L_{i,j}^{t} &= g\big(L_{i+1,j}^{t},\; L_{i+1,j'}^{t} + R_{i,j'}^{t}\big) \\
L_{i,j'}^{t} &= g\big(R_{i,j}^{t},\; L_{i+1,j}^{t}\big) + L_{i+1,j'}^{t} \\
R_{i+1,j}^{t} &= g\big(R_{i,j}^{t},\; L_{i+1,j'}^{t} + R_{i,j'}^{t}\big) \\
R_{i+1,j'}^{t} &= g\big(R_{i,j}^{t},\; L_{i+1,j}^{t}\big) + R_{i,j'}^{t}
\end{aligned}
\]

where j' denotes the partner node of j in the stage-i butterfly and g performs the box-plus (⊞) operation:

\[ g(x, y) = \ln\frac{1 + e^{x+y}}{e^{x} + e^{y}} \]
step four, prediction,
the decoded soft information LLRi is input into the neural network unit to obtain the likely-erroneous bit positions Zi, which are then input into the decoding unit, where i ∈ (1, …, Q) and Q is the maximum size of the set of erroneous bit positions. The likely-erroneous bit position indices are calculated according to:

\[ z_i = \sigma(a), \qquad a = \sum_{j} W_{ij} x_j + b_j, \qquad \sigma(a) = \frac{1}{1 + e^{-a}} \]

where \(W_{ij}\) are the weights, \(b_j\) the biases, \(x_j\) the input values of each layer, and σ is the sigmoid activation function.
Step five, turning over and decoding,
according to the likely-erroneous bit positions Zi, a flipping unit in the decoder sets the R value corresponding to Zi to positive or negative infinity in turn and decoding continues until the check passes; if all bits corresponding to Z have been flipped without passing the check, steps four and five are repeated until the maximum number of flipped bits w is reached. Compared with the traditional belief propagation flipping algorithm, the smaller set of bits to be flipped reduces the number of flips per decoding and thus the time and space complexity of the decoder, while the good approximation capability of the neural network preserves decoding performance.
In the deep-learning-based polar code flip decoding method, error-position information for different code lengths and code rates can be predicted by modifying the number of layers, the number of input and output nodes, and the input vector of the neural network during training before decoding, so different channel environments and communication-system configuration requirements can be met, giving the method rich flexibility and configurability. The invention can effectively improve the communication performance and data-processing capability of the system while reducing decoding complexity.
With reference to the system block diagram in fig. 2, fig. 1 shows a decoding system with code length N = 8 and K = 4 information bits; the black nodes at the left end are the information bits, i.e., the nodes accessed by the neural network unit. The input vector of the neural network unit in this embodiment is the received log-likelihood ratio of length 4, and the output is the per-bit error probability of length 4, which is passed to the decoding unit. Assume the error set has size Q and w bits are flipped in total. In the decoding unit, when Q = 2 (at most two flips per prediction) and w = 2 (two bits flipped in total), neural network prediction is performed twice; after each prediction, flipping is performed according to the indices in the error set, i.e., four decoding attempts in total, and each time the neural network unit outputs the two bits with the highest error probability to be flipped.
Fig. 3 compares different BP-based decoding algorithms for code length N = 128 and K = 19 information bits; the abscissa is the signal-to-noise ratio Eb/N0 from 1.0 to 4.0 dB, and the ordinate is the frame error rate (FER). For comparison at the same computational complexity, a BPF flipping 1 bit 6 times (w = 1, Q = 6) and a BPF flipping 2 bits 3 times (w = 2, Q = 3) are selected. The decoding system of the invention outperforms the conventional BP decoder at all signal-to-noise ratios, and at high SNR its performance approaches that of an SCL decoder with list length 8, where the list length is the number of paths extended per layer by a conventional SCL decoder. In addition, compared with the traditional BPF decoding algorithm, the decoding system of the invention achieves better performance with fewer flips and has a clear advantage in complexity reduction.
In summary, the deep-learning-based polar code flip decoding method and system combine deep learning with the conventional polar BP decoder and execute a flipping strategy by predicting erroneous decoding bits, improving decoding performance while early termination of erroneous paths reduces decoding complexity. The embodiment demonstrates advantages in both complexity and performance, and its high hardware adaptability shows great potential for practical applications.
The invention and its embodiments have been described above schematically, without limitation, and the invention can be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The representation in the drawings is only one of the embodiments of the invention, the actual construction is not limited thereto, and any reference signs in the claims shall not limit the claims concerned. Therefore, if a person skilled in the art receives the teachings of the present invention, without inventive design, a similar structure and an embodiment to the above technical solution should be covered by the protection scope of the present patent. Furthermore, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. Several of the elements recited in the product claims may also be implemented by one element in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (10)

1. A deep-learning-based polar code flip decoding method, characterized in that a neural network unit is constructed and trained, and soft information output by a decoding unit is then input into the trained neural network unit to obtain likely decoding-error positions; the likely decoding-error positions are fed back to the decoding unit, which flips the decoding result at those positions and restarts decoding according to the flipped-bit information until a decoding result is obtained.
2. The deep learning-based polar code flip decoding method according to claim 1, comprising the following steps:
step one, constructing a neural network unit
Construct a neural network unit comprising an input layer, a hidden layer and an output layer, and train it;
step two, initialization of the decoding unit
The two kinds of log-likelihood ratio information required in the decoding process, $L_{i,j}^{t}$ and $R_{i,j}^{t}$, are initialized, wherein $i$ denotes the $i$-th stage of the decoding unit, $j$ denotes the $j$-th node, $i \in \{1, \dots, n+1\}$, $j \in \{1, \dots, N\}$, $n = \log_2 N$, $N$ denotes the code length, and $t$ denotes the iteration index of the decoding unit;
step three, decoding
The decoding unit executes the decoding operation, continuously updating the log-likelihood ratio information during iteration until the maximum number of iterations is reached; decoding then ends, and the decoded soft information is passed to the check unit within the decoding unit for checking. If the soft information fails the check, the neural network unit is activated to predict the bit positions that are likely to be erroneous;
step four, prediction
The decoded soft information $LLR_i$ is input into the neural network unit to obtain the likely erroneous bit positions $Z_i$, which are then input into the decoding unit, wherein $i \in \{1, \dots, Q\}$ and $Q$ is the maximum number of candidate erroneous bit positions;
step five, flip decoding
According to the likely erroneous bit positions $Z_i$, a flipping unit within the decoding unit sequentially flips the R value corresponding to each $Z_i$ to positive or negative infinity, and decoding continues until the check passes; if flipping the bit corresponding to $Z_i$ does not pass the check, steps four and five are repeated until the maximum number of flipped bits $w$ is reached.
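Steps three to five form a predict-flip-retry loop. A minimal Python sketch of that loop, where `bp_decode` (decoding unit), `crc_check` (check unit) and `nn_predict` (neural network unit) are hypothetical stand-ins, not the patent's implementation:

```python
import numpy as np

def flip_decode(llr_in, bp_decode, crc_check, nn_predict, q_max, w_max):
    """Predict-flip-retry loop of steps three to five (all names are assumptions)."""
    bits, soft = bp_decode(llr_in, flips={})        # step three: plain BP pass
    if crc_check(bits):
        return bits                                 # passed without any flip
    suspects = nn_predict(soft)[:q_max]             # step four: Q candidate positions
    flips = {}
    for pos in suspects[:w_max]:                    # step five: flip at most w bits
        # flipping pins the R value of the suspect bit to +/- infinity
        flips[pos] = np.inf if soft[pos] < 0 else -np.inf
        bits, soft = bp_decode(llr_in, flips=flips) # restart decoding with the flips
        if crc_check(bits):
            break                                   # check passed after flipping
    return bits
```

The early exit on a passing check is what the description calls the early interruption of erroneous paths.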
3. The deep learning-based polar code flip decoding method according to claim 2, wherein in step two the decoding unit is initialized as follows:
$$R_{1,j}^{1} = \begin{cases} 0, & j \in \mathcal{A} \\ +\infty, & j \in \mathcal{A}^{c} \end{cases} \qquad L_{n+1,j}^{1} = \frac{2 y_j}{\sigma^{2}}$$
where $\mathcal{A}$ is the information set, $\mathcal{A}^{c}$ is the frozen set, $y_j$ is the $j$-th channel output and $\sigma^{2}$ is the noise variance; the remaining soft information is set to 0.
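A numpy sketch of this claim-3 initialization, assuming the common BP-for-polar convention (frozen-set R values pinned to a large constant standing in for infinity, channel LLRs $2y/\sigma^2$ for an AWGN channel); the names and the large-constant choice are assumptions:

```python
import numpy as np

def init_llr(y, frozen_mask, sigma2, big=1e30):
    """Initialize the L and R message arrays for polar BP decoding."""
    N = len(y)
    n = int(np.log2(N))
    L = np.zeros((n + 1, N))      # left-propagating messages L_{i,j}
    R = np.zeros((n + 1, N))      # right-propagating messages R_{i,j}
    # frozen positions are known to be zero: pin R at the leftmost stage
    R[0, frozen_mask] = big
    # channel observations enter at the rightmost stage as LLRs 2*y/sigma^2
    L[n, :] = 2.0 * np.asarray(y) / sigma2
    return L, R
```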
4. The deep learning-based polar code flip decoding method according to claim 2, wherein the update formulas for the log-likelihood ratio information $L_{i,j}^{t}$ and $R_{i,j}^{t}$ in step three are as follows:
$$\begin{aligned} L_{i,j}^{t} &= g\left(L_{i+1,2j-1}^{t},\; L_{i+1,2j}^{t} + R_{i,j+N/2}^{t}\right) \\ L_{i,j+N/2}^{t} &= g\left(R_{i,j}^{t},\; L_{i+1,2j-1}^{t}\right) + L_{i+1,2j}^{t} \\ R_{i+1,2j-1}^{t} &= g\left(R_{i,j}^{t},\; L_{i+1,2j}^{t} + R_{i,j+N/2}^{t}\right) \\ R_{i+1,2j}^{t} &= g\left(R_{i,j}^{t},\; L_{i+1,2j-1}^{t}\right) + R_{i,j+N/2}^{t} \end{aligned}$$
where $g$ is the box-plus operation:
$$g(x, y) = \ln\frac{1 + e^{x+y}}{e^{x} + e^{y}} \approx \operatorname{sign}(x)\,\operatorname{sign}(y)\,\min(|x|, |y|)$$
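The box-plus operation of claim 4 has a standard exact log-domain form, $g(x,y) = \ln\frac{1+e^{x+y}}{e^x+e^y}$, and a min-sum approximation commonly used in BP hardware. A small sketch of both (the formulas are standard; their exact role in this patent's decoder is inferred from the claim's "box-plus" wording):

```python
import numpy as np

def g_exact(x, y):
    # g(x, y) = ln((1 + e^(x+y)) / (e^x + e^y))
    return np.log1p(np.exp(x + y)) - np.log(np.exp(x) + np.exp(y))

def g_minsum(x, y):
    # hardware-friendly approximation: sign(x) * sign(y) * min(|x|, |y|)
    return np.sign(x) * np.sign(y) * np.minimum(np.abs(x), np.abs(y))
```

Note that `g_exact(0, y)` is always 0, reflecting that a message with no information contributes nothing.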
5. The deep learning-based polar code flip decoding method according to claim 2, wherein the likely erroneous bit positions $Z_i$ in step four are calculated according to the following formula:
$$Z_i = f\left(\sum_{j} W_{ij} x_j + b_j\right)$$
where $W_{ij}$ is a weight, $b_j$ is a bias, $x_j$ is the corresponding input value of each layer, and $f$ is the activation function.
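The per-layer computation in claim 5 is a standard dense layer. A minimal numpy sketch, in which the sigmoid activation and the `argsort`-based selection of the Q most suspicious positions are illustrative assumptions, not details from the patent:

```python
import numpy as np

def layer_forward(W, b, x):
    # Z = f(W x + b), with f = sigmoid so each output reads as an error score
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

def predict_flip_positions(layers, llr, q):
    """Run the decoded soft information through the layers and return the q
    bit positions with the largest output scores (most likely erroneous)."""
    x = np.asarray(llr, dtype=float)
    for W, b in layers:
        x = layer_forward(W, b, x)
    return list(np.argsort(x)[::-1][:q])
```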
6. The deep learning-based polar code flip decoding method according to claim 3, 4 or 5, wherein the number of nodes of the input layer of the neural network unit equals the information bit length K, the input vector is the decoded soft information $LLR_i$, the number of nodes of the hidden layer is 2 × N, where N is the code length, and the number of nodes of the output layer is K.
7. The deep learning-based polar code flip decoding method according to claim 6, wherein the neural network is trained on the TensorFlow platform with a maximum number of training epochs $T_{epoch}$ = 100.
8. A deep learning-based polar code flip decoding system, using the deep learning-based polar code flip decoding method according to any one of claims 1-7, characterized in that the system comprises a neural network unit and a decoding unit connected to each other; the neural network unit is used to calculate the likely erroneous bit positions, and the decoding unit is used to decode the input path information and to execute the bit-flipping operation.
9. The deep learning-based polar code flip decoding system according to claim 8, wherein the decoding unit comprises a belief propagation unit, a check unit and a flipping unit; the belief propagation unit is connected to the check unit and the flipping unit; data from the neural network unit is sent to the flipping unit within the decoding unit, processed by the flipping unit and passed to the belief propagation unit; the belief propagation unit sends data to the check unit for checking and also sends data back to the neural network unit.
10. The deep learning-based polar code flip decoding system according to claim 9, wherein the neural network unit comprises an input layer, a hidden layer and an output layer.
CN201911042171.XA 2019-10-29 2019-10-29 Polarization code turning decoding method and system based on deep learning Pending CN110798228A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911042171.XA CN110798228A (en) 2019-10-29 2019-10-29 Polarization code turning decoding method and system based on deep learning


Publications (1)

Publication Number Publication Date
CN110798228A true CN110798228A (en) 2020-02-14

Family

ID=69442132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911042171.XA Pending CN110798228A (en) 2019-10-29 2019-10-29 Polarization code turning decoding method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN110798228A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107241106A (en) * 2017-05-24 2017-10-10 东南大学 Polarization code decoding algorithm based on deep learning
CN109286405A (en) * 2018-09-10 2019-01-29 山东科技大学 A kind of progressive bit reversal SC interpretation method of the polarization code of low complex degree
CN109586730A (en) * 2018-12-06 2019-04-05 电子科技大学 It is a kind of based on the polarization code BP decoding algorithm intelligently post-processed
CN110212922A (en) * 2019-06-03 2019-09-06 南京宁麒智能计算芯片研究院有限公司 A kind of polarization code adaptive decoding method and system
CN110278001A (en) * 2019-06-19 2019-09-24 北京交通大学 Polarization code subregion interpretation method based on deep learning


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111200441A (en) * 2020-03-27 2020-05-26 苏州科达科技股份有限公司 Polar code decoding method, device, equipment and readable storage medium
CN111200441B (en) * 2020-03-27 2022-07-15 苏州科达科技股份有限公司 Polar code decoding method, device, equipment and readable storage medium
CN111541517A (en) * 2020-04-17 2020-08-14 北京交通大学 List polarization code propagation decoding method
WO2021208243A1 (en) * 2020-04-17 2021-10-21 北京交通大学 Polar code belief propagation decoding method based on multi-flip bit set
CN112118015A (en) * 2020-09-11 2020-12-22 山东云海国创云计算装备产业创新中心有限公司 Decoding method, device, equipment and storage medium
WO2022071642A1 (en) * 2020-09-29 2022-04-07 엘지전자 주식회사 Method and apparatus for performing channel coding of ue and base station in wireless communication system
CN112332863A (en) * 2020-10-27 2021-02-05 东方红卫星移动通信有限公司 Polar code decoding algorithm, receiving end and system under low signal-to-noise ratio scene of low earth orbit satellite
CN112332863B (en) * 2020-10-27 2023-09-05 东方红卫星移动通信有限公司 Polar code decoding algorithm, receiving end and system under low signal-to-noise ratio scene of low orbit satellite
CN113872610A (en) * 2021-10-08 2021-12-31 华侨大学 LDPC code neural network training and decoding method and system
CN116073958A (en) * 2023-03-14 2023-05-05 南京创芯慧联技术有限公司 Decoding method, decoding device, electronic equipment and storage medium
CN116073958B (en) * 2023-03-14 2023-06-13 南京创芯慧联技术有限公司 Decoding method, decoding device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110798228A (en) Polarization code turning decoding method and system based on deep learning
CN108282264B (en) Polar code decoding method based on bit flipping serial elimination list algorithm
CN109660264B (en) High performance polar code decoding algorithm
CN110278002B (en) Bit-flipping-based polarization code belief propagation list decoding method
CN109286405B (en) Low-complexity polarization code progressive bit flipping SC decoding method
Kim et al. Physical layer communication via deep learning
CN108847848B (en) BP decoding algorithm of polarization code based on information post-processing
CN110336567B (en) Joint iterative decoding method applied to G-LDPC coding cooperation
CN107565978B (en) BP decoding method based on Tanner graph edge scheduling strategy
KR20060032464A (en) Efficient decoding method and apparatus of low density parity code
Teng et al. Convolutional neural network-aided tree-based bit-flipping framework for polar decoder using imitation learning
CN110299921B (en) Model-driven Turbo code deep learning decoding method
He et al. A machine learning based multi-flips successive cancellation decoding scheme of polar codes
CN110730011A (en) Recursive grouping Markov superposition coding method based on partial superposition
Zhou et al. Segmented successive cancellation list polar decoding with tailored CRC
CN113595693A (en) Hybrid automatic repeat request method based on improved effective signal-to-noise ratio
CN111130567B (en) Polarization code belief propagation list decoding method added with noise disturbance and bit inversion
Kestel et al. Polar code decoder exploration framework
Teng et al. Convolutional neural network-aided bit-flipping for belief propagation decoding of polar codes
CN110212922B (en) Polarization code self-adaptive decoding method and system
CN114073024A (en) Convolutional precoding and decoding of polar codes
CN113556135B (en) Polarization code belief propagation bit overturn decoding method based on frozen overturn list
Cai et al. An improved simplified soft cancellation decoding algorithm for polar codes based on frozen bit check
Chen et al. Belief propagation decoding of polar codes using intelligent post-processing
US20210203364A1 (en) Apparatuses and methods for mapping frozen sets between polar codes and product codes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination