CN109525253B - Convolutional code decoding method based on deep learning and integration method - Google Patents

Convolutional code decoding method based on deep learning and integration method

Info

Publication number
CN109525253B
CN109525253B (application CN201811250493.9A)
Authority
CN
China
Prior art keywords
convolutional code
neural network
decoding
information
deep neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811250493.9A
Other languages
Chinese (zh)
Other versions
CN109525253A (en)
Inventor
姜小波
张帆
梁冠强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201811250493.9A priority Critical patent/CN109525253B/en
Publication of CN109525253A publication Critical patent/CN109525253A/en
Application granted granted Critical
Publication of CN109525253B publication Critical patent/CN109525253B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/23 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using convolutional codes, e.g. unit memory codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147 Distances to closest patterns, e.g. nearest neighbour classification

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Error Detection And Correction (AREA)

Abstract

The invention provides a convolutional code decoding method based on deep learning and an integration (ensemble) method. The weak classifiers and their number are set; each weak classifier decodes the convolutional code using a deep neural network or a perceptron, and the depth of the deep neural network is set; finally, the decoding results of the weak classifiers are combined by voting with the integration method to obtain the decoding output. The deep neural network is a fully-connected neural network, a convolutional neural network, a GAN or an LSTM. The method decodes the convolutional code with a deep learning algorithm and an integration method, recovering the transmitted information bit sequence from the noisy soft information sequence.

Description

Convolutional code decoding method based on deep learning and integration method
Technical Field
The invention relates to the technical field of electronic communication, and in particular to a convolutional code decoding method based on deep learning and an integration method.
Background
To improve the reliability of signal transmission over a channel, various error-correcting code techniques are widely used in digital communication. The convolutional code is a widely used coding method with good performance, employed in many data transmission systems and in satellite communication systems in particular; Viterbi decoding is the classical decoding method for convolutional codes.
Convolutional codes were proposed by Elias in 1955 and are non-block codes. They differ from block codes in that, for a block code, the n-k check symbols of a group depend only on the k information symbols of that group and are unrelated to other groups. In convolutional encoding and decoding, the n-k check symbols of a group depend not only on the k information symbols of that group but also on the information groups input to the encoder at earlier times. Precisely because convolutional encoding exploits the correlation between groups, and because k and n are small, it has been shown in both theory and practice that, under the same code rate and equipment complexity, the performance of convolutional codes is at least no worse than that of block codes.
In conventional Viterbi decoding of convolutional codes, there is room to improve the trade-off between decoding efficiency and decoding performance: with a fixed decoding window, Viterbi decoding obtains the optimal path by computing Hamming distances, which greatly reduces decoding efficiency.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art and provides a convolutional code decoding method based on deep learning and an integration method.
To achieve this purpose, the invention is realized by the following technical scheme: a convolutional code decoding method based on deep learning and an integration method, characterized in that the weak classifiers and their number are set; each weak classifier decodes the convolutional code using a deep neural network or a perceptron, and the depth of the deep neural network is set; finally, the decoding results of the weak classifiers are combined by voting with the integration method to obtain the decoding output; the deep neural network is a fully-connected neural network, a convolutional neural network, a GAN or an LSTM.
Decoding the convolutional code with a deep neural network in each weak classifier, setting the depth of the deep neural network, and finally voting on the weak classifiers' decoding results with the integration method to obtain the decoding output means the following: within a weak classifier, a deep neural network model is established and the semi-infinite convolutional code sequence is segmented into a training set that matches the deep neural network structure; after the deep neural network model is trained, the segmented noisy convolutional codes are decoded in different dimensions; finally, voting by the integration method converts these results into the decoded output of all code words.
The method comprises the following steps:
firstly, determining model parameters of a deep neural network in a weak classifier, and establishing a deep neural network model;
secondly, establishing a data sample set of convolutional code decoding;
thirdly, training a deep neural network model by using the data sample set in the second step and adopting a softmax classification mode and a batch gradient descent method;
fourthly, inputting the convolutional code to be decoded into the deep neural network model obtained in the third step for decoding; during decoding, multiple weak classifiers are obtained that classify and decode, in different dimensions, the information bits corresponding to the convolutional code information code segments to be decoded, and the integration method votes on the decoded outputs of each information bit to form a strong classifier that yields the final decoding, completing the convolutional code decoding.
In the first step, determining the model parameters of the deep neural network and establishing the deep neural network model means: for any (n₀, k₀, m) convolutional code, the output layer dimension of the deep neural network model is set to n and the input layer dimension to n₀×n/k₀; the activation function of the hidden layer is set to f(x) = relu(x); the deep neural network model is then established from the output layer dimension, the input layer dimension and the hidden layer activation function.
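As a concrete illustration of this step, the following is a minimal sketch, not the patent's reference implementation. It assumes PyTorch; the hidden size of 512 and the inclusion of the k₀×m state bits in the input follow the first embodiment and are assumptions rather than requirements:

```python
# Hedged sketch of the weak-classifier network described above (not the authors' code).
import torch
import torch.nn as nn

def build_decoder_model(n0: int = 2, k0: int = 1, m: int = 2, n: int = 8, hidden: int = 512) -> nn.Module:
    state_bits = k0 * m                     # encoder start state, e.g. "00" for the (2, 1, 2) code
    in_dim = state_bits + n0 * n // k0      # 2 + 16 = 18 in the first embodiment
    return nn.Sequential(
        nn.Linear(in_dim, hidden),
        nn.ReLU(),                          # hidden activation f(x) = relu(x)
        nn.Linear(hidden, n),               # one output score per information bit in the window
    )

model = build_decoder_model()
print(model)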
The establishing of the data sample set of the convolutional code decoding refers to:
first, an information sequence of length L is randomly generated; after (n₀, k₀, m) convolutional encoding and the addition of Gaussian white noise, a noisy convolutional code information sequence of length n₀×L/k₀ is obtained;
secondly, 00 is prepended to the noisy convolutional code information sequence as state bits, and the sequence is segmented according to the input dimension of the deep neural network model of the first step to form noisy convolutional code information code fields matching the size of the deep neural network model; for any (n₀, k₀, m) convolutional code the start state is 00;
and finally, samples are constructed from the noisy convolutional code information code fields, and a data sample set conforming to the deep neural network model is generated in batches.
Constructing samples from the noisy convolutional code information code fields and generating, in batches, a data sample set conforming to the deep neural network model proceeds as follows (a sketch of this construction is given after the list):
(1) in a noisy convolutional code information code field, the first k₀×m bits are the state bits of the original codeword and the last n₀×n/k₀ bits are the noisy convolutional code information code field proper; this forms the first training sample;
(2) with the sample-acquisition information-bit window size set to N, the second training sample is taken by sliding the sample window back one bit along the sequence: the second 0 of the previous code field's state bits and the first bit of that code field serve as the state bits, and the codeword bits newly entering the window after sliding are appended to form the second training sample;
(3) by analogy, a data sample set conforming to the deep neural network model is generated in batches from the entire noisy convolutional code information code field and the corresponding information bits.
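The window bookkeeping above is easiest to see in code. The sketch below is one possible reading, assuming the window advances one information-bit position (n₀/k₀ code symbols) per sample and that the k₀×m symbols immediately before the window, zero-padded at the start, serve as state bits; the helper names and the random placeholder data are illustrative, not from the patent:

```python
# Hedged sketch of the sliding-window sample construction (one possible reading of the method).
import numpy as np

def build_samples(noisy_code: np.ndarray, info_bits: np.ndarray,
                  n0: int = 2, k0: int = 1, m: int = 2, n: int = 8):
    """noisy_code: received soft values of length n0*L/k0; info_bits: the L transmitted bits."""
    step = n0 // k0                      # code symbols per information bit
    win = n * step                       # code symbols covered by one window (16 here)
    state_len = k0 * m                   # number of state symbols prepended (2 here)
    padded = np.concatenate([np.zeros(state_len), noisy_code])   # "00" start state
    samples, labels = [], []
    for i in range(0, len(noisy_code) - win + 1, step):
        x = padded[i:i + state_len + win]        # state bits + code field
        y = info_bits[i // step: i // step + n]  # the n information bits in this window
        samples.append(x)
        labels.append(y)
    return np.stack(samples), np.stack(labels)

# Tiny usage example with random placeholders (not real encoder output):
rng = np.random.default_rng(0)
info = rng.integers(0, 2, size=40)
code = rng.normal(size=2 * info.size)            # stands in for the noisy codeword stream
X, Y = build_samples(code, info)
print(X.shape, Y.shape)                          # (33, 18) (33, 8)
```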
In the third step, during training of the deep neural network model, the optimal weights are obtained by updating the weights through the two processes of feedforward computation and backpropagation, so that the model acquires classification capability.
In the fourth step, inputting the convolutional code to be decoded into the deep neural network model obtained in the third step, obtaining multiple weak classifiers that classify and decode the information bits of the code segments in different dimensions, and voting on their decoded outputs with the integration method to form a strong classifier that yields the final decoding, means the following (a sketch of the procedure follows the list):
(1) the sequence to be decoded is obtained by encoding and noise addition as a noisy convolutional code information sequence; zero information bits are appended at the end of the information bits, and the result is input into the deep neural network model;
(2) the initial state bits are set to 00, and the first information bit is decoded;
(3) the state of the noisy convolutional code information sequence is updated and the window slides back one bit along the sequence, producing one decoding output per slide; once the number of slides over the noisy convolutional code information sequence reaches the output layer dimension n of the deep neural network model, n weak classifiers covering the information bit are obtained; these n weak classifiers classify the information bit in different dimensions and produce n decoding outputs; the integration method then votes on the information bit to form a strong classifier: the n decoding outputs are tallied, and the majority value is taken as the decoding result of that information bit;
(4) the decoding step is repeated for the subsequent noisy convolutional code information sequence to complete the convolutional code decoding.
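A compact sketch of this sliding-window decoding with majority voting is shown below. It assumes the PyTorch model and the windowing convention from the earlier sketches (window advancing one information-bit position per step), which is one reading of the procedure rather than the patent's reference implementation:

```python
# Hedged sketch: ensemble decoding by sliding a window and majority-voting per information bit.
import numpy as np
import torch

def decode_with_voting(model, noisy_code: np.ndarray,
                       n0: int = 2, k0: int = 1, m: int = 2, n: int = 8) -> np.ndarray:
    step = n0 // k0
    win = n * step
    state_len = k0 * m
    padded = np.concatenate([np.zeros(state_len), noisy_code])
    num_info = len(noisy_code) // step
    votes = np.zeros((num_info, 2))                       # vote counters for bit values 0 and 1
    model.eval()
    with torch.no_grad():
        for i in range(0, len(noisy_code) - win + 1, step):
            x = torch.tensor(padded[i:i + state_len + win], dtype=torch.float32)
            bits = (model(x) > 0).long().numpy()          # n weak decisions for this window
            for j in range(n):                            # j-th output refers to info bit i//step + j
                votes[i // step + j, bits[j]] += 1
    return votes.argmax(axis=1)                           # majority vote per information bit

# Usage (with the untrained model from the earlier sketch, so the output is not meaningful yet):
# decoded = decode_with_voting(model, code)
```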
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The convolutional code decoding method based on deep learning and the integration method uses a deep learning algorithm together with the integration method to decode the convolutional code, recovering the transmitted information bit sequence from the noisy soft information sequence.
2. According to the invention, a plurality of weak classifiers are obtained through a deep neural network model and are integrated into a strong classifier, so that the decoding performance of the deep neural network model is greatly improved.
Drawings
FIG. 1 is a flowchart of a convolutional code decoding method based on deep learning and integration method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a deep neural network model according to a first embodiment of the present invention;
FIG. 3 is a diagram of the decoding process of the weak classifier of a method of an embodiment of the present invention;
FIG. 4 is a graph comparing decoding performance with Viterbi decoding performance according to one embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Example one
The present invention is described in detail by taking a (2, 1, 2) convolutional code as an embodiment of the convolutional code decoding method based on deep learning and the integration method. The code is encoded such that the start state of each segment can be represented by the two input bits preceding that segment, and the start state at the head of the encoding is represented by 00. The generator is G(D) = [1 + D², 1 + D²].
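For reference, a small sketch of such an encoder with a BPSK-plus-AWGN channel is given below. The generator taps are parameters; the pair below matches the polynomials as printed above, and the widely used taps [1+D+D², 1+D²] can be substituted if a different generator is intended. The function names, the BPSK mapping and the SNR convention are illustrative assumptions, not taken from the patent:

```python
# Hedged sketch of a rate-1/2, memory-2 convolutional encoder with BPSK + AWGN (illustrative only).
import numpy as np

# Generator taps over [1, D, D^2]; these follow the polynomials printed above.
G_TAPS = np.array([[1, 0, 1],    # 1 + D^2
                   [1, 0, 1]])   # 1 + D^2

def conv_encode(info_bits: np.ndarray, taps: np.ndarray = G_TAPS) -> np.ndarray:
    mem = np.zeros(taps.shape[1] - 1, dtype=int)          # start state 00
    out = []
    for b in info_bits:
        reg = np.concatenate(([b], mem))                  # current bit followed by the state
        out.extend((taps @ reg) % 2)                      # one output bit per generator row
        mem = reg[:-1]                                    # shift-register update
    return np.array(out)

def awgn(codeword: np.ndarray, snr_db: float, rng=np.random.default_rng(0)) -> np.ndarray:
    symbols = 1.0 - 2.0 * codeword                        # BPSK: 0 -> +1, 1 -> -1
    sigma = np.sqrt(1.0 / (2.0 * 10 ** (snr_db / 10.0)))  # one common noise-std convention
    return symbols + rng.normal(scale=sigma, size=symbols.shape)

info = np.random.default_rng(1).integers(0, 2, size=1000)
noisy = awgn(conv_encode(info), snr_db=4.0)               # an SNR within the 1-7 dB range used below
print(noisy.shape)                                        # (2000,)
```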
As shown in FIGS. 1 to 4, the convolutional code decoding method based on deep learning and the integration method of the present invention is as follows: the weak classifiers and their number are set; each weak classifier decodes the convolutional code using a deep neural network, and the depth of the deep neural network is set; finally, the decoding results of the weak classifiers are combined by voting with the integration method to obtain the decoding output; the deep neural network is a fully-connected neural network, a convolutional neural network, a GAN or an LSTM.
Specifically, a deep neural network model is built and the semi-infinite convolutional code sequence is segmented into a training set matching the deep neural network structure; after the deep neural network model is trained, the segmented noisy convolutional codes are decoded in different dimensions, and finally the segment decodings are voted on by the integration method and converted into the decoded output of all code words.
The method comprises the following steps:
firstly, determining model parameters of a deep neural network in a weak classifier, and establishing a deep neural network model;
secondly, establishing a data sample set of convolutional code decoding;
thirdly, training a deep neural network model by using the data sample set in the second step and adopting a softmax classification mode and a batch gradient descent method;
fourthly, inputting the convolutional code to be decoded into the deep neural network model obtained in the third step for decoding; during decoding, multiple weak classifiers are obtained that classify, in different dimensions, the information bits corresponding to the convolutional code information code segments to be decoded, and the integration method votes on the information bits to form a strong classifier whose output completes the convolutional code decoding.
The method comprises the following specific steps:
(1) First, the model parameters of the deep neural network are determined and the deep neural network model is established. The output layer dimension of the deep neural network may be set to 8, and the corresponding input layer dimension is 8 × 2 + 2 = 18, where the extra 2 bits carry the start state of the short convolutional code segment. Since the (2, 1, 2) convolutional code has a relatively simple structure, one hidden layer is sufficient; the hidden layer size is set to 512 and its activation function to f(x) = relu(x). The deep neural network model is then established from the output layer dimension, the input layer dimension and the hidden layer activation function.
(2) And carrying out sample construction on the noisy convolutional code information code word segment, and generating a data sample set conforming to the deep neural network model in batches.
An information sequence of length L is randomly generated; after (2, 1, 2) convolutional encoding, Gaussian white noise in the range of 1 dB to 7 dB is added to obtain a noisy convolutional code information sequence of length 2 × L. L is taken as 1000; for convenience of decoding, the last 7 bits may be set to zero information bits so that the decoding process ends once the eighth-to-last bit is decoded. When constructing samples, the noisy convolutional code information sequence is segmented according to the input dimension of the deep neural network model from the first step, forming noisy convolutional code information code fields matching the size of the deep neural network model. With an information-bit sliding window of size 8, encoding followed by Gaussian noise yields a 16-bit codeword sample. The first training sample starts with state bits 00 followed by a 2 × 8 noisy convolutional code information code field.
When the second training sample is taken, the window slides back one bit along the sequence over the noisy convolutional code information code field: the second 0 of the previous code field's state bits and the first bit of that code field serve as the state bits, the codeword bits newly entering the window after sliding serve as the input, and the label is the one-hot form of the code field before encoding, giving the second training sample. By analogy, the information sequence of length L is fully converted into a training sample set for the neural network, where each input consists of the state bits plus the 16-bit code field and each output is the one-hot (size-8) decoding of the code field.
(3) After the data sample set is obtained, the hidden layer uses f(x) = relu(x) as its activation function, and the deep neural network model is trained with softmax classification and batch gradient descent. During training, the optimal weights are obtained by updating the weights through feedforward computation and backpropagation, giving the model classification capability. These steps constitute one complete training pass; as training proceeds, the error keeps decreasing, meaning the deep neural network gradually learns to decode the noisy convolutional code information sequence. Training is repeated until the accuracy and error of the deep neural network are stable, and then stopped. Here, 2000 training iterations were used.
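A minimal training-loop sketch in the same spirit follows. The patent describes softmax classification over one-hot labels; since the output layer sketched earlier has one unit per information bit, this sketch makes the simplifying assumption of per-bit binary cross-entropy with full-batch gradient descent, and the learning rate is illustrative:

```python
# Hedged training sketch (simplifying assumption: per-bit binary cross-entropy instead of the
# patent's softmax/one-hot formulation; full-batch gradient descent as described above).
import torch
import torch.nn as nn

# X, Y come from build_samples() in the earlier sketch; model from build_decoder_model().
X_t = torch.tensor(X, dtype=torch.float32)
Y_t = torch.tensor(Y, dtype=torch.float32)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # batch gradient descent

for it in range(2000):                                    # 2000 iterations, as in the embodiment
    optimizer.zero_grad()
    logits = model(X_t)                                   # feedforward computation
    loss = criterion(logits, Y_t)
    loss.backward()                                       # backpropagation
    optimizer.step()                                      # weight update
    if it % 500 == 0:
        acc = ((logits > 0) == (Y_t > 0.5)).float().mean().item()
        print(f"iter {it}: loss {loss.item():.4f}, bit accuracy {acc:.3f}")
```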
(4) The convolutional code to be decoded is input into the trained deep neural network model to complete decoding: an information sequence is randomly generated, encoded with the (2, 1, 2) convolutional code, and corrupted with Gaussian white noise in the range of 1 dB to 7 dB; the resulting noisy soft information is fed into the neural network according to the size of its input layer. The initial state bits are 00. Taking the first information bit of the convolutional code as an example, decoding the first sample yields a decoding result of length 8, and the last bit of that output is the first-time decoding result of the first information bit. Taking the second sample, i.e. sliding one bit over the noisy codeword, yields another output, in which the first information bit corresponds to the second-to-last output position of the second codeword. In this example the sliding window size is 8, and when the number of slides reaches the output layer size of 8, eight weak classifiers covering the information bit are obtained. These 8 weak classifiers classify the information bit in different dimensions and give 8 decoding outputs; the integration method then votes on the information bit to form a strong classifier: the 8 decoding outputs of the 8 weak classifiers are tallied, and the majority value is taken as the decoding result of that information bit, yielding better decoding performance for that bit. The whole process is shown in FIG. 3: information bits of the same color occupy the same position on the convolutional code, and each column is the decoded output of one sample, so the 8 decoding results are integrated accordingly. This step is repeated for subsequent codewords, completing the full decoding of the convolutional code.
Example two
The convolutional code decoding method based on deep learning and the integration method of this embodiment is as follows: the weak classifiers and their number are set, with each weak classifier decoding the convolutional code using a perceptron, and the depth of the deep neural network is set; finally, the decoding results of the weak classifiers are combined by voting with the integration method to obtain the decoding output; the deep neural network is a fully-connected neural network, a convolutional neural network, a GAN or an LSTM.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to them; any other changes, modifications, substitutions, combinations and simplifications that do not depart from the spirit and principle of the present invention shall be construed as equivalents and are intended to fall within the scope of the present invention.

Claims (4)

1. A convolutional code decoding method based on deep learning and an integration method, characterized in that the method comprises the following steps:
firstly, setting the weak classifiers and their number; determining the model parameters of the deep neural network in a weak classifier, and establishing the deep neural network model; the deep neural network is a fully-connected neural network, a convolutional neural network, a GAN or an LSTM;
secondly, establishing a data sample set of convolutional code decoding;
thirdly, training a deep neural network model by using the data sample set in the second step and adopting a softmax classification mode and a batch gradient descent method;
fourthly, inputting the convolutional code to be decoded into the deep neural network model obtained in the third step for decoding; during decoding, multiple weak classifiers are obtained that classify and decode, in different dimensions, the information bits corresponding to the convolutional code information code segments to be decoded, and the integration method votes on the decoded outputs of each information bit to form a strong classifier that yields the final decoding, completing the convolutional code decoding;
the establishing of the data sample set of the convolutional code decoding refers to:
first, an information sequence of length L is randomly generated; after (n₀, k₀, m) convolutional encoding and the addition of Gaussian white noise, a noisy convolutional code information sequence of length n₀×L/k₀ is obtained;
secondly, 00 is prepended to the noisy convolutional code information sequence as state bits, and the sequence is segmented according to the input dimension of the deep neural network model of the first step to form noisy convolutional code information code fields matching the size of the deep neural network model; for any (n₀, k₀, m) convolutional code the start state is 00;
finally, samples are constructed from the noisy convolutional code information code fields, and a data sample set conforming to the deep neural network model is generated in batches;
the step of carrying out sample construction on the noisy convolutional code information code segment and generating a data sample set conforming to the deep neural network model in batches is as follows:
(1) in the information code field of the noisy convolutional code, front k0X m bits are the status bits of the original codeword, the last n0×n/k0The bits are information code fields of the noisy convolutional codes and are used as first training samples;
(2) setting the size of a sample acquisition information bit window to be N, when a second training sample is taken, sliding a sample window backwards by one bit along the sequence direction for a code field of noisy convolutional code information, taking a second 0 of a state bit of a previous code field and a first bit of the previous code field as state bits, and adding code bits added in the sample window after sliding to serve as a second training sample;
(3) and in the same way, generating a data sample set conforming to the deep neural network model in batch according to the information code field of the full-section noisy convolutional code and the corresponding information bits.
2. The convolutional code decoding method based on deep learning and an integration method according to claim 1, wherein in the first step, determining the model parameters of the deep neural network and establishing the deep neural network model means: for any (n₀, k₀, m) convolutional code, the output layer dimension of the deep neural network model is set to n and the input layer dimension to n₀×n/k₀; the activation function of the hidden layer is set to f(x) = relu(x); and the deep neural network model is established from the output layer dimension, the input layer dimension and the hidden layer activation function.
3. The convolutional code decoding method based on deep learning and an integration method according to claim 1, wherein in the third step, during training of the deep neural network model, the optimal weights are obtained by updating the weights through the two processes of feedforward computation and backpropagation, so that the model acquires classification capability.
4. The convolutional code decoding method based on deep learning and an integration method according to claim 2, wherein in the fourth step, inputting the convolutional code to be decoded into the deep neural network model obtained in the third step for decoding, obtaining multiple weak classifiers that classify and decode the information bits of the code segments in different dimensions, and voting on their decoded outputs with the integration method to form a strong classifier that yields the final decoding and completes the convolutional code decoding, means the following:
(1) the sequence to be decoded is obtained by encoding and noise addition as a noisy convolutional code information sequence; zero information bits are appended at the end of the information bits, and the result is input into the deep neural network model;
(2) the initial state bits are set to 00, and the first information bit is decoded;
(3) the state of the noisy convolutional code information sequence is updated and the window slides back one bit along the sequence, producing one decoding output per slide; once the number of slides over the noisy convolutional code information sequence reaches the output layer dimension n of the deep neural network model, n weak classifiers covering the information bits are obtained; these n weak classifiers classify the information bits in different dimensions and produce n decoding outputs; the integration method then votes on the information bit to form a strong classifier: the n decoding outputs are tallied, and the majority value is taken as the decoding result of that information bit;
(4) the decoding step is repeated for the subsequent noisy convolutional code information sequence to complete the convolutional code decoding.
CN201811250493.9A 2018-10-25 2018-10-25 Convolutional code decoding method based on deep learning and integration method Active CN109525253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811250493.9A CN109525253B (en) 2018-10-25 2018-10-25 Convolutional code decoding method based on deep learning and integration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811250493.9A CN109525253B (en) 2018-10-25 2018-10-25 Convolutional code decoding method based on deep learning and integration method

Publications (2)

Publication Number Publication Date
CN109525253A CN109525253A (en) 2019-03-26
CN109525253B (en) 2020-10-27

Family

ID=65774135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811250493.9A Active CN109525253B (en) 2018-10-25 2018-10-25 Convolutional code decoding method based on deep learning and integration method

Country Status (1)

Country Link
CN (1) CN109525253B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110739977B (en) * 2019-10-30 2023-03-21 华南理工大学 BCH code decoding method based on deep learning
CN110912566B (en) * 2019-11-28 2023-09-29 福建江夏学院 Digital audio broadcasting system channel decoding method based on sliding window function
CN112953565B (en) * 2021-01-19 2022-06-14 华南理工大学 Return-to-zero convolutional code decoding method and system based on convolutional neural network
CN115424262A (en) * 2022-08-04 2022-12-02 暨南大学 Method for optimizing zero sample learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6374385B1 (en) * 1998-05-26 2002-04-16 Nokia Mobile Phones Limited Method and arrangement for implementing convolutional decoding
CN106571831A (en) * 2016-10-28 2017-04-19 华南理工大学 LDPC hard decision decoding method based on depth learning and decoder
CN107194433A (en) * 2017-06-14 2017-09-22 电子科技大学 A kind of Radar range profile's target identification method based on depth autoencoder network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on LDPC Decoding Algorithms Based on Deep Learning; Li Jie; China Master's Theses Full-text Database; 2018-06-15 (No. 06); pp. 38-46 *

Also Published As

Publication number Publication date
CN109525253A (en) 2019-03-26

Similar Documents

Publication Publication Date Title
CN109525253B (en) Convolutional code decoding method based on deep learning and integration method
CN109525254B (en) Convolutional code soft decision decoding method based on deep learning
CN106571831B (en) LDPC hard decision decoding method and decoder based on deep learning
CN1132320C (en) Optimal soft-output decoder for tail-biting trellis codes
CN109586730B (en) Polarization code BP decoding algorithm based on intelligent post-processing
CN110278002A (en) Polarization code belief propagation list decoding method based on bit reversal
CN1430815A (en) TURBO decoder with decision feedback equalization
CN109728824B (en) LDPC code iterative decoding method based on deep learning
CN1421085A (en) Method and apparatus for combined soft-decision based on interference cancellation and decoding
Wen et al. Utilizing soft information in decoding of variable length codes
CN105846827B (en) Iterative joint message source and channel interpretation method based on arithmetic code and low density parity check code
CN109547032B (en) Confidence propagation LDPC decoding method based on deep learning
CN1333969A (en) Systems and methods for receiving modulated signal containing encoded and unencoded bits using multi-pass demodulation
CN104242957A (en) Decoding processing method and decoder
CN110299921B (en) Model-driven Turbo code deep learning decoding method
US20070033478A1 (en) System and method for blind transport format detection with cyclic redundancy check
KR100515472B1 (en) Channel coding and decoding method and multiple-antenna communication systems performing the same
Teng et al. Convolutional neural network-aided bit-flipping for belief propagation decoding of polar codes
EP2174422B1 (en) Decoding of recursive convolutional codes by means of a decoder for non-recursive convolutional codes
CN111130567B (en) Polarization code belief propagation list decoding method added with noise disturbance and bit inversion
CN112953565B (en) Return-to-zero convolutional code decoding method and system based on convolutional neural network
CN101662294A (en) Decoding device based on MAP decoder and decoding method thereof
Haeb-Umbach et al. Soft features for improved distributed speech recognition over wireless networks.
CN112929036A (en) Confidence propagation dynamic flip decoding method based on log-likelihood ratio
CN112332866A (en) Method for identifying cascade code parameters based on DVB-S and DVB-S2 signals

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant