CN114598332A - Convolutional code parameter blind identification method suitable for 1/n code rate of AWGN channel - Google Patents
- Publication number
- CN114598332A (application CN202210287727.7A)
- Authority
- CN
- China
- Prior art keywords
- parameter
- convolutional code
- data
- updating
- gradient
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
- H03M13/23—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using convolutional codes, e.g. unit memory codes
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/01—Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/015—Simulation or testing of codes, e.g. bit error rate [BER] measurements
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/65—Purpose and implementation aspects
- H03M13/6508—Flexibility, adaptability, parametrability and configurability of the implementation
- H03M13/6516—Support of multiple code parameters, e.g. generalized Reed-Solomon decoder for a variety of generator polynomials or Galois fields
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/0001—Systems modifying transmission characteristics according to link quality, e.g. power backoff
- H04L1/0036—Systems modifying transmission characteristics according to link quality, e.g. power backoff arrangements specific to the receiver
- H04L1/0038—Blind format detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/004—Arrangements for detecting or preventing errors in the information received by using forward error control
- H04L1/0056—Systems characterized by the type of code used
- H04L1/0059—Convolutional codes
Abstract
The invention discloses a blind identification method for the parameters of 1/n-code-rate convolutional codes suitable for an AWGN channel, which mainly solves the problem that the identification performance of the prior art degrades when the constraint length of the convolutional code increases and the signal-to-noise ratio is low. The method comprises the following specific steps: (1) calculating the posterior probability of each bit using the received data; (2) constructing a cross entropy loss function using the relation between the convolutional code codewords and the generator polynomial coefficients; (3) on the basis of the ADAM (adaptive moment estimation) algorithm, introducing a segmentation and multi-frame mechanism to update the parameters; (4) making a hard decision on the updated parameters to obtain the generator polynomial coefficients. Compared with the prior art, the method improves the identification accuracy, reduces the time complexity, and can be used for identifying the generator polynomial coefficients of 1/n-code-rate convolutional codes under an AWGN channel.
Description
Technical Field
The invention belongs to the technical field of communication, and further relates to a blind identification method for the parameters of 1/n-code-rate convolutional codes in the field of channel-coding blind identification, which can be used for blind identification of the generator polynomial coefficients of convolutional codes under an AWGN (Additive White Gaussian Noise) channel.
Background
The channel coding technology is an important component in a communication system and is a key means for improving the reliability of communication transmission. In some applications, however, the encoding parameters are often unknown to the receiving party, and in order to decode the transmitted data, encoding parameter identification techniques need to be employed. Therefore, channel coding identification is considered as an important technology in digital communication. Convolutional codes are one of the most commonly used error correcting codes at present, and blind identification thereof is of great interest in cognitive radio and uncooperative environments. However, some current convolutional code identification techniques are based on algebraic algorithms, which only use hard decision information of received data, and thus are only suitable for noise-free environments or low-noise environments, and when noise is enhanced, the identification performance of these algorithms is greatly reduced.
The patent of Xidian University, "Blind identification method for the coding parameters of convolutional codes with arbitrary code rate under high bit-error rate" (application date: June 28, 2016; application number: 201610496231.5; publication number: CN106059712B), discloses the following steps. First, the obtained convolutional code bit stream V_s is arranged into p × q code-rate analysis matrices C_q, the number of linearly dependent columns c_i of each matrix is analyzed, and an effective column set N is determined. Second, the greatest common divisor of all elements in the column set N is taken to obtain the code length n of the convolutional code. Third, according to the code length n and the known convolutional code bit stream V_s, the information bit length and the convolutional code check sequence are identified. Fourth, according to the code length n, the information bit length k and the check sequence P_{n-k}, the register length m and the generator matrix G of the convolutional code are identified. The method has the following defect: since it uses only the hard-decision information of the received data, it is suitable only for low-noise environments; when the noise increases, its identification performance degrades greatly, and it is not suitable for the AWGN channel.
Yu et al., in the published paper "A Least Square Method for Parameter Estimation of RSC Sub-Codes of Turbo Codes" (2014), propose a method for identifying the encoder coefficients of the RSC (Recursive Systematic Convolutional) sub-codes of Turbo codes under an AWGN channel. The method uses the soft information of the received data and the relation between the generator polynomial coefficients and the codewords of the RSC encoder to construct an LSM (Least Square Method) error function, then searches for the optimal solution over the global range, solving for the polynomial coefficients through continuous iteration. However, the performance of this method drops sharply when the encoder constraint length increases, and its simple search method easily falls into saddle points during the search, resulting in poor identification performance at low SNR.
Disclosure of Invention
The invention aims to provide, in view of the above defects of the prior art, a blind identification method for the parameters of 1/n-code-rate convolutional codes suitable for an AWGN channel, which retains good identification performance at medium-to-low signal-to-noise ratios and when the constraint length of the convolutional code increases, and which reduces the time complexity compared with the prior art.
In order to achieve the above purpose, the blind ADAM-based convolutional code parameter identification method of the present invention specifically comprises the following steps:
(1) calculating a posterior probability of each bit using the received data;
receiving the data r of frame length L transmitted over the AWGN channel, taken before the demodulator, and calculating the posterior probability of each bit according to the noise variance σ²;
(2) defining the generator polynomial coefficients of the convolutional code as the unknown parameter q, and constructing a cross entropy loss function from the relation between the convolutional code codewords and the generator polynomial coefficients;
(3) updating the unknown quantity parameter q by adopting an ADAM algorithm;
(3a) initializing the unknown parameter q to a random value between 0 and 1, and defining the update step size α;
(3b) differentiating the loss function with respect to q to obtain the gradient g, and further calculating the first-moment estimate u and the second-moment estimate v of the gradient;
(3c) completing the update of the unknown parameter q with the update step size α using the first-moment and second-moment estimates of the gradient;
(4) when the unknown quantity parameter q is updated in the step (3), a segmented and multi-frame mechanism is adopted;
(4a) defining the segment size SegSize, dividing the data of frame length L into L/SegSize segments, and performing the update of q in step (3) on each data segment;
(4b) performing the update of q in (4a) on each of the received M frames of data, cycling M times in total;
(5) defining the number of iterations iter_num, and cyclically executing the operation of step (4) iter_num times to complete the update of the parameter q;
(6) making a hard decision on the parameter q updated in step (5): if 0 ≤ q ≤ 0.5, q is decided as 0; if 0.5 < q ≤ 1, q is decided as 1, completing the estimation of the generator polynomial coefficients of the convolutional code.
Compared with the prior art, the invention has the following advantages:
first, the present invention constructs a cross entropy loss function using the relationship between the convolutional code codeword and the generator polynomial coefficient, and can better utilize the soft information of the received data.
Secondly, the ADAM algorithm is adopted during parameter updating, the first moment estimation and the second moment estimation of the gradient are utilized to update the parameters, different parameters have different self-adaptive learning rates, and the whole updating process has faster convergence and lower time complexity.
Thirdly, because the invention introduces a segmentation mechanism into the parameter-updating process, with the gradient computed on each data segment rather than on the whole frame of data, randomness is introduced into the gradient update; this avoids the performance degradation caused by falling into saddle points during parameter updating, thereby improving the identification performance.
Fourthly, because the multi-frame mechanism is introduced in the parameter updating process, and the gradient updating is based on multi-frame data instead of one frame, the method has good identification performance when the signal-to-noise ratio is low and the constraint length of the convolutional code is increased.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a simulation of the present invention on a convolutional code (2,1,5), with a generator polynomial of (53, 75);
FIG. 3 is a simulation of the present invention on a convolutional code (2,1,7), with a generator polynomial of (247,371);
FIG. 4 is a contour plot during an update of the cross-entropy loss function of the present invention;
Detailed Description
Embodiments and effects of the present invention will be described in detail below with reference to the accompanying drawings.
The invention discloses a blind identification method for the parameters of 1/n-code-rate convolutional codes, which is used for blind identification of the generator polynomial coefficients of convolutional codes under an AWGN channel.
Referring to fig. 1, the implementation steps of this example are as follows:
the coded codeword is c, and the AWGN channel noise variance is sigma2And the data after channel noise transmission is r, the posterior probability of each bit is expressed as:
Define the generator polynomial coefficients of the convolutional code as the unknown parameter q, and construct a cross entropy loss function C(q) from the relation between the convolutional code codewords and the generator polynomial coefficients, where m is the register length (memory) of the convolutional code, n is the number of output bits per information bit, and L is the length of a received data frame.
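The explicit formula for C(q) is not reproduced in this text. One plausible relaxed formulation for the rate-1/2 case — our assumption, built from the dual relation g1(D)c0(D) + g0(D)c1(D) = 0 and the soft-XOR identity — is the following sketch:

```python
import numpy as np

def cross_entropy_loss(q, p):
    """Relaxed cross-entropy loss C(q) for a rate-1/2 convolutional code
    (a sketch; the patent's exact expression may differ).

    q : (2, m+1) relaxed generator coefficients in [0, 1], q[0] ~ g0, q[1] ~ g1
    p : (2, L)   per-bit posteriors P(c_i[t] = 1 | r)

    Each parity check XORs the taps selected by q; the soft XOR of
    independent bits with P(bit = 1) = a_i equals (1 - prod(1 - 2 a_i)) / 2,
    and the loss is the cross-entropy against the target value 0.
    """
    m = q.shape[1] - 1
    L = p.shape[1]
    loss = 0.0
    for t in range(m, L):
        prod = 1.0
        for j in range(m + 1):
            prod *= 1.0 - 2.0 * q[1, j] * p[0, t - j]  # g1 taps on stream 0
            prod *= 1.0 - 2.0 * q[0, j] * p[1, t - j]  # g0 taps on stream 1
        s = (1.0 - prod) / 2.0                 # P(check bit = 1)
        loss += -np.log(max(1.0 - s, 1e-12))   # cross-entropy vs. target 0
    return loss / (L - m)
```

A caveat visible in this form: the all-zero q also yields zero loss, which is exactly why step 3.6) of the description restarts the iteration when q collapses to the all-zero solution.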
and 3, introducing a segmentation and multiframe mechanism on the basis of the ADAM algorithm to update the parameters.
3.1) initialize the unknown parameter q to a random value between 0 and 1, and define the update step size α and the number of iterations iter_num;
3.2) define the segment size SegSize, and divide the data of frame length L into L/SegSize data segments;
3.2.1) on each data segment, differentiate the cross entropy loss function C(q) with respect to q to obtain the gradient g[k] of the k-th iteration;
3.2.2) calculate the first-moment estimate u[k] and the second-moment estimate v[k] of the gradient:
u[k] = β1·u[k−1] + (1 − β1)·g[k]
v[k] = β2·v[k−1] + (1 − β2)·g²[k];
where β1 and β2 are the exponential decay rates of the moment estimates, typically β1 = 0.9 and β2 = 0.999;
3.2.3) correct the bias of the first-moment estimate u[k] and the second-moment estimate v[k] of the gradient, i.e. û[k] = u[k]/(1 − β1^k) and v̂[k] = v[k]/(1 − β2^k);
3.2.4) update the unknown parameter q using the bias-corrected first-moment and second-moment estimates of the gradient, i.e. q[k+1] = q[k] − α·û[k]/(√v̂[k] + ε);
where ε is a very small constant, typically 10^−8;
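Sub-steps 3.2.2)–3.2.4) above together form one standard ADAM update. A minimal sketch follows; the clipping of q to [0, 1] is our addition, to keep the relaxed coefficients valid between updates:

```python
import numpy as np

def adam_step(q, g, u, v, k, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM update of the relaxed coefficient vector q.

    g is the gradient of the loss at q; u, v are the running first- and
    second-moment estimates; k is the 1-based iteration counter used in
    the bias corrections."""
    u = beta1 * u + (1.0 - beta1) * g            # first-moment estimate u[k]
    v = beta2 * v + (1.0 - beta2) * g * g        # second-moment estimate v[k]
    u_hat = u / (1.0 - beta1 ** k)               # bias-corrected first moment
    v_hat = v / (1.0 - beta2 ** k)               # bias-corrected second moment
    q = q - alpha * u_hat / (np.sqrt(v_hat) + eps)
    return np.clip(q, 0.0, 1.0), u, v
```

For instance, minimizing (q − 0.8)² from q = 0 with this step converges to q ≈ 0.8 within a few thousand iterations.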
3.3) repeating the steps 3.2.1) -3.2.4) for the L/SegSize data segments;
3.4) repeatedly executing the steps 3.2)–3.3) on each of the received M frames of data;
3.5) let k = k + 1, and iteratively perform the above steps 3.2)–3.4) until the iteration count k reaches the defined maximum number of iterations iter_num;
3.6) if q converges to the all-zero solution during the iterations, set k = 1 and re-execute steps 3.1)–3.5).
Step 4, deciding the updated parameter:
Make a hard decision on the parameter q after the iterations are completed: if 0 ≤ q ≤ 0.5, q is decided as 0; if 0.5 < q ≤ 1, q is decided as 1, completing the estimation of the generator polynomial coefficients of the convolutional code.
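Pulling steps 3.1)–3.6) and step 4 together, a self-contained end-to-end sketch is given below. The soft-syndrome loss and the numerical gradient are our stand-ins for the patent's analytic expressions, and all helper names are ours:

```python
import numpy as np

def seg_loss(q, p):
    """Soft-syndrome cross-entropy of one data segment (rate-1/2 sketch):
    the dual relation g1(D)c0(D) + g0(D)c1(D) = 0 evaluated via soft XOR."""
    m = q.shape[1] - 1
    loss = 0.0
    for t in range(m, p.shape[1]):
        prod = 1.0
        for j in range(m + 1):
            prod *= 1.0 - 2.0 * q[1, j] * p[0, t - j]
            prod *= 1.0 - 2.0 * q[0, j] * p[1, t - j]
        loss += -np.log(max(1.0 - (1.0 - prod) / 2.0, 1e-12))
    return loss

def num_grad(q, p, h=1e-5):
    """Central-difference gradient of seg_loss (stand-in for step 3.2.1))."""
    g = np.zeros_like(q)
    for idx in np.ndindex(*q.shape):
        qp = q.copy(); qp[idx] += h
        qm = q.copy(); qm[idx] -= h
        g[idx] = (seg_loss(qp, p) - seg_loss(qm, p)) / (2.0 * h)
    return g

def identify(posteriors, m, seg_size=40, iter_num=10, alpha=0.05,
             beta1=0.9, beta2=0.999, eps=1e-8, restarts=3, seed=0):
    """Segmented multi-frame ADAM identification ending with the hard
    decision of step 4; posteriors is a list of (2, L) arrays (M frames)."""
    rng = np.random.default_rng(seed)
    L = posteriors[0].shape[1]
    for _ in range(restarts):                      # 3.6) restart on all-zero q
        q = rng.uniform(size=(2, m + 1))           # 3.1) random init in (0, 1)
        u = np.zeros_like(q); v = np.zeros_like(q); k = 0
        for _ in range(iter_num):                  # step (5): iter_num sweeps
            for p in posteriors:                   # 3.4) multi-frame mechanism
                for s0 in range(0, L - m, seg_size):   # 3.2) segmentation
                    k += 1
                    g = num_grad(q, p[:, s0:s0 + seg_size + m])  # 3.2.1)
                    u = beta1 * u + (1.0 - beta1) * g
                    v = beta2 * v + (1.0 - beta2) * g * g
                    u_hat = u / (1.0 - beta1 ** k)
                    v_hat = v / (1.0 - beta2 ** k)
                    q = np.clip(q - alpha * u_hat / (np.sqrt(v_hat) + eps),
                                0.0, 1.0)
        if np.any(q > 0.5):
            break
    return (q > 0.5).astype(int)                   # step 4: hard decision
```

At the true coefficients the soft syndrome of a cleanly received frame is near zero while a wrong tap makes it large, which is the separation the optimizer exploits; the all-zero q is also a zero-loss point, hence the restart logic.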
The effect of the present invention can be further illustrated by the following simulation results:
simulation conditions
The parameters adopted in the simulation experiments of the invention are as follows: the source sends an information sequence of length 1000; a coded sequence of length 2000 is obtained after encoding by the convolutional encoder; the modulated sequence is obtained after BPSK modulation; and received data of frame length L = 2000 is obtained after AWGN channel transmission. In the parameter update using the ADAM algorithm, the maximum number of iterations is iter_num = 30 and the segment size is SegSize = 40. When the multi-frame mechanism is not adopted, M = 1 and the update step size α = 0.1; when the multi-frame mechanism is adopted, M = 40 and α = 0.01.
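The simulation setup above can be reproduced with the sketch below. The octal-to-taps convention (MSB of the octal generator as the coefficient of D^0) and the SNR-to-σ² conversion for unit-energy BPSK symbols are our assumptions:

```python
import numpy as np

def encode_rate_half(bits, g_oct=(0o53, 0o75)):
    """Feedforward rate-1/2 convolutional encoder from octal generators;
    the default (53, 75) is the (2,1,5) code of Simulation 1. The MSB of
    each generator is taken as the coefficient of D^0."""
    m = max(int(g).bit_length() for g in g_oct) - 1
    padded = np.concatenate([np.zeros(m, dtype=int),
                             np.asarray(bits, dtype=int)])
    streams = []
    for g in g_oct:
        c = np.zeros(len(bits), dtype=int)
        for j in range(m + 1):
            if (g >> (m - j)) & 1:               # tap for D^j
                c ^= padded[m - j: m - j + len(bits)]
        streams.append(c)
    return np.stack(streams)                     # shape (2, K)

def simulate_frame(K=1000, snr_db=0.0, seed=0):
    """K info bits -> 2K coded bits -> BPSK (0 -> +1, 1 -> -1) -> AWGN.
    sigma2 = 10**(-snr_db / 10) assumes unit-energy symbols."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, K)
    c = encode_rate_half(bits)
    sigma2 = 10.0 ** (-snr_db / 10.0)
    r = 1 - 2 * c + rng.normal(0.0, np.sqrt(sigma2), c.shape)
    return r, sigma2
```

The impulse response of each output stream equals the tap pattern of its generator, which gives a quick sanity check of the octal convention.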
The method is compared with the LSM identification method in performance.
Second, simulation content and result analysis
Simulation 1: there are four experiments in simulation 1; the identified convolutional code is (2,1,5) and the generator polynomial is (53, 75). The first simulation experiment is the identification performance curve of the LSM method described above; the second is the performance curve of the invention using only the ADAM method; the third is the performance curve of the invention using the ADAM method with the segmentation mechanism; the fourth is the performance curve of the invention using the ADAM method with the segmentation and multi-frame mechanism. The results of the four experiments are shown in fig. 2, in which the abscissa represents the signal-to-noise ratio in dB and the ordinate represents the identification accuracy.
As can be seen from FIG. 2, the proposed ADAM with segmentation and multi-frame mechanism has the best identification performance, reaching over 90% identification accuracy at −1 dB, far better than the LSM method; in addition, FIG. 2 shows that the ADAM method alone and the ADAM method with the segmentation mechanism both improve the identification performance and outperform the LSM method.
simulation 2: there are four experiments for simulation 2, the identified convolutional code is (2,1,7), and the generator polynomial is (247,371). The first simulation experiment is a performance curve of the LSM method, the second simulation experiment is a performance curve of the invention which is only identified by adopting the ADAM method, the third simulation experiment is a performance curve of the invention which is identified by adopting the ADAM method and the segmentation mechanism, the fourth simulation experiment is a performance curve of the invention which is identified by adopting the ADAM method and the segmentation and multi-frame mechanism, the result curves of the four simulation experiments are shown in figure 3, the abscissa in figure 3 represents the signal-to-noise ratio, the unit dB and the ordinate represents the identification accuracy;
as can be seen from fig. 3, when the constraint length of the convolutional code increases, the identification performance of the LSM method is drastically reduced, but the ADAM and the segmentation and multiframe mechanisms proposed by the present invention still have an identification accuracy rate close to 90% at 1dB, and in addition, it can also be seen from the figure that when the constraint length of the convolutional code increases, the ADAM alone cannot complete the identification, and the segmentation and multiframe mechanisms can improve the identification accuracy rate to different degrees;
simulation 3: simulation 3 takes (2,1,5) as an example, a convolutional code with a generator polynomial of (53,75), which is expressed as g0(D)=1+D2+D4+D5,g1(D)=1+D+D2+D3+D5Let q stand for0,0=q0,4=q0,5=1,q0,1=q0,3=0,q1,0=q1,1=q1,3=q1,5=1,q1,4When q is equal to 0, mixing0,2,q1,2As variables, contour plots of the loss functions at-3 dB with M1 and M20, respectively, are shown in fig. 4;
as can be seen from fig. 4, when M is 1, the loss function converges to (0,0.6), when q is present0,2Will be judged as 0, q1,2Will be judged to be 1, and in fact q0,2And q is1,2Are all 1; when M is 20, the loss function will converge to (1,1), when q is present0,2And q is1,2All will be judged to be 1, and the improvement of the identification performance by the multi-frame mechanism can be seen.
In conclusion, the method solves the problem that the identification performance is poor in the existing method when the constraint length of the convolutional code is increased and the signal-to-noise ratio is low, and the time complexity is lower.
Claims (3)
1. A blind identification method for the parameters of 1/n-code-rate convolutional codes suitable for an AWGN channel, characterized in that cross entropy is adopted as the loss function, the ADAM (adaptive moment estimation) algorithm is adopted for parameter updating, and a segmentation and multi-frame mechanism is introduced into the updating process, so that the gradient calculation is based on each data segment rather than the whole frame of data, and the parameter updating is based on multiple frames of data rather than a single frame; the method comprises the following specific steps:
(1) calculating a posterior probability of each bit using the received data;
receiving the data r of frame length L transmitted over the AWGN channel, taken before the demodulator, and calculating the posterior probability of each bit according to the noise variance σ²;
(2) defining the generator polynomial coefficients of the convolutional code as the unknown parameter q, and constructing a cross entropy loss function from the relation between the convolutional code codewords and the generator polynomial coefficients;
(3) updating the unknown quantity parameter q by adopting an ADAM algorithm;
(3a) initializing the unknown parameter q to a random value between 0 and 1, and defining the update step size α;
(3b) differentiating the loss function with respect to q to obtain the gradient g, and further calculating the first-moment estimate u and the second-moment estimate v of the gradient;
(3c) completing the update of the unknown parameter q with the update step size α using the first-moment and second-moment estimates of the gradient;
(4) when the unknown quantity parameter q is updated in the step (3), a segmented and multi-frame mechanism is adopted;
(4a) defining the segment size SegSize, dividing the data of frame length L into L/SegSize segments, and performing the update of q in step (3) on each data segment;
(4b) performing the update of q in (4a) on each of the received M frames of data, cycling M times in total;
(5) defining the number of iterations iter_num, and cyclically executing the operation of step (4) iter_num times to complete the update of the parameter q;
(6) making a hard decision on the parameter q updated in step (5): if 0 ≤ q ≤ 0.5, q is decided as 0; if 0.5 < q ≤ 1, q is decided as 1, completing the estimation of the generator polynomial coefficients of the convolutional code.
2. The method of claim 1, wherein in (2) the generator polynomial coefficients of the convolutional code are defined as the unknown parameter q, and a cross entropy loss function is constructed from the relation between the codewords of the convolutional code and the generator polynomial coefficients.
3. The method according to claim 1, wherein (4) when updating the unknown quantity parameter q, a segmentation and multiframe mechanism is adopted, and the method comprises the following steps:
4.1) initializing the unknown parameter q to a random value between 0 and 1, and defining the update step size α and the number of iterations iter_num;
4.2) defining the segment size SegSize, and dividing the data with the frame length of L into L/SegSize data segments;
4.2.1) on each data segment, differentiating the cross entropy loss function C(q) with respect to q to obtain the gradient g[k] of the k-th iteration;
4.2.2) calculate the first order moment estimate u [ k ] and the second order moment estimate v [ k ] of the gradient:
u[k] = β1·u[k−1] + (1 − β1)·g[k]
v[k] = β2·v[k−1] + (1 − β2)·g²[k]
where β1 and β2 are the exponential decay rates of the moment estimates, typically β1 = 0.9 and β2 = 0.999;
4.2.3) correcting the bias of the first-moment estimate u[k] and the second-moment estimate v[k] of the gradient, i.e. û[k] = u[k]/(1 − β1^k) and v̂[k] = v[k]/(1 − β2^k);
4.2.4) updating the unknown parameter q using the bias-corrected first-moment and second-moment estimates of the gradient, i.e. q[k+1] = q[k] − α·û[k]/(√v̂[k] + ε);
where ε is a very small constant, typically 10^−8;
4.3) repeating the steps 4.2.1) -4.2.4) for the L/SegSize data segments;
4.4) repeatedly executing the steps 4.2)–4.3) on each of the received M frames of data;
4.5) letting k = k + 1, and iteratively performing the above steps 4.2)–4.4) until the iteration count k reaches the defined maximum number of iterations iter_num.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210287727.7A CN114598332A (en) | 2022-03-23 | 2022-03-23 | Convolutional code parameter blind identification method suitable for 1/n code rate of AWGN channel |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114598332A true CN114598332A (en) | 2022-06-07 |
Family
ID=81810945
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210287727.7A Pending CN114598332A (en) | 2022-03-23 | 2022-03-23 | Convolutional code parameter blind identification method suitable for 1/n code rate of AWGN channel |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114598332A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||