CN116155453A - Decoding method and related equipment for dynamic signal-to-noise ratio - Google Patents


Info

Publication number
CN116155453A
CN116155453A (application CN202310437871.9A)
Authority
CN
China
Prior art keywords
decoding
signal
noise ratio
neural network
product code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310437871.9A
Other languages
Chinese (zh)
Other versions
CN116155453B (en)
Inventor
陈斌 (Chen Bin)
张秦山 (Zhang Qinshan)
黄钰钧 (Huang Yujun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN202310437871.9A priority Critical patent/CN116155453B/en
Publication of CN116155453A publication Critical patent/CN116155453A/en
Application granted granted Critical
Publication of CN116155453B publication Critical patent/CN116155453B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H04L 1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0045 Arrangements at the receiver end
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 17/00 Monitoring; Testing
    • H04B 17/30 Monitoring; Testing of propagation channels
    • H04B 17/309 Measuring or estimating channel quality parameters
    • H04B 17/336 Signal-to-interference ratio [SIR] or carrier-to-interference ratio [CIR]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention discloses a decoding method and related equipment for dynamic signal-to-noise ratio. The method comprises the following steps: acquiring an original information bit string and encoding it with a neural network encoder to obtain a neural product code; passing the neural product code through an additive white Gaussian noise channel to obtain a noisy received signal; classifying the received signal by signal-to-noise ratio with a neural network classifier to obtain a classification result, the basis of the classification being the signal-to-noise-ratio level of the received signal; selecting the corresponding branch of a decoding module according to the classification result, the decoding module comprising a plurality of independent branches of different complexity; and decoding the received signal with the corresponding branch of the decoding module to obtain a decoding result. The invention adaptively selects decoding flows of different computational complexity according to the estimated signal-to-noise ratio, preserves the error-correction performance of decoding, and reduces both the overall amount of computation and the decoding time.

Description

Decoding method and related equipment for dynamic signal-to-noise ratio
Technical Field
The present invention relates to the field of electronic communications technologies, and in particular, to a decoding method, system, terminal and computer readable storage medium for dynamic signal to noise ratio.
Background
Traditional channel coding relies on coding theory to construct encoding and decoding algorithms, with humans participating in the design of the encoding and decoding process throughout. With the development of deep learning, neural networks with end-to-end learning capability have also been applied to the automated construction of channel encoding and decoding algorithms, in an attempt to use neural networks to optimize these algorithms and obtain performance beyond classical encoding and decoding schemes.
Currently, for example, ProductAE uses an autoencoder architecture to form the encoding and decoding modules of a channel code, which are trained in an end-to-end fashion, yielding a neural product code and a corresponding decoder. The neural product code obtained with this structure is competitive in error-correction performance with TurboAE, which also uses an autoencoder structure, and with classical codes. However, the decoding module of ProductAE ignores the signal-to-noise-ratio information and applies a decoding flow of identical computational complexity to all codewords. When the signal-to-noise ratio of the channel varies, codewords that passed through a high-signal-to-noise-ratio channel could achieve good error correction with less computation, so a uniform decoding flow increases the amount of computation in the decoding process, reduces the processing speed and raises the decoding delay.
Accordingly, existing neural product codes and decoders are in need of improvement and development.
Disclosure of Invention
The invention mainly aims to provide a decoding method, a system, a terminal and a computer-readable storage medium for dynamic signal-to-noise ratio, and aims to solve the problems in the prior art that a uniform decoding flow increases the amount of computation in the decoding process, slows processing and raises decoding delay.
To achieve the above object, the present invention provides a decoding method for dynamic signal-to-noise ratio, which includes the following steps:
acquiring an original information bit string, and encoding the information bit string by using a neural network encoder to obtain a neural product code;
the neural product code is subjected to an additive Gaussian white noise channel to obtain a noisy received signal;
classifying the received signals based on the signal-to-noise ratio by using a neural network classifier to obtain classification results, wherein the basis of the classification is the signal-to-noise ratio level of the received signals;
selecting corresponding branches of a decoding module according to the classification result, wherein the decoding module comprises a plurality of independent branches with different complexity;
and decoding the received signal by using a corresponding branch in the decoding module to obtain a decoding result.
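As an illustrative sketch of the five steps above (not part of the patent text; the stand-in classifier and branch functions below are hypothetical toys in place of the trained networks), the dynamic branch selection can be expressed as:

```python
import numpy as np

def decode_dynamic_snr(Y, classifier, branches):
    """Steps 3-5 of the method: classify the received signal by SNR level,
    select the decoding branch with the highest probability, and decode with it."""
    p = classifier(Y)        # probability vector over the T branches
    t = int(np.argmax(p))    # index of the selected branch
    return branches[t](Y), t

# Toy stand-ins for the trained networks (assumptions, for illustration only):
classifier = lambda Y: np.array([0.2, 0.8])   # pretends the SNR level is high
branches = [
    lambda Y: -Y,   # "complex" branch (stand-in)
    lambda Y: Y,    # "simple" branch (stand-in)
]
Y = np.ones((2, 2))
U_hat, t = decode_dynamic_snr(Y, classifier, branches)
```

Here the second (simpler) branch is chosen because the stand-in classifier assigns it the larger probability.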
Optionally, in the decoding method for dynamic signal-to-noise ratio, the obtaining an original information bit string and encoding the information bit string with a neural network encoder to obtain a neural product code specifically includes:
performing neural product code encoding on a k1 × k2 message bit matrix U, wherein each element of the message bit matrix U is 0 or 1, each row is regarded as a k2-dimensional row vector, and each column is regarded as a k1-dimensional column vector; wherein k1 represents the number of rows of the message bit matrix U and k2 represents the number of columns of the message bit matrix U;
the neural network encoder comprises two cascaded neural network encoders, namely a first neural network encoder and a second neural network encoder;
encoding the k1 rows of the message bit matrix U with the first neural network encoder, each row being encoded from the original k2-dimensional vector into an n2-dimensional real-valued vector, to obtain a k1 × n2 matrix;
encoding the n2 columns of the k1 × n2 matrix with the second neural network encoder, each column being encoded from the original k1-dimensional vector into an n1-dimensional real-valued vector, to obtain the n1 × n2 neural product codeword matrix X;
wherein n1 represents the number of rows of the neural product codeword X and n2 represents the number of columns of the neural product codeword X.
Optionally, in the decoding method for dynamic signal-to-noise ratio, the step of passing the neural product code through an additive white Gaussian noise channel to obtain a noisy received signal specifically includes:
passing the n1 × n2 neural product codeword matrix X through the additive white Gaussian noise channel to obtain a noisy received signal Y at the receiving end, wherein the received signal Y is likewise an n1 × n2 matrix.
Optionally, in the decoding method for dynamic signal-to-noise ratio, the classifying the received signal based on signal-to-noise ratio with a neural network classifier to obtain a classification result specifically includes:
dividing the signal-to-noise-ratio range [γ0, γT] evenly into T subintervals [γ0, γ1), [γ1, γ2), …, [γT−1, γT], and assigning received signals, in order of signal-to-noise-ratio level, to classes 1 to T, wherein γ1, …, γT−1 are the thresholds dividing the intervals;
inputting the received signal Y into the neural network classifier, extracting features with convolution layers, and outputting a T-dimensional probability vector p = (p1, …, pT), wherein each element pi of the probability vector represents the probability of classifying the received signal into class i, satisfying p1 + p2 + … + pT = 1.
Optionally, in the decoding method for dynamic signal-to-noise ratio, the selecting the corresponding branch of the decoding module according to the classification result specifically includes:
the decoding module comprises T candidate decoding branches, respectively denoted branches D1, …, DT;
according to the classification result p = (p1, …, pT) output by the neural network classifier, selecting, among the decoding branches D1, …, DT, the branch Dt* with the highest probability, where t* = argmax_i pi.
Optionally, in the decoding method for dynamic signal-to-noise ratio, the decoding the received signal using the corresponding branch in the decoding module to obtain a decoding result specifically includes:
inputting the received signal Y into the selected t*-th decoding branch for the decoding operation, wherein the output result of the t*-th decoding branch is the final decoding result.
Optionally, the decoding method for dynamic signal-to-noise ratio further includes:
performing decoding operations on the rows and columns of the received signal Y alternately, using different neural networks.
In addition, in order to achieve the above object, the present invention further provides a decoding system for a dynamic snr, where the decoding system for a dynamic snr includes:
the information coding module is used for acquiring an original information bit string, and coding the information bit string by using the neural network coder to obtain a neural product code;
the signal processing module is used for enabling the nerve product code to pass through an additive Gaussian white noise channel to obtain a noisy received signal;
the signal classification module is used for classifying the received signals based on the signal-to-noise ratio by using a neural network classifier to obtain classification results, wherein the basis of the classification is the signal-to-noise ratio level of the received signals;
the branch selection module is used for selecting corresponding branches of the decoding module according to the classification result, wherein the decoding module comprises a plurality of independent branches with different complexity;
and the signal decoding module is used for decoding the received signal by using the corresponding branch in the decoding module to obtain a decoding result.
In addition, to achieve the above object, the present invention also provides a terminal, wherein the terminal includes: a memory, a processor, and a decoding program for dynamic signal-to-noise ratio that is stored in the memory and executable on the processor, wherein the decoding program for dynamic signal-to-noise ratio, when executed by the processor, implements the steps of the decoding method for dynamic signal-to-noise ratio described above.
In addition, to achieve the above object, the present invention further provides a computer readable storage medium, where the computer readable storage medium stores a decoding program for a dynamic signal to noise ratio, where the decoding program for a dynamic signal to noise ratio implements the steps of the decoding method for a dynamic signal to noise ratio as described above when the decoding program for a dynamic signal to noise ratio is executed by a processor.
From the above, in the scheme of the invention, an original information bit string is obtained and encoded with a neural network encoder to obtain a neural product code; the neural product code is passed through an additive white Gaussian noise channel to obtain a noisy received signal; the received signal is classified by signal-to-noise ratio with a neural network classifier to obtain a classification result, the basis of the classification being the signal-to-noise-ratio level of the received signal; the corresponding branch of a decoding module is selected according to the classification result, the decoding module comprising a plurality of independent branches of different complexity; and the received signal is decoded with the corresponding branch of the decoding module to obtain a decoding result. The invention adaptively selects decoding flows of different computational complexity according to the estimated signal-to-noise ratio, preserves the error-correction performance of decoding, and reduces both the overall amount of computation and the decoding time.
Drawings
FIG. 1 is a flow chart of a decoding method for dynamic SNR according to a preferred embodiment of the present invention;
FIG. 2 is a diagram of a model architecture of the decoding method of the present invention for dynamic signal to noise ratio;
FIG. 3 is a first schematic diagram comparing the bit error rate and block error rate of a model of the decoding method for dynamic SNR of the present invention with those of the original ProductAE model;
FIG. 4 is a second schematic diagram comparing the bit error rate and block error rate of a model of the decoding method for dynamic SNR of the present invention with those of the original ProductAE model;
FIG. 5 is a schematic diagram of a decoding system for dynamic SNR according to a preferred embodiment of the present invention;
FIG. 6 is a schematic diagram of the operating environment of a preferred embodiment of the terminal of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The decoding method for the dynamic signal to noise ratio according to the preferred embodiment of the present invention, as shown in fig. 1 and 2, comprises the following steps:
and S10, acquiring an original information bit string, and encoding the information bit string by using a neural network encoder to obtain a neural product code.
Specifically, a k1 × k2 message bit matrix U is encoded into a neural product code; each element of the message bit matrix U is 0 or 1, each row is regarded as a k2-dimensional row vector and each column as a k1-dimensional column vector, wherein k1 represents the number of rows of the message bit matrix and k2 represents the number of columns of the message bit matrix;
the neural network encoder comprises two cascaded neural network encoders (the encoding module shown in fig. 2), namely a first neural network encoder (encoder 1 in fig. 2) and a second neural network encoder (encoder 2 in fig. 2). The k1 rows of the message bit matrix U are encoded with the first neural network encoder, each row being encoded into an n2-dimensional real-valued vector, to obtain a k1 × n2 matrix; the n2 columns of the k1 × n2 matrix are then encoded with the second neural network encoder, each column being encoded from the original k1-dimensional vector into an n1-dimensional real-valued vector, to obtain the n1 × n2 neural product codeword matrix X; wherein n1 represents the number of rows of the neural product codeword X and n2 its number of columns. It is noted that every element of the matrix of the neural product codeword X is a real value.
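The shape bookkeeping of this two-stage product encoding can be sketched as follows (an illustration, not the patent's implementation: the sizes k1, k2, n1, n2 are hypothetical, and plain random linear maps stand in for the two trained neural encoders):

```python
import numpy as np

rng = np.random.default_rng(1)
k1, k2, n1, n2 = 3, 5, 6, 10  # hypothetical sizes, not from the patent

U = rng.integers(0, 2, size=(k1, k2)).astype(float)  # message bits (0/1)

# Stand-in for the first (row) encoder: maps each k2-dim row to n2 dims.
W_row = rng.standard_normal((k2, n2))
M = U @ W_row                  # intermediate matrix, shape (k1, n2)

# Stand-in for the second (column) encoder: maps each k1-dim column to n1 dims.
W_col = rng.standard_normal((n1, k1))
X = W_col @ M                  # real-valued neural product codeword, shape (n1, n2)
```

The point is the dimension flow k1 × k2 → k1 × n2 → n1 × n2, which holds regardless of what network realizes each map.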
And step S20, the nerve product code passes through an additive Gaussian white noise channel to obtain a noisy received signal.
Specifically, the n1 × n2 neural product codeword matrix X is passed through an additive white Gaussian noise channel (the AWGN channel shown in fig. 2), and a noisy received signal Y is obtained at the receiving end; wherein the received signal Y is likewise an n1 × n2 matrix.
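A minimal AWGN channel sketch (illustrative; it assumes, as the embodiment below does, that the codeword power is normalized to 1 so that SNR(dB) = 10·log10(1/σ²)):

```python
import numpy as np

def awgn_channel(X, snr_db, rng):
    """Add white Gaussian noise; assumes the codeword has unit average power,
    so sigma^2 = 10 ** (-snr_db / 10)."""
    sigma2 = 10.0 ** (-snr_db / 10.0)
    return X + np.sqrt(sigma2) * rng.standard_normal(X.shape)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 200))
X /= np.sqrt(np.mean(X ** 2))          # normalize average power to 1
Y = awgn_channel(X, snr_db=4.0, rng=rng)
noise_var = np.mean((Y - X) ** 2)      # empirically close to 10**(-0.4)
```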
And step S30, classifying the received signals based on the signal-to-noise ratio by using a neural network classifier to obtain classification results, wherein the basis of the classification is the signal-to-noise ratio level of the received signals.
Specifically, the signal-to-noise-ratio range [γ0, γT] is divided evenly into T subintervals:
[γ0, γ1), [γ1, γ2), …, [γT−1, γT]
and received signals are assigned, in order of signal-to-noise-ratio level, to classes 1 to T, wherein γ1, …, γT−1 are the thresholds dividing the intervals and γ0 and γT are the lower and upper limits of the signal-to-noise-ratio range.
In one implementation, the received signal Y is input into the neural network classifier (the classification module shown in fig. 2); features are then extracted with convolution layers and passed through a fully connected layer, and finally a T-dimensional probability vector p = (p1, …, pT) is output, wherein each element pi of the probability vector represents the probability of classifying the received signal Y into class i, satisfying p1 + p2 + … + pT = 1.
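The even interval division and the normalized probability vector can be sketched as follows (illustrative only; the interval bounds and classifier scores are hypothetical, and a softmax stands in for the classifier's final layer):

```python
import numpy as np

def snr_class(snr_db, gamma_lo, gamma_hi, T):
    """Assign class 1..T by evenly dividing [gamma_lo, gamma_hi] into T subintervals."""
    edges = np.linspace(gamma_lo, gamma_hi, T + 1)
    idx = int(np.searchsorted(edges, snr_db, side="right")) - 1
    return int(np.clip(idx, 0, T - 1)) + 1   # 1-based class index

def softmax(z):
    """Turn raw classifier scores into the T-dim probability vector (sums to 1)."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

p = softmax(np.array([2.0, 0.5, -1.0]))     # example 3-class probability vector
```

For example, with range [0, 6] dB and T = 3, an SNR of 3 dB falls in the second subinterval [2, 4).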
And S40, selecting a corresponding branch of a decoding module according to the classification result, wherein the decoding module comprises a plurality of independent branches with different complexity.
Specifically, the decoding module (shown in fig. 2) comprises T candidate decoding branches, respectively denoted branches D1, …, DT; in this embodiment, T = 2, for example. The basis of the neural network classifier's classification is the signal-to-noise ratio of the received signal, and the true signal-to-noise-ratio level of the received signal is used as the label when training the neural network classifier. With the average power of the encoded codeword normalized to 1 and σ² the variance of the noise, which follows a Gaussian distribution, the signal-to-noise ratio is defined as SNR = 1/σ², i.e. SNR(dB) = 10·log10(1/σ²). In this embodiment, because T = 2, there is only one division threshold, denoted γth. Taking γth as the threshold distinguishing the signal-to-noise-ratio levels: a received signal with signal-to-noise ratio less than or equal to γth takes the more complex decoding branch (branch 1), with label 0; one with signal-to-noise ratio greater than γth takes the simpler decoding branch (branch 2), with label 1. The neural network classifier is trained using a cross-entropy loss function. At test time, the neural network classifier does not obtain accurate information about the signal-to-noise ratio; instead it extracts features independently and classifies the received signals according to their signal-to-noise-ratio level.
According to the classification result p = (p1, …, pT) output by the neural network classifier, the branch Dt* with the highest probability, t* = argmax_i pi, is selected among the decoding branches D1, …, DT.
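The two-branch labeling rule and the test-time argmax selection can be sketched as follows (illustrative; the threshold value 2.0 dB is an assumption, not given in the patent):

```python
import numpy as np

GAMMA_TH = 2.0  # hypothetical division threshold, in dB

def train_label(snr_db):
    """Label 0 (complex branch 1) if SNR <= threshold, else 1 (simple branch 2)."""
    return 0 if snr_db <= GAMMA_TH else 1

def select_branch(p):
    """At test time: pick the decoding branch with the highest classifier probability."""
    return int(np.argmax(p))

branch = select_branch(np.array([0.7, 0.3]))  # classifier favors the complex branch
```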
And S50, decoding the received signal by using a corresponding branch in the decoding module to obtain a decoding result.
Specifically, the received signal Y is input into the selected t*-th decoding branch for the decoding operation, and the output result of the t*-th decoding branch is the final decoding result.
Further, in the decoding of the received signal using the corresponding branch in the decoding module, the decoding operation includes:
each decoding branch is implemented by cascading several sub-neural-networks, which realizes the difference in complexity between the decoding branches, specifically as follows:
for the received signal Y, the two neural networks employed are respectively called the column decoder and the row decoder, and they form a cascaded decoder pair. Within an independent decoding branch, several decoder pairs are cascaded; the i-th decoding branch cascades mi decoder pairs into the structure "1st column decoder - 1st row decoder - 2nd column decoder - 2nd row decoder - … - mi-th column decoder - mi-th row decoder". In the present embodiment, the branches use different numbers mi of cascaded pairs. For each column decoder and row decoder, one implementation method is:
decoding each column of the matrix of the neural product code or of the intermediate process using a neural network decoder, in the following cases: first, if the decoder is the 1st column decoder, it directly takes the transpose Yᵀ of the received signal as input and outputs a matrix in which the dimension of each input column is increased to a multiple of the original by the neural network; increasing the dimension in the intermediate decoding process can improve the decoding effect. Second, if the decoder is the 2nd to (mi−1)-th column decoder, its input is the concatenation of Yᵀ and the output of the preceding decoder in the cascade (i.e. the output of the preceding row decoder, which undergoes rearrangement and transposition operations to obtain a matrix of matching size); the concatenated matrix is decoded and output as an enlarged matrix. Third, if the decoder is the mi-th (last) column decoder, its input is the same as in the second case, but its output dimension is reduced; the reason for the change in output dimension here is to facilitate the subsequent restoration to the size of the original message matrix. After each column decoder outputs its result, the result is sent on for the next decoding operation.
Each row of the neural product code Y or of the intermediate matrix is decoded using a neural network decoder, in the following cases:
first, if the decoder is the 1st to (mi−1)-th row decoder, its input is the received signal Y concatenated with the output of the preceding neural network (i.e. the output of the preceding column decoder, which undergoes rearrangement and transposition operations to obtain a matrix of matching size), and the concatenated matrix is decoded and output as an enlarged matrix; second, if the decoder is the mi-th row decoder, its input is the concatenation of Y and the rearranged, transposed output of the mi-th column decoder, and its final output is a k1 × k2 matrix Û, i.e. the recovered original message.
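The cascade of (column decoder, row decoder) pairs, with branch complexity set by the number of pairs m, can be sketched as follows (a simplified stand-in: shape-preserving tanh-linear maps replace the trained decoders, and the dimension enlargement and concatenation with Y described above are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, k1, k2 = 6, 10, 3, 5  # hypothetical sizes

# Stand-in decoder weights; real decoders are trained neural networks.
Wc = rng.standard_normal((n1, n1)) / np.sqrt(n1)     # column decoder
Wr = rng.standard_normal((n2, n2)) / np.sqrt(n2)     # row decoder
P_row = rng.standard_normal((k1, n1)) / np.sqrt(n1)  # final reduction to k1 rows
P_col = rng.standard_normal((n2, k2)) / np.sqrt(n2)  # final reduction to k2 columns

def decode_branch(Y, m):
    """One branch: m cascaded (column decoder, row decoder) pairs; larger m = more complex."""
    Z = Y
    for _ in range(m):
        Z = np.tanh(Wc @ Z)   # column decoder acts on the n1-dim columns
        Z = np.tanh(Z @ Wr)   # row decoder acts on the n2-dim rows
    return P_row @ Z @ P_col  # restore to the k1 x k2 message-matrix size

Y = rng.standard_normal((n1, n2))
U_hat_simple = decode_branch(Y, m=1)
U_hat_complex = decode_branch(Y, m=3)
```

Both branches accept the same received signal and emit an estimate of the original message-matrix size; only the depth of the cascade differs.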
Decoding performance is evaluated by comparing, element by element, the recovered message bit matrix Û with the transmitted message bit matrix U to obtain the error probability: the lower the error rate, the better the error-correction performance. By changing the number of cascaded row and column decoder pairs, decoding branches of different complexity can be obtained; each decoding branch takes the received signal Y as input and can independently perform decoding and error correction of the received signal.
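The element-wise comparison used for evaluation can be sketched as follows (illustrative; hard-deciding sigmoid(x) > 0.5 is equivalent to x > 0, matching the sigmoid used in the loss below):

```python
import numpy as np

def bit_error_rate(U, U_hat_real):
    """Hard-decide the recovered real values and compare element-wise with the sent bits."""
    bits = (U_hat_real > 0).astype(int)   # sigmoid(x) > 0.5  <=>  x > 0
    return float(np.mean(bits != U))

U = np.array([[1, 0, 1], [0, 0, 1]])
U_hat_real = np.array([[2.0, -1.0, 0.5], [-0.3, 1.2, 2.2]])
ber = bit_error_rate(U, U_hat_real)       # one wrong bit out of six
```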
The model training strategy of the decoding method for dynamic signal-to-noise ratio in this embodiment is as follows:
the encoder is jointly pre-trained with the first decoding branch; the encoder parameters are then fixed and the second decoding branch is trained with the fixed encoder; this process is repeated until the pre-training of the T-th decoding branch is finished;
the classifier is then trained: the generated message matrix samples are encoded with the pre-trained encoder, additive white Gaussian noise is added, and the label is obtained with the added noise level, i.e. the signal-to-noise ratio, as the reference, as described above;
finally, the encoder, the classifier and all decoding branches are jointly fine-tuned. Since the selection of a decoding branch is a non-differentiable operation, back-propagation cannot be applied to it directly. One implementation is to take a "soft decision":
Û = Σ_{i=1}^{T} pi · Ûi ;
wherein Ûi is the decoding result of the i-th branch for the received signal Y, and pi is the probability, output by the classifier, of selecting the i-th decoding branch. The decoded output is thus a weighted sum of the outputs of the T branches. During testing, only the selected branch participates in the computation, which is equivalent to that branch having weight 1 and the remaining branches weight 0; the role of the third term of the loss function is precisely to push the largest probability max_i pi as close to 1 as possible.
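The soft decision used during fine-tuning can be sketched as follows (illustrative; the two stand-in branch functions are hypothetical placeholders for the trained decoders):

```python
import numpy as np

def soft_decision(Y, branches, p):
    """Training-time combination: weight every branch's decoding result by the
    classifier probability p_i, so branch selection remains differentiable."""
    return sum(p_i * dec(Y) for p_i, dec in zip(p, branches))

branches = [lambda Y: Y + 1.0, lambda Y: Y - 1.0]  # stand-in decoders
Y = np.zeros((2, 2))

U_soft = soft_decision(Y, branches, p=np.array([0.75, 0.25]))
# 0.75*(+1) + 0.25*(-1) = 0.5 in every entry
```

As the classifier grows confident (p approaching a one-hot vector), the soft decision converges to the hard, test-time selection of a single branch.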
A model of the decoding method for dynamic signal-to-noise ratio is trained using the above model training strategy and the following loss function. In one implementation, the loss function is a weighted sum:
L = λ1·L1 + λ2·L2 + λ3·L3 ;
wherein the first term is the binary cross-entropy loss:
L1 = −(1/(k1·k2)) · Σ_{i=1}^{k1} Σ_{j=1}^{k2} [ u_ij · log σ(û_ij) + (1 − u_ij) · log(1 − σ(û_ij)) ]
wherein u_ij is the element in the i-th row and j-th column of the original message matrix U to be sent; û_ij is the corresponding element of the message bit matrix Û recovered at the receiver by the decoder; and σ(·) is the sigmoid function, σ(x) = 1/(1 + e^{−x}) for a scalar input x. L1 is the optimization objective for decoding error-correction performance: it measures the bit error rate between the final decoding result Û and the original message matrix, and ensures that the value at each bit is consistent with the original message.
The second term is a cross-entropy loss used to measure the classification accuracy of the classifier with respect to the receiver-side signal-to-noise ratio; in this embodiment, the number of decoding branches is T = 2. For a total of N classified samples, with p^(n) the probability vector output for the n-th sample (received signal) taken as input and y^(n) the corresponding one-hot class label, the specific formula is as follows:
L2 = −(1/N) · Σ_{n=1}^{N} Σ_{t=1}^{T} y_t^(n) · log p_t^(n) ;
The third term L3 is used to measure the classifier's confidence in its classification result on a given sample, wherein p is the probability-vector result output by the classifier on that sample. For the output probability vector, it is desirable that the element with the largest probability value be as close to 1 as possible, i.e. that the values of any two distinct elements pi and pj of the vector differ as much as possible; adding this term can optimize the performance of the classifier. In one implementation, the specific formula is as follows:
L3 = −Σ_{i<j} (pi − pj)²
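The three loss terms can be sketched numerically as follows (illustrative; the pairwise-gap form of L3 is an assumed reconstruction consistent with the description above, since the original formula image is not reproduced):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def l1_bce(U, U_hat_real):
    """First term: binary cross-entropy between sent bits and sigmoid of the output."""
    q = sigmoid(U_hat_real)
    return float(-np.mean(U * np.log(q) + (1 - U) * np.log(1 - q)))

def l2_classifier_ce(Y_onehot, P):
    """Second term: cross-entropy of the SNR classifier over a batch of samples."""
    return float(-np.mean(np.sum(Y_onehot * np.log(P), axis=1)))

def l3_confidence(p):
    """Third term (assumed form): reward large gaps between any two probabilities,
    which pushes the largest element of p toward 1."""
    return float(-sum((p[i] - p[j]) ** 2
                      for i in range(len(p)) for j in range(i + 1, len(p))))

good = l1_bce(np.array([1.0, 0.0]), np.array([5.0, -5.0]))  # confident, correct
bad = l1_bce(np.array([1.0, 0.0]), np.array([-5.0, 5.0]))   # confident, wrong
```

Note that l3_confidence is minimized (most negative) at a one-hot probability vector and is 0 for a uniform one, matching the stated goal of confident classification.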
Fig. 3 and fig. 4 are the first and second schematic diagrams comparing the bit error rate and block error rate of a model of the decoding method for dynamic SNR of the present invention with those of the original ProductAE model. In the figures, SNR denotes the signal-to-noise ratio, Error Rate the error probability, BER the bit error rate, and BLER the block error rate.
Further, as shown in fig. 5, based on the above decoding method facing to the dynamic signal-to-noise ratio, the present invention further correspondingly provides a decoding system facing to the dynamic signal-to-noise ratio, where the decoding system facing to the dynamic signal-to-noise ratio includes:
an information encoding module 61, configured to obtain an original information bit string, and encode the information bit string by using a neural network encoder to obtain a neural product code;
a signal processing module 62, configured to pass the neural product code through an additive white gaussian noise channel to obtain a noisy received signal;
a signal classification module 63, configured to classify the received signal based on a signal-to-noise ratio by using a neural network classifier, to obtain a classification result, where a basis of the classification is a signal-to-noise ratio level of the received signal;
a branch selection module 64, configured to select a corresponding branch of a decoding module according to the classification result, where the decoding module includes a plurality of independent branches with different complexity;
and the signal decoding module 65 is configured to decode the received signal by using a corresponding branch in the decoding module, so as to obtain a decoding result.
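How the five modules hand data to one another can be sketched end to end as follows; every component here is a toy stand-in (hard-decision decoding, a crude SNR proxy for classification), not the trained networks of the embodiment:

```python
import numpy as np

rng = np.random.default_rng(3)

def encode(bits):
    """Information encoding module (stand-in): map {0,1} -> {-1,+1}."""
    return 2.0 * bits - 1.0

def channel(x, sigma):
    """Signal processing module: additive white Gaussian noise."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def classify(y, K=3):
    """Signal classification module (stand-in): crude SNR proxy based on
    how far samples deviate from the +/-1 constellation."""
    dev = np.mean((np.abs(y) - 1.0) ** 2)
    p = np.exp(-np.arange(K) * dev)   # toy probability profile
    return p / p.sum()

def decode(y, k):
    """Signal decoding module: branch k (stand-in, hard decision)."""
    return (y > 0).astype(int)

bits = rng.integers(0, 2, size=(4, 4))  # original information bits
y = channel(encode(bits), sigma=0.1)    # noisy received signal
p = classify(y)                         # classification result
k_star = int(np.argmax(p)) + 1          # branch selection module
decoded = decode(y, k_star)             # decoding result
```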
Further, as shown in fig. 6, based on the above decoding method and system for a dynamic signal-to-noise ratio, the present invention further provides a terminal, which includes a processor 10, a memory 20, and a display 30. Fig. 6 shows only some of the components of the terminal, but it should be understood that not all of the illustrated components need be implemented, and that more or fewer components may alternatively be implemented.
The memory 20 may in some embodiments be an internal storage unit of the terminal, such as a hard disk or a memory of the terminal. The memory 20 may in other embodiments also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) provided on the terminal. Further, the memory 20 may include both an internal storage unit and an external storage device of the terminal. The memory 20 is used for storing the application software installed on the terminal and various kinds of data, such as the program code installed on the terminal. The memory 20 may also be used to temporarily store data that has been output or is to be output. In one embodiment, the memory 20 stores a decoding program 40 oriented to the dynamic signal-to-noise ratio, and the decoding program 40 can be executed by the processor 10, so as to implement the decoding method oriented to the dynamic signal-to-noise ratio in the present application.
The processor 10 may in some embodiments be a central processing unit (Central Processing Unit, CPU), a microprocessor, or another data processing chip for executing the program code stored in the memory 20 or processing data, for example performing the decoding method oriented to the dynamic signal-to-noise ratio.
The display 30 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like in some embodiments. The display 30 is used for displaying information at the terminal and for displaying a visual user interface. The components 10-30 of the terminal communicate with each other via a system bus.
In one embodiment, when the processor 10 executes the decoding program 40 for the dynamic signal-to-noise ratio in the memory 20, the following steps are implemented:
acquiring an original information bit string, and encoding the information bit string by using a neural network encoder to obtain a neural product code;
passing the neural product code through an additive white Gaussian noise channel to obtain a noisy received signal;
classifying the received signals based on the signal-to-noise ratio by using a neural network classifier to obtain classification results, wherein the basis of the classification is the signal-to-noise ratio level of the received signals;
selecting corresponding branches of a decoding module according to the classification result, wherein the decoding module comprises a plurality of independent branches with different complexity;
and decoding the received signal by using a corresponding branch in the decoding module to obtain a decoding result.
The obtaining an original information bit string and encoding the information bit string by using a neural network encoder to obtain a neural product code specifically includes:

performing neural product code encoding on a k1 x k2 message bit matrix U, where each element of the message bit matrix U is 0 or 1, each row is regarded as a k2-dimensional row vector, and each column is regarded as a k1-dimensional column vector;

where k1 represents the number of rows of the message bit matrix U, and k2 represents the number of columns of the message bit matrix U;

the neural network encoder includes two cascaded neural network encoders, namely a first neural network encoder and a second neural network encoder;

the k1 rows of the message bit matrix U are encoded using the first neural network encoder, each row being encoded from the original k2-dimensional vector into an n2-dimensional real-valued vector, yielding a k1 x n2 matrix;

the n2 columns of the k1 x n2 matrix are encoded using the second neural network encoder, each column being encoded from the original k1-dimensional vector into an n1-dimensional real-valued vector, yielding an n1 x n2 matrix, which is the neural product code codeword X;

where n1 represents the number of rows of the neural product code codeword X, and n2 represents the number of columns of the neural product code codeword X.
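A minimal sketch of the two-stage, row-then-column product encoding described above, with untrained random linear maps standing in for the first and second neural network encoders (the dimensions k1 = k2 = 4, n1 = n2 = 8 and the linear form are illustrative assumptions, not the trained networks):

```python
import numpy as np

rng = np.random.default_rng(0)

k1, k2 = 4, 4   # message bit matrix U: k1 rows, k2 columns
n1, n2 = 8, 8   # codeword matrix X:   n1 rows, n2 columns

# Untrained linear maps standing in for the two trained encoders.
W_row = rng.standard_normal((k2, n2))  # first encoder:  k2-dim row -> n2-dim
W_col = rng.standard_normal((k1, n1))  # second encoder: k1-dim col -> n1-dim

def encode_product(U):
    """Encode a k1 x k2 bit matrix into an n1 x n2 real-valued codeword:
    first every row, then every column of the intermediate matrix."""
    rows_encoded = U @ W_row       # (k1, n2): each row now n2-dimensional
    return W_col.T @ rows_encoded  # (n1, n2): each column now n1-dimensional

U = rng.integers(0, 2, size=(k1, k2))  # 0/1 message bits
X = encode_product(U)
```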
The passing the neural product code through an additive white Gaussian noise channel to obtain a noisy received signal specifically includes:

passing the n1 x n2 neural product code codeword matrix X through the additive white Gaussian noise channel, and obtaining a noisy received signal Y at the receiving end, where the received signal Y is likewise an n1 x n2 matrix.
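The channel step can be sketched as follows; the unit-power normalization and the SNR-in-dB convention are assumptions for illustration:

```python
import numpy as np

def awgn_channel(x, snr_db, rng=None):
    """Return y = x + n with Gaussian noise whose power is set from the
    SNR in dB relative to the average power of x."""
    if rng is None:
        rng = np.random.default_rng()
    signal_power = np.mean(x ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=x.shape)
    return x + noise

rng = np.random.default_rng(1)
x = rng.choice([-1.0, 1.0], size=(8, 8))  # codeword stand-in (unit power)
y = awgn_channel(x, snr_db=6.0, rng=rng)  # noisy received signal
```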
The classifying the received signal based on the signal-to-noise ratio by using a neural network classifier to obtain a classification result specifically includes:

evenly dividing the signal-to-noise ratio range [t0, tK] into K subintervals [t0, t1), [t1, t2), ..., [t(K-1), tK], and assigning the received signals, in order of signal-to-noise ratio level, to the 1st to K-th classes respectively, where t0, t1, ..., tK are the thresholds dividing the intervals;

inputting the received signal Y into the neural network classifier, extracting features using convolutional layers, and outputting a K-dimensional probability vector p = (p1, p2, ..., pK), where each element pi of the probability vector represents the probability that the received signal Y is assigned to the i-th class, satisfying p1 + p2 + ... + pK = 1.
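The even partition of the SNR range into K subintervals and the sum-to-one probability output can be sketched as follows (the concrete interval bounds and the softmax stand-in for the convolutional classifier head are illustrative assumptions):

```python
import numpy as np

def snr_class(snr, lo, hi, K):
    """Map an SNR value to a class 1..K under an even partition of [lo, hi]."""
    edges = np.linspace(lo, hi, K + 1)      # thresholds t0 < t1 < ... < tK
    idx = np.searchsorted(edges, snr, side="right") - 1
    return int(np.clip(idx, 0, K - 1)) + 1  # 1-based class index

def softmax(z):
    """Turn raw scores into a probability vector (non-negative, sums to 1)."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

K = 4
labels = [snr_class(s, lo=0.0, hi=8.0, K=K) for s in (0.5, 3.9, 7.9)]
p = softmax(np.array([2.0, 0.5, 0.1, -1.0]))  # toy classifier scores
```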
The selecting a corresponding branch of the decoding module according to the classification result specifically includes:

the decoding module includes K candidate decoding branches, respectively denoted as branches D1, D2, ..., DK;

according to the classification result p output by the neural network classifier, the branch Dk* with the highest probability is selected from the decoding branches D1, ..., DK, where k* = argmax_i pi.
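Selecting the decoding branch with the highest classifier probability reduces to an argmax over the probability vector; a sketch with placeholder branch functions (the branch contents are illustrative, not the trained decoders):

```python
import numpy as np

# Placeholder decoding branches of differing complexity (illustrative).
branches = {
    1: lambda y: ("branch-1", y),  # e.g. lightest decoder
    2: lambda y: ("branch-2", y),
    3: lambda y: ("branch-3", y),  # e.g. heaviest decoder
}

def select_branch(p):
    """Return the 1-based index of the branch with the highest probability."""
    return int(np.argmax(p)) + 1

p = np.array([0.1, 0.7, 0.2])  # classifier output for one received signal
k_star = select_branch(p)
tag, _ = branches[k_star](np.zeros((2, 2)))
```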
The decoding the received signal by using the corresponding branch in the decoding module to obtain a decoding result specifically includes:

inputting the received signal Y into the k*-th decoding branch to perform the decoding operation, the output result of the k*-th decoding branch being the final decoding result.

Wherein, for the rows and columns of the received signal Y, decoding operations are performed alternately using different neural networks.
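The alternating row/column decoding of the received matrix can be sketched as follows, with untrained linear maps standing in for the per-pass row and column decoder networks (the number of passes and the linear update form are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 8, 8

# Untrained linear maps standing in for the row and column decoder networks.
W_row_dec = 0.1 * rng.standard_normal((n2, n2))
W_col_dec = 0.1 * rng.standard_normal((n1, n1))

def decode_alternating(Y, passes=3):
    """Alternately refine the estimate along rows, then along columns."""
    Z = Y.copy()
    for _ in range(passes):
        Z = Z + Z @ W_row_dec   # row-wise update (acts on each row)
        Z = Z + W_col_dec @ Z   # column-wise update (acts on each column)
    return Z

Y = rng.standard_normal((n1, n2))  # noisy received matrix
Z = decode_alternating(Y)
```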
The invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a decoding program facing to a dynamic signal to noise ratio, and the decoding program facing to the dynamic signal to noise ratio realizes the steps of the decoding method facing to the dynamic signal to noise ratio when being executed by a processor.
In summary, the present invention provides a decoding method oriented to a dynamic signal-to-noise ratio and related equipment, where the method includes: acquiring an original information bit string, and encoding the information bit string by using a neural network encoder to obtain a neural product code; passing the neural product code through an additive white Gaussian noise channel to obtain a noisy received signal; classifying the received signal based on the signal-to-noise ratio by using a neural network classifier to obtain a classification result, where the basis of the classification is the signal-to-noise ratio level of the received signal; selecting a corresponding branch of a decoding module according to the classification result, where the decoding module includes a plurality of independent branches of different complexity; and decoding the received signal by using the corresponding branch in the decoding module to obtain a decoding result. The invention adaptively selects decoding flows of different computational complexity according to the estimated signal-to-noise ratio while maintaining the error-correction performance of decoding, thereby reducing the overall computation and shortening the decoding time.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or terminal comprising the element.
Of course, those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by a computer program for instructing relevant hardware (e.g., processor, controller, etc.), the program may be stored on a computer readable storage medium, and the program may include the above described methods when executed. The computer readable storage medium may be a memory, a magnetic disk, an optical disk, etc.
It is to be understood that the invention is not limited in its application to the examples described above, but is capable of modification and variation in light of the above teachings by those skilled in the art, and that all such modifications and variations are intended to be included within the scope of the appended claims.

Claims (10)

1. A decoding method oriented to a dynamic signal-to-noise ratio, characterized by comprising the following steps:
acquiring an original information bit string, and encoding the information bit string by using a neural network encoder to obtain a neural product code;
passing the neural product code through an additive white Gaussian noise channel to obtain a noisy received signal;
classifying the received signals based on the signal-to-noise ratio by using a neural network classifier to obtain classification results, wherein the basis of the classification is the signal-to-noise ratio level of the received signals;
selecting corresponding branches of a decoding module according to the classification result, wherein the decoding module comprises a plurality of independent branches with different complexity;
and decoding the received signal by using a corresponding branch in the decoding module to obtain a decoding result.
2. The decoding method oriented to a dynamic signal-to-noise ratio according to claim 1, wherein the obtaining an original information bit string and encoding the information bit string by using a neural network encoder to obtain a neural product code specifically comprises:

performing neural product code encoding on a k1 x k2 message bit matrix U, wherein each element of the message bit matrix U is 0 or 1, each row is regarded as a k2-dimensional row vector, and each column is regarded as a k1-dimensional column vector;

wherein k1 represents the number of rows of the message bit matrix U, and k2 represents the number of columns of the message bit matrix U;

the neural network encoder comprises two cascaded neural network encoders, namely a first neural network encoder and a second neural network encoder;

encoding the k1 rows of the message bit matrix U using the first neural network encoder, each row being encoded from the original k2-dimensional vector into an n2-dimensional real-valued vector, to obtain a k1 x n2 matrix;

encoding the n2 columns of the k1 x n2 matrix using the second neural network encoder, each column being encoded from the original k1-dimensional vector into an n1-dimensional real-valued vector, to obtain an n1 x n2 matrix, which is the neural product code codeword X;

wherein n1 represents the number of rows of the neural product code codeword X, and n2 represents the number of columns of the neural product code codeword X.
3. The decoding method oriented to a dynamic signal-to-noise ratio according to claim 2, wherein the passing the neural product code through an additive white Gaussian noise channel to obtain a noisy received signal specifically comprises:

passing the n1 x n2 neural product code codeword matrix X through the additive white Gaussian noise channel, and obtaining a noisy received signal Y at the receiving end, wherein the received signal Y is likewise an n1 x n2 matrix.
4. The decoding method oriented to a dynamic signal-to-noise ratio according to claim 3, wherein the classifying the received signal based on the signal-to-noise ratio by using a neural network classifier to obtain a classification result specifically comprises:

evenly dividing the signal-to-noise ratio range [t0, tK] into K subintervals [t0, t1), [t1, t2), ..., [t(K-1), tK], and assigning the received signals, in order of signal-to-noise ratio level, to the 1st to K-th classes respectively, wherein t0, t1, ..., tK are the thresholds dividing the intervals;

inputting the received signal Y into the neural network classifier, extracting features using convolutional layers, and outputting a K-dimensional probability vector p = (p1, p2, ..., pK), wherein each element pi of the probability vector represents the probability that the received signal Y is assigned to the i-th class, satisfying p1 + p2 + ... + pK = 1.
5. The decoding method oriented to a dynamic signal-to-noise ratio according to claim 4, wherein the selecting a corresponding branch of a decoding module according to the classification result specifically comprises:

the decoding module comprises K candidate decoding branches, respectively denoted as branches D1, D2, ..., DK;

according to the classification result p output by the neural network classifier, selecting from the decoding branches D1, ..., DK the branch Dk* with the highest probability, where k* = argmax_i pi.
6. The decoding method oriented to a dynamic signal-to-noise ratio according to claim 4, wherein the decoding the received signal by using the corresponding branch in the decoding module to obtain a decoding result specifically comprises:

inputting the received signal Y into the k*-th decoding branch to perform the decoding operation, the output result of the k*-th decoding branch being the final decoding result.
7. The decoding method oriented to a dynamic signal-to-noise ratio according to claim 3, further comprising:

performing decoding operations on the rows and columns of the received signal Y alternately using different neural networks.
8. A dynamic signal-to-noise ratio oriented decoding system, the dynamic signal-to-noise ratio oriented decoding system comprising:
the information encoding module is used for acquiring an original information bit string and encoding the information bit string by using the neural network encoder to obtain a neural product code;
the signal processing module is used for passing the neural product code through an additive white Gaussian noise channel to obtain a noisy received signal;
the signal classification module is used for classifying the received signals based on the signal-to-noise ratio by using a neural network classifier to obtain classification results, wherein the basis of the classification is the signal-to-noise ratio level of the received signals;
the branch selection module is used for selecting corresponding branches of the decoding module according to the classification result, wherein the decoding module comprises a plurality of independent branches with different complexity;
and the signal decoding module is used for decoding the received signal by using the corresponding branch in the decoding module to obtain a decoding result.
9. A terminal, the terminal comprising: memory, a processor and a dynamic signal to noise ratio oriented decoding program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the dynamic signal to noise ratio oriented decoding method according to any of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a dynamic signal to noise ratio oriented decoding program, which when executed by a processor implements the steps of the dynamic signal to noise ratio oriented decoding method according to any of claims 1-7.
CN202310437871.9A 2023-04-23 2023-04-23 Decoding method and related equipment for dynamic signal-to-noise ratio Active CN116155453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310437871.9A CN116155453B (en) 2023-04-23 2023-04-23 Decoding method and related equipment for dynamic signal-to-noise ratio


Publications (2)

Publication Number Publication Date
CN116155453A true CN116155453A (en) 2023-05-23
CN116155453B CN116155453B (en) 2023-07-07

Family

ID=86339291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310437871.9A Active CN116155453B (en) 2023-04-23 2023-04-23 Decoding method and related equipment for dynamic signal-to-noise ratio

Country Status (1)

Country Link
CN (1) CN116155453B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117118462A (en) * 2023-09-07 2023-11-24 重庆大学 Neural network BP decoding method based on coding distributed computation

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105207682A (en) * 2015-09-22 2015-12-30 西安电子科技大学 Polarization code belief propagation decoding method based on dynamic check matrix
CN110719112A (en) * 2019-09-12 2020-01-21 天津大学 Deep learning-based parameter adaptive RS code decoding method
CN111555760A (en) * 2020-05-21 2020-08-18 天津大学 Multi-system symbol-level product code method for correcting random errors and long burst erasures
US20210012767A1 (en) * 2020-09-25 2021-01-14 Intel Corporation Real-time dynamic noise reduction using convolutional networks
WO2021041551A2 (en) * 2019-08-26 2021-03-04 Board Of Regents, The University Of Texas System Autoencoder-based error correction coding for low-resolution communication
CN113473149A (en) * 2021-05-14 2021-10-01 北京邮电大学 Semantic channel joint coding method and device for wireless image transmission
CN114268328A (en) * 2021-12-02 2022-04-01 哈尔滨工业大学 Convolutional code decoding method based on bidirectional LSTM and convolutional code encoding and decoding method
WO2022092353A1 (en) * 2020-10-29 2022-05-05 엘지전자 주식회사 Method and apparatus for performing channel encoding and decoding in wireless communication system
US20220284282A1 (en) * 2021-03-05 2022-09-08 Qualcomm Incorporated Encoding techniques for neural network architectures
WO2023031632A1 (en) * 2021-09-06 2023-03-09 Imperial College Innovations Ltd Encoder, decoder and communication system and method for conveying sequences of correlated data items from an information source across a communication channel using joint source and channel coding, and method of training an encoder neural network and decoder neural network for use in a communication system


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JAMALI M V ET AL.: "ProductAE: Towards Training Larger Channel Codes based on Neural Product Codes" *
JING C ET AL.: "A Low-Complexity MIMO Detector Based on Fast Dual-Lattice Reduction Algorithm", 《2016 IEEE 84TH VEHICULAR TECHNOLOGY CONFERENCE (VTC-FALL)》 *
YANG YANING: "Research on Frequency-Hopping Information Backhaul Technology under High-Dynamic, Low-SNR Conditions", 《China Master's Theses Full-text Database, Engineering Science and Technology II》 *
HU QILEI ET AL.: "SNR-Adaptive Turbo Autoencoder Channel Encoding and Decoding Technology", 《Radio Communications Technology》 *


Also Published As

Publication number Publication date
CN116155453B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN110474716B (en) Method for establishing SCMA codec model based on noise reduction self-encoder
Li et al. Designing near-optimal steganographic codes in practice based on polar codes
Be’Ery et al. Active deep decoding of linear codes
CN111128137A (en) Acoustic model training method and device, computer equipment and storage medium
CN106059712B (en) High-error-code arbitrary-code-rate convolutional code coding parameter blind identification method
CN110278001B (en) Polarization code partition decoding method based on deep learning
CN116155453B (en) Decoding method and related equipment for dynamic signal-to-noise ratio
TWI744827B (en) Methods and apparatuses for compressing parameters of neural networks
Ye et al. Circular convolutional auto-encoder for channel coding
CN109728824B (en) LDPC code iterative decoding method based on deep learning
CN110233628B (en) Self-adaptive belief propagation list decoding method for polarization code
Cao et al. Deep learning-based decoding of constrained sequence codes
CN111460800B (en) Event generation method, device, terminal equipment and storage medium
CN107451106A (en) Text method and device for correcting, electronic equipment
CN111898482A (en) Face prediction method based on progressive generation confrontation network
CN109983705B (en) Apparatus and method for generating polarization code
CN110138390A (en) A kind of polarization code SSCL algorithm decoder based on deep learning
Gao et al. ResNet-like belief-propagation decoding for polar codes
CN108171325B (en) Time sequence integration network, coding device and decoding device for multi-scale face recovery
CN111130567B (en) Polarization code belief propagation list decoding method added with noise disturbance and bit inversion
Zhang et al. A RNN decoder for channel decoding under correlated noise
CN110798224A (en) Compression coding, error detection and decoding method
CN1741614A (en) Method and system for decoding video, voice, and speech data using redundancy
CN112953565B (en) Return-to-zero convolutional code decoding method and system based on convolutional neural network
Cavarec et al. A learning-based approach to address complexity-reliability tradeoff in OS decoders

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant