CN109495211B - Channel coding and decoding method - Google Patents

Channel coding and decoding method

Info

Publication number
CN109495211B
Authority
CN
China
Prior art keywords
symbol
probability
sequence
model
decoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811154122.0A
Other languages
Chinese (zh)
Other versions
CN109495211A (en)
Inventor
王杰林 (Wang Jielin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Ruilide Information Technology Co ltd
Original Assignee
Hunan Ruilide Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Ruilide Information Technology Co ltd
Priority to CN201811154122.0A
Publication of CN109495211A
Application granted
Publication of CN109495211B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H04L 1/004: Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0056: Systems characterized by the type of code used
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H04L 1/004: Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0076: Distributed coding, e.g. network coding, involving channel coding

Abstract

The invention provides a channel coding and decoding method that adopts full-probability coding based on a probability model. Error-checking and error-correcting relations between symbols are added during encoding, unifying error detection and error correction; this effectively improves error-correction and error-detection capability and brings the code rate closer to the theoretical entropy value. A self-test experiment simulating an AWGN channel started decoding at the 27th byte: errors of fewer than 3 bits within 27 bytes can be corrected, and errors of 3 bits or more can be detected. In addition, the method performs linear error correction with no time delay; the correction rate and the tolerable error rate are high, and when the bit error rate is below 0.00001, decoding succeeds 100% in one pass. The code rate is low: a traditional method needs a 1/2 code rate, while this method needs only 1/1.5849625. For uncorrectable data, only 100 bits need to be retransmitted rather than the whole data packet, so coding and decoding are faster.

Description

Channel coding and decoding method
Technical Field
The invention relates to the technical field of data transmission and communication, in particular to a channel coding and decoding method.
Background
Referring to fig. 3, according to shannon's theory of information, a classical digital information transmission system generally comprises a source, a source encoder, a channel encoder, a modulator, a channel, a demodulator, a channel decoder, a source decoder, and a sink.
Due to the existence of interference and the randomness of information code elements, a receiving end cannot predict and identify whether an error exists in the information code elements, and meanwhile, in the prior art, a coding method based on a convolutional code or an algebraic method is generally adopted, but the methods have the problem of poor error correction or error detection capability.
Disclosure of Invention
In view of the above situation, the present invention provides a method for channel coding to solve the problem of poor error correction or error detection capability in the prior art.
A channel encoding method, comprising:
Step 1: preprocess a random binary sequence so that each symbol 0 in it becomes the symbols 1, 0, 1 and each symbol 1 becomes the symbols 0, 1;
Step 2: initialize parameters: set the probability of symbol 0 to p(0) = 1/4, the probability of symbol 1 to p(1) = 3/4, and the first-order static coefficient to r = 4/3; according to the probability-model expression, obtain H0 = p0 = 1 and L0 = 0, where L0, H0 and p0 are respectively the initial lower limit, upper limit and interval length of the probability interval; obtain the length Len of the random binary sequence; set the loop variable i = 1, where i indexes the symbol currently being processed, and encoding is complete when i = Len; set the lower bound V of the probability interval after all symbols are encoded to V = 0; x_i is the ith symbol waiting to be encoded; p1 = p2 = p3 = 0;
Step 3: if the ith symbol is symbol 0, go to step 4; if the ith symbol is symbol 1, go to step 5;
Step 4: encode the 3 symbols 1, 0, 1 in turn, as follows:
for encoding symbol 1: p1 = r·p(1)·p, where p is the current probability-interval length, and V = V + p1;
for encoding symbol 0: p2 = r·p(0)·p1, V = V + 0;
for encoding symbol 1: p3 = r·p(1)·p2, V = V + p3;
go to step 6;
Step 5: encode the 2 symbols 0, 1 in turn, as follows:
for encoding symbol 0: p1 = r·p(0)·p, where p is the current probability-interval length, and V = V + 0;
for encoding symbol 1: p2 = r·p(1)·p1, V = V + p2;
go to step 6;
Step 6: add 1 to the loop variable i, i.e. i = i + 1; if i ≤ Len, return to step 3 to encode the next symbol; if i > Len, end encoding and output V and Len.
According to the method provided by the invention, full-probability coding based on a probability model is adopted, and error-checking and error-correcting relations between symbols are added during encoding, unifying error detection and error correction; this effectively improves error-correction and error-detection capability and brings the code rate closer to the theoretical entropy value. A self-test experiment simulating an AWGN channel started decoding at the 27th byte: errors of fewer than 3 bits within 27 bytes can be corrected, and errors of 3 bits or more can be detected. In addition, the method performs linear error correction with no time delay; the correction rate and the tolerable error rate are high, and when the bit error rate is below 0.00001, decoding succeeds 100% in one pass. The code rate is low: a traditional method needs a 1/2 code rate, while this method needs only 1/1.5849625. For uncorrectable data, only 100 bits need to be retransmitted rather than the whole data packet, so coding and decoding are faster.
In addition, the channel coding method according to the present invention may further include the following additional features:
Further, in step 1, preprocessing the random binary sequence specifically comprises:
adding 1 symbol 1 after each symbol 0 to obtain a sequence A;
adding 1 symbol 0 after each symbol 1 of sequence A to obtain a sequence B;
negating sequence B bitwise to obtain a sequence C, i.e. C = ¬B.
Further, in step 1, the preprocessing may equivalently be performed as follows:
if symbol 0 is taken from the original string, the three symbols 1, 0, 1 are actually encoded in sequence; if symbol 1 is taken from the original string, the two symbols 0, 1 are actually encoded in sequence.
Further, the probability model satisfies the following conditions:
construct a contractible or expansible probability model: define the time t_n, where n is a natural number greater than or equal to 1, and the probability contraction or expansion coefficient of a symbol at t_n as ω_n; define the stochastic process in which the probabilities of all symbols vary at each time t_n according to the same coefficient ω_n as the generalized process, which has three basic probability models: if ω_n ≡ 1 at every time t_n, the model is defined as the standard model; if 0 < ω_n ≤ 1 at every time and ω_n < 1 at some time, it is defined as the contraction model; if ω_n ≥ 1 at every time and ω_n > 1 at some time, it is defined as the expansion model;
set the fixed probabilities of symbol 0 and symbol 1 in the random binary sequence to p(0) and p(1); if p(0) + p(1) = 1 and the first-order static coefficient r of the expansion model satisfies r = 1/p(1), then, when the number k of consecutive 1s in the random sequence stays within the bound required by the model, the distribution function of the expansion model preserves the mathematical properties of the random sequence and can restore it completely;
the probability function and distribution function of the random binary sequence are:
p_n(x_1, x_2, …, x_n) = r·p(x_n)·p_{n-1}(x_1, x_2, …, x_{n-1}),
L_n(x_1, x_2, …, x_n) = L_{n-1}(x_1, …, x_{n-1}) + x_n·r·p(1)·p_{n-1}(x_1, …, x_{n-1}),
H_n(x_1, x_2, …, x_n) = L_n(x_1, x_2, …, x_n) + p_n(x_1, x_2, …, x_n),
i.e. H_n = L_n + p_n, and the probability-interval dependency of lossless coding and decoding is:
[L_n(x_1, …, x_n), H_n(x_1, …, x_n)) ⊆ [L_{n-1}(x_1, …, x_{n-1}), H_{n-1}(x_1, …, x_{n-1})) ⊆ … ⊆ [L_1(x_1), H_1(x_1)).
Further, the probability model satisfies the conditional expressions:
p′(0) = r·p(0) = 1/3,
p′(1) = r·p(1) = 1.
the present invention provides another aspect of a channel decoding method to solve the problem of poor error correction or error detection capability in the prior art.
A method of channel decoding, comprising:
Step 1: initialize parameters: set the probability of symbol 0 to p(0) = 1/4, the probability of symbol 1 to p(1) = 3/4, and the first-order static coefficient to r = 4/3; according to the probability-model expression, obtain H0 = p0 = 1 and L0 = 0, where L0, H0 and p0 are respectively the initial lower limit, upper limit and interval length of the probability interval; obtain the length Len of the random binary sequence; set the loop variable i = 1, where i indexes the symbol currently being processed, and decoding is complete when i = Len; set a buffer Buff[n], where n is the buffer length, and a buffer counter lp = 0; set the temporary variable H = 0 and obtain the V value; x is a decoded symbol; j indexes the jth binary bit of the V value;
Step 2: obtain the possible probability intervals of the ith symbol x_i according to the probability-model expression:
the symbol 0 interval is [L_{i-1}, L_{i-1} + r·p(0)·p_{i-1});
the symbol 1 interval is [L_{i-1} + r·p(1)·p_{i-1}, L_{i-1} + 2·r·p(1)·p_{i-1});
go to step 3;
Step 3: judge the interval to which V belongs according to the probability-model expression:
if V falls within the symbol 0 interval, then x_i = 0, and x_i = 0 is stored in the buffer;
if V falls within the symbol 1 interval, then x_i = 1, and x_i = 1 is stored in the buffer;
go to step 4;
Step 4: detect errors: judge whether the original feature strings are satisfied in the buffer; if yes, decoding is judged correct, go to step 6; if not, decoding is judged incorrect, go to step 5 for error correction.
Step 5: flip the current V bit by bit from left to right; each flip of 1 bit yields a new V. Judge whether j equals the binary length L of the V value: if j ≤ L, set j = j + 1 and return to step 2; if j > L, output an identifier and end decoding.
Step 6: judge from left to right whether the following feature strings appear in the buffer: if the substring is 101, output symbol 0; if the substring is 01, output symbol 1; then add 1 to the loop variable i, i.e. i = i + 1, and go to step 7.
Step 7: if i ≤ Len, return to step 2 to continue decoding; if i > Len, end decoding.
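Step 6's feature-string reading is a prefix-free parse: a 1 in the buffer can only start the string 101, and a 0 can only start 01. A minimal sketch (function name ours, not from the patent):

```python
def parse_buffer(buf):
    # Read feature strings left to right: "101" -> symbol 0, "01" -> symbol 1.
    out, i = [], 0
    while i < len(buf):
        if buf.startswith("101", i):
            out.append("0"); i += 3
        elif buf.startswith("01", i):
            out.append("1"); i += 2
        else:
            break  # incomplete tail: keep it in the buffer for the next round
    return "".join(out), buf[i:]
```

For example, the buffer "1010101101" parses to the original symbols "0110" with nothing left over.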
In addition, the channel decoding method according to the present invention may further include the following additional features:
Further, in step 4, judging whether the buffer satisfies the original feature strings involves two criteria:
criterion 1: the number of consecutive symbols 1 in the buffer's binary string cannot exceed 2;
criterion 2: the number of consecutive symbols 0 in the buffer's binary string cannot exceed 1;
if both criterion 1 and criterion 2 are met, decoding is judged correct and step 6 is entered; if at least one criterion is not met, decoding is judged incorrect and step 5 error correction is performed.
Further, the probability model satisfies the following conditions:
construct a contractible or expansible probability model: define the time t_n, where n is a natural number greater than or equal to 1, and the probability contraction or expansion coefficient of a symbol at t_n as ω_n; define the stochastic process in which the probabilities of all symbols vary at each time t_n according to the same coefficient ω_n as the generalized process, which has three basic probability models: if ω_n ≡ 1 at every time t_n, the model is defined as the standard model; if 0 < ω_n ≤ 1 at every time and ω_n < 1 at some time, it is defined as the contraction model; if ω_n ≥ 1 at every time and ω_n > 1 at some time, it is defined as the expansion model;
set the fixed probabilities of symbol 0 and symbol 1 in the random binary sequence to p(0) and p(1); if p(0) + p(1) = 1 and the first-order static coefficient r of the expansion model satisfies r = 1/p(1), then, when the number k of consecutive 1s in the random sequence stays within the bound required by the model, the distribution function of the expansion model preserves the mathematical properties of the random sequence and can restore it completely;
the probability function and distribution function of the random binary sequence are:
p_n(x_1, x_2, …, x_n) = r·p(x_n)·p_{n-1}(x_1, x_2, …, x_{n-1}),
L_n(x_1, x_2, …, x_n) = L_{n-1}(x_1, …, x_{n-1}) + x_n·r·p(1)·p_{n-1}(x_1, …, x_{n-1}),
H_n(x_1, x_2, …, x_n) = L_n(x_1, x_2, …, x_n) + p_n(x_1, x_2, …, x_n),
i.e. H_n = L_n + p_n, and the probability-interval dependency of lossless coding and decoding is:
[L_n(x_1, …, x_n), H_n(x_1, …, x_n)) ⊆ [L_{n-1}(x_1, …, x_{n-1}), H_{n-1}(x_1, …, x_{n-1})) ⊆ … ⊆ [L_1(x_1), H_1(x_1)).
Further, in step 5, the bitwise flipping of the V value may be applied over a finite or an infinite length.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a flow chart of a channel coding method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a channel decoding method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a conventional digital information transmission system.
Detailed Description
To facilitate an understanding of the invention, the invention will now be described more fully with reference to the accompanying drawings. Several embodiments of the invention are presented in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The channel coding method and the corresponding channel decoding method provided by this embodiment are both based on a probability model with interference. The generalized process is the process by which the probability of a symbol changes over a time sequence; if the probability of a symbol in a random process is not disturbed by external interference over the time sequence, the model is a standard model. The standard model is an ideal model; if the probability of a symbol is externally disturbed over the time sequence, there are three basic situations: standard, where the probability does not change; contraction, which makes the probability smaller; and expansion, which makes the probability larger. On this basis, the probability model satisfies the following conditions:
First, a contractible or expansible probability model is constructed: define the time t_n, where n is a natural number greater than or equal to 1, and the probability contraction or expansion coefficient of a symbol at t_n as ω_n; define the stochastic process in which the probabilities of all symbols vary at each time t_n according to the same coefficient ω_n as the generalized process, which has three basic probability models: if ω_n ≡ 1 at every time t_n, the model is defined as the standard model; if 0 < ω_n ≤ 1 at every time and ω_n < 1 at some time, it is defined as the contraction model; if ω_n ≥ 1 at every time and ω_n > 1 at some time, it is defined as the expansion model.
Set the fixed probabilities of symbol 0 and symbol 1 in the random binary sequence to p(0) and p(1). If p(0) + p(1) = 1 and the first-order static coefficient r of the expansion model satisfies r = 1/p(1), then, when the number k of consecutive 1s in the random sequence stays within the bound required by the model, the distribution function of the expansion model preserves the mathematical properties of the random sequence and can restore it completely.
The probability function and distribution function of the random binary sequence are:
p_n(x_1, x_2, …, x_n) = r·p(x_n)·p_{n-1}(x_1, x_2, …, x_{n-1}),
L_n(x_1, x_2, …, x_n) = L_{n-1}(x_1, …, x_{n-1}) + x_n·r·p(1)·p_{n-1}(x_1, …, x_{n-1}),
H_n(x_1, x_2, …, x_n) = L_n(x_1, x_2, …, x_n) + p_n(x_1, x_2, …, x_n),
i.e. H_n = L_n + p_n, and the probability-interval dependency of lossless coding and decoding is:
[L_n(x_1, …, x_n), H_n(x_1, …, x_n)) ⊆ [L_{n-1}(x_1, …, x_{n-1}), H_{n-1}(x_1, …, x_{n-1})) ⊆ … ⊆ [L_1(x_1), H_1(x_1)).
Within the effective range of the interference, the distribution function can correctly restore the symbol at each moment, so the lossless coding and decoding method is constructed on the basis of the distribution function and this dependency.
Referring to fig. 1, a channel coding method according to an embodiment mainly includes the following steps:
Step 1: preprocess the random binary sequence so that each symbol 0 in it becomes the symbols 1, 0, 1 and each symbol 1 becomes the symbols 0, 1.
the description is given by taking a certain fully random binary sequence to be coded as an example, and the fully random binary sequence to be coded specifically includes:
1100101000111101011111110000001010110111110
in specific implementation, the following two ways can be adopted to preprocess the random binary sequence:
the first method comprises the following steps: adding 1 symbol 1 after each symbol 0 to obtain a sequence A;
A=1101011011010101111101101111111101010101010110110111011111101
adding 1 symbol 0 behind each symbol 1 of the sequence A to obtain a sequence B;
B=10100100101001010010010010101010100101001010101010101010010010010010010010100101001010100101010101010010
negating sequence B bitwise to obtain sequence C, i.e. C = ¬B:
C=01011011010110101101101101010101011010110101010101010101101101101101101101011010110101011010101010101101。
The second way: if symbol 0 is taken from the original string, the three symbols 1, 0, 1 are actually encoded in sequence; if symbol 1 is taken from the original string, the two symbols 0, 1 are actually encoded in sequence. The result obtained after this processing is the same as with the first way, and compared with the first way this preprocessing can be optimized for speed.
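The two preprocessing routes can be sketched as follows (a minimal illustration; the function names are ours, not from the patent):

```python
def preprocess_abc(bits: str) -> str:
    # Route 1: insert a 1 after each 0 (A), a 0 after each 1 (B), then invert (C).
    a = bits.replace("0", "01")                    # sequence A
    b = a.replace("1", "10")                       # sequence B
    return b.translate(str.maketrans("01", "10"))  # sequence C = bitwise NOT of B

def preprocess_direct(bits: str) -> str:
    # Route 2: map 0 -> "101" and 1 -> "01" directly; same result, fewer passes.
    return "".join("101" if s == "0" else "01" for s in bits)
```

Both routes agree, and in the resulting sequence C no run of 1s exceeds 2 and no run of 0s exceeds 1, which is exactly the property the decoder's error check exploits.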
Step 2: initialize parameters: set the probability of symbol 0 to p(0) = 1/4, the probability of symbol 1 to p(1) = 3/4, and the first-order static coefficient to r = 4/3. According to H_n(x_1, …, x_n), L_n(x_1, …, x_n) and p_n(x_1, …, x_n) in the probability-model expression, obtain H0 = p0 = 1 and L0 = 0, where L0, H0 and p0 are respectively the initial lower limit, upper limit and interval length of the probability interval; obtain the length Len of the random binary sequence (Len = 43 in this embodiment; Len is the length of the string to be compressed, not the length of sequence C); set the loop variable i = 1, where i indexes the symbol currently being processed, and encoding is complete when i = Len; set the lower bound V = 0 of the probability interval after all symbols are encoded (in this embodiment V = L_43(x_1, x_2, …, x_43)); x_i is the ith symbol waiting to be encoded; p1 = p2 = p3 = 0.
Step 3: if the ith symbol is symbol 0, go to step 4; if the ith symbol is symbol 1, go to step 5.
Step 4: encode the 3 symbols 1, 0, 1 in turn, as follows:
for encoding symbol 1: p1 = r·p(1)·p, where p is the current probability-interval length, and V = V + p1;
for encoding symbol 0: p2 = r·p(0)·p1, V = V + 0;
for encoding symbol 1: p3 = r·p(1)·p2, V = V + p3;
go to step 6.
Step 5: encode the 2 symbols 0, 1 in turn, as follows:
for encoding symbol 0: p1 = r·p(0)·p, where p is the current probability-interval length, and V = V + 0;
for encoding symbol 1: p2 = r·p(1)·p1, V = V + p2;
go to step 6.
Step 6: add 1 to the loop variable i, i.e. i = i + 1; if i ≤ Len, return to step 3 to encode the next symbol; if i > Len, end encoding and output V and Len.
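Under the parameter values read from the description's consistency checks (p(0) = 1/4, p(1) = 3/4, r = 4/3, so r·p(1) = 1 and r·p(0) = 1/3; these values are our reconstruction of the formula images, not verbatim from the patent), steps 2 to 6 can be transcribed literally with exact rational arithmetic:

```python
from fractions import Fraction

# Assumed parameters (our reconstruction of the patent's formula images):
P0, P1, R = Fraction(1, 4), Fraction(3, 4), Fraction(4, 3)

def encode(bits: str):
    """Literal transcription of steps 2-6: V is the interval lower bound, p its length."""
    V, p = Fraction(0), Fraction(1)        # L0 = 0, p0 = 1
    for s in bits:
        # step 1 folded in: original 0 -> symbols 1,0,1; original 1 -> symbols 0,1
        for c in ("101" if s == "0" else "01"):
            if c == "1":
                p = R * P1 * p             # r*p(1) = 1: interval length unchanged
                V = V + p                  # V = V + p_new, per steps 4 and 5
            else:
                p = R * P0 * p             # r*p(0) = 1/3: interval shrinks
    return V, p
```

Each original symbol contributes exactly one encoded 0 (both "101" and "01" contain a single 0), so the final interval length is (1/3)^Len and the output needs about Len·log2 3 ≈ 1.5849625·Len bits, matching the 1/1.5849625 code rate quoted in the description.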
The principle of the above channel coding method is explained as follows:
Since the binary sequence must be preprocessed before the supervision elements are added, first let the binary sequence to be transmitted be completely random, with the number of symbols 0 equal to the number of symbols 1, i.e. p(0) = p(1) = 1/2.
According to the method, the preprocessing of such a random sequence is:
adding 1 symbol 1 after each symbol 0 to obtain a sequence A;
adding 1 symbol 0 after each symbol 1 of sequence A to obtain a sequence B;
then negating sequence B bitwise to obtain a sequence C, i.e. C = ¬B.
Let the total length of the original random sequence be Len. After the three steps above, the number of symbols 0 in sequence C is Len, the number of symbols 1 is (3/2)·Len, and the total length of sequence C is (5/2)·Len.
The probabilities of symbol 0 and symbol 1 at this point are therefore p(0) = 2/5 and p(1) = 3/5.
The sequence is then sent to the encoder; by the information-entropy formula,
H(X) = -(2/5)·log2(2/5) - (3/5)·log2(3/5) ≈ 0.970951.
Obviously, the length of the coded sequence is about 2.4273765 times the original length, and the 1/2 code rate cannot be reached. Now analyze the probability model: with k a positive integer, assign symbol 0 the probability p(0) = 1/2^k and symbol 1 the probability p(1) = (2^k - 1)/2^k.
Here k = 2, so p(0) = 1/4 and p(1) = 3/4; the maximum number of consecutive 1s in sequence C is 2, which meets the condition that k is less than 3. At the same time the first-order static coefficient is r = 1/p(1) = 4/3, so the probability of the current assignment derived from the probability model described above is p′(0) = r·p(0) = 1/3. It can then be concluded that the probability model satisfies the conditional expressions p′(0) = r·p(0) = 1/3 and
p′(1) = r·p(1) = 1. Substituting p′(0) and p′(1) into the entropy formula gives
H′(X) = -(2/5)·log2(1/3) - (3/5)·log2(1) = (2/5)·log2 3 ≈ 0.633985.
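The figures quoted in this derivation can be checked numerically (the assigned probabilities p′(0) = 1/3 and p′(1) = 1 are our reconstruction of the formula images):

```python
import math

p0, p1 = 2 / 5, 3 / 5                               # symbol statistics of sequence C
H = -p0 * math.log2(p0) - p1 * math.log2(p1)        # entropy per symbol of C
expand = 2.5 * H                                    # coded length per original bit, plain entropy coding
Hp = -p0 * math.log2(1 / 3) - p1 * math.log2(1.0)   # cross-entropy with assigned p'(0)=1/3, p'(1)=1
ratio = Hp / H                                      # H'(X) as a fraction of H(X)
rate = 2.5 * Hp                                     # coded length per original bit, this method
print(f"{expand:.7f} {ratio:.6f} {rate:.7f}")       # ~ 2.4273765 0.652953 1.5849625
```

The last two numbers reproduce the 65.2953% ratio and the 1/1.5849625 code rate stated in the text.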
Clearly H′(X) is smaller than H(X): it is only 65.2953% of H(X), so the channel coding constructed on the above probability model is compressive. If a 1/2 code rate is needed, supervision elements can be added, or the probabilities of symbol 0 and symbol 1 can be set so that p′(0) = 1/4 and
p′(1) = 1; this coding method then has higher error-correction capability, and the smaller p′(0) is, the higher the error-correction capability. Next, analyze the error-correction idea for the case where no supervision element is added. Two features can be derived from sequence C:
Feature 1: the maximum number of consecutive 1s is 2.
Feature 2: the number of consecutive 0s is only 1.
The error-correction method can use these two features: for example, if 3 or more consecutive symbols 1, or 2 or more consecutive symbols 0, occur during decoding, a decoding error is considered to exist. The decoder can restore the received data through these two features, decode sequence C, and restore the original binary sequence through the following steps:
1) negate sequence C bitwise to obtain sequence B, i.e. B = ¬C;
2) remove the symbol 0 after each symbol 1 of sequence B to obtain sequence A;
3) remove the symbol 1 after each symbol 0 of sequence A; the original binary sequence is then completely restored.
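The three restoration steps above can be sketched directly on strings (function name ours):

```python
def restore(c: str) -> str:
    b = c.translate(str.maketrans("01", "10"))  # 1) bitwise NOT of C gives B
    a = b.replace("10", "1")                    # 2) drop the 0 after each 1 -> A
    return a.replace("01", "0")                 # 3) drop the 1 after each 0 -> original
```

The greedy left-to-right replace is safe here because in sequence B every 1 is immediately followed by its inserted 0, so each "10" match is a genuine inserted pair; the same argument applies to "01" in sequence A.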
On the basis, referring to fig. 2, an embodiment of a channel decoding method mainly includes the following steps:
Step 1: initialize parameters: set the probability of symbol 0 to p(0) = 1/4, the probability of symbol 1 to p(1) = 3/4, and the first-order static coefficient to r = 4/3; according to the probability-model expression, obtain H0 = p0 = 1 and L0 = 0, where L0, H0 and p0 are respectively the initial lower limit, upper limit and interval length of the probability interval; obtain the length Len of the random binary sequence (Len = 43 in this embodiment, which can be obtained from the encoding result); set the loop variable i = 1, where i indexes the symbol currently being processed, and decoding is complete when i = Len; set a buffer Buff[n], where n is the buffer length (the buffer stores n binary symbols), and a buffer counter lp = 0; set the temporary variable H = 0 (for recording the upper bound of the symbol 0 interval at each step) and obtain the V value; x is a decoded symbol; j indexes the jth binary bit of the V value.
Step 2: obtain the possible probability intervals of the ith symbol x_i according to the probability-model expression (specifically formulas 1.2, 1.3 and 1.4):
the symbol 0 interval is [L_{i-1}, L_{i-1} + r·p(0)·p_{i-1});
the symbol 1 interval is [L_{i-1} + r·p(1)·p_{i-1}, L_{i-1} + 2·r·p(1)·p_{i-1});
go to step 3.
Step 3: judge the interval to which V belongs according to the probability-model expression (specifically formula 1.4):
if V falls within the symbol 0 interval, then x_i = 0, and x_i = 0 is stored in the buffer;
if V falls within the symbol 1 interval, then x_i = 1, and x_i = 1 is stored in the buffer;
go to step 4.
Step 4: detect errors: judge whether the original feature strings are satisfied in the buffer; if yes, decoding is judged correct, go to step 6; if not, decoding is judged incorrect, go to step 5 for error correction.
As the analysis above shows, judging whether the buffer satisfies the original feature strings involves two criteria:
criterion 1: the number of consecutive symbols 1 in the buffer's binary string cannot exceed 2;
criterion 2: the number of consecutive symbols 0 in the buffer's binary string cannot exceed 1;
if both criteria are met, decoding is judged correct and step 6 is entered; if at least one criterion is not met, decoding is judged incorrect and step 5 error correction is performed.
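The two criteria amount to rejecting any buffer content that contains three consecutive 1s or two consecutive 0s, e.g.:

```python
import re

def buffer_ok(buf: str) -> bool:
    # criterion 1: no "111" (more than two consecutive 1s)
    # criterion 2: no "00"  (more than one consecutive 0)
    return re.search(r"111|00", buf) is None
```

A buffer such as "0101101" passes both criteria, while "01110" and "1001" fail.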
Step 5: flip the current V (expressed in binary, whatever number system is otherwise used) bit by bit from left to right; each flip of 1 bit yields a new V. Judge whether j equals the binary length L of the V value: if j ≤ L, set j = j + 1 and return to step 2; if j > L, the V value cannot be corrected, so output an identifier and end decoding.
The bitwise flipping of the V value may be applied over a finite or an infinite length.
And 6, step 6: judging whether the following characteristic strings appear in the buffer from left to right: if the substring is 101, outputting a symbol 0; if the substring is 01, outputting a symbol 1, adding 1 to a loop variable i, namely i is i +1, and entering the step 7;
and 7, step 7: judging, if i is less than or equal to Len, returning to the step 2 for continuous decoding; if i > Len, decoding is ended.
In conclusion, the method provided by the invention adds error-checking and error-correcting relations between symbols during encoding, unifying error detection and error correction; this effectively improves error-correction and error-detection capability and brings the code rate closer to the theoretical entropy value. A self-test experiment simulating an AWGN channel started decoding at the 27th byte: errors of fewer than 3 bits within 27 bytes can be corrected, and errors of 3 bits or more can be detected. In addition, the method performs linear error correction with no time delay; the correction rate and the tolerable error rate are high, and when the bit error rate is below 0.00001, decoding succeeds 100% in one pass. The code rate is low: a traditional method needs a 1/2 code rate, while this method needs only 1/1.5849625. For uncorrectable data, only 100 bits need to be retransmitted rather than the whole data packet, so coding and decoding are faster.
It should be noted that, in practical applications, because of limited computer precision, the probability interval shrinks to very small lengths during encoding, so the infinite-precision probability interval is realized by iteratively reducing a finite-precision probability interval. Similarly, the decoding process takes a finite number of bits of the V value (for example, 32 bits, the width of an int variable on the computer) and decodes within a finite-precision probability interval. When error correction is required, it is impossible to flip an infinite-precision V value from left to right and perform pre-decoding decisions; instead, several adjacent finite-length (e.g. 32-bit) V values are buffered and flipped bit by bit for the pre-decoding decision. This, however, is only a variation in the concrete implementation; the decision rules and the encoding and decoding flow still follow the algorithm flow above and should not be understood as a new method, which is noted here.
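The finite-precision iteration described above corresponds to the standard rescaling step of arithmetic coders. A minimal sketch of the classic E1/E2 rescaling, given only to illustrate the idea (it omits the underflow case and is not the patent's implementation):

```python
def renormalize(low, high, bits):
    """Classic E1/E2 rescaling: while the interval [low, high) lies
    entirely in one half of [0, 1), its leading bit is settled - emit
    it and stretch the interval back toward full width."""
    while True:
        if high <= 0.5:                      # entirely in [0, 0.5): bit 0
            bits.append(0)
            low, high = 2 * low, 2 * high
        elif low >= 0.5:                     # entirely in [0.5, 1): bit 1
            bits.append(1)
            low, high = 2 * low - 1, 2 * high - 1
        else:
            break
    return low, high

bits = []
low, high = renormalize(0.25, 0.375, bits)   # binary 0.010... to 0.011...
```

Here the three settled bits 0, 1, 0 stream out and the working interval returns to [0, 1); the underflow case (an interval straddling 0.5) needs extra care and is omitted.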
The above-mentioned embodiments express only several embodiments of the present invention, and their description is comparatively specific and detailed, but they should not therefore be construed as limiting the scope of the present invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (9)

1. A method of channel coding, comprising:
step 1, preprocessing a random binary sequence so that each symbol 0 in the random binary sequence becomes the symbols 1, 0, 1 and each symbol 1 in the random binary sequence becomes the symbols 0, 1;
step 2, initializing parameters: setting the probability p(0) of symbol 0, the probability p(1) of symbol 1, and the first-order static coefficient r [their values are given by formulas reproduced only as images in the original];
obtaining H0 = p0 = 1 and L0 = 0 according to the probability-model expression, where L0, H0 and p0 are respectively the initial lower limit, upper limit and length of the probability interval; obtaining the length Len of the random binary sequence; setting the loop variable i = 1, where i indexes the symbol currently being processed; setting the subscript V of the probability interval after all symbols are encoded to 0; xi is the ith symbol awaiting encoding; p1 = p2 = p3 = 0;
step 3: if the ith symbol is symbol 0, go to step 4; if the ith symbol is symbol 1, go to step 5;
step 4: encode the 3 symbols 1, 0, 1 in turn, as follows:
for the coded symbol 1: p1 = [formula reproduced as an image in the original], V = V + p1;
for the coded symbol 0: p2 = r·p(0)·p1, V = V + 0;
for the coded symbol 1: p3 = r·p(1)·p2, V = V + p3;
go to step 6;
step 5: encode the 2 symbols 0, 1 in turn, as follows:
for the coded symbol 0: p1 = [formula reproduced as an image in the original], V = V + 0;
for the coded symbol 1: p2 = r·p(1)·p1, V = V + p2;
go to step 6;
step 6: add 1 to the loop variable i, i.e. i = i + 1; if i ≤ Len, return to step 3 to encode the next symbol; if i > Len, end the encoding and output V and Len.
2. The channel coding method according to claim 1, wherein in step 1, preprocessing the random binary sequence specifically comprises:
adding 1 symbol 1 after each symbol 0 to obtain a sequence A;
adding 1 symbol 0 after each symbol 1 of the sequence A to obtain a sequence B;
negating sequence B bitwise to obtain sequence C, i.e. [formula reproduced as an image in the original].
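The three-step construction of claim 2 can be checked mechanically: composed per input symbol, it maps symbol 0 to 101 and symbol 1 to 01, matching step 1 of claim 1. A sketch (function names are ours):

```python
def preprocess(s: str) -> str:
    """Claim-2 pipeline: A = s with a symbol 1 added after each 0;
    B = A with a symbol 0 added after each 1; C = bitwise negation of B."""
    a = s.replace("0", "01")                       # sequence A
    b = a.replace("1", "10")                       # sequence B
    return b.translate(str.maketrans("01", "10"))  # sequence C

# the pipeline coincides with the direct substitution 0 -> 101, 1 -> 01
direct = lambda s: "".join({"0": "101", "1": "01"}[ch] for ch in s)
for s in ["", "0", "1", "10", "0011", "111000"]:
    assert preprocess(s) == direct(s)
```

Both replacements act character by character, so the equivalence holds for every input string, not just the samples tested above.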
3. The channel coding method according to claim 1, wherein in step 1, preprocessing the random binary sequence specifically comprises:
if the symbol 0 is taken from the original string, the three symbols 1, 0 and 1 are actually encoded in sequence; if the symbol 1 is taken from the original string, the two symbols 0 and 1 are actually encoded in sequence.
4. The channel coding method according to claim 1, wherein the probability model satisfies the following condition:
constructing a contractible or expansive probability model: define times tn, where n is a natural number greater than or equal to 1, and let ωn be the probability contraction or expansion coefficient of the symbols at time tn; a stochastic process in which the probabilities of all symbols vary at time tn according to the same coefficient ωn is defined as a generalized process, which has three basic probability models: if ωn ≡ 1 at every time tn, the model is defined as the standard model; if 0 < ωn ≤ 1 at every time and there exists some ωn < 1, the model is defined as the contraction model; if ωn ≥ 1 at every time and there exists some ωn > 1, the model is defined as the expansion model;
setting the fixed probabilities of symbol 0 and symbol 1 in the random binary sequence to p(0) and p(1), respectively; if [condition reproduced only as a formula image in the original], then the first-order static coefficient r of the expansion model satisfies: [formula reproduced as an image in the original];
if the number of consecutive 1s in the random sequence satisfies [condition reproduced as a formula image in the original], the distribution function of the expansion model preserves the mathematical properties of the random sequence, and the random sequence can be completely restored;
the probability function and the distribution function of the random binary sequence are as follows:
[formulas reproduced as images in the original]
i.e. Hn(x1, x2, ..., xn) = Ln(x1, x2, ..., xn) + pn(x1, x2, ..., xn), and the probability-interval dependency of lossless encoding and decoding is:
[Ln(x1, x2, ..., xn), Hn(x1, x2, ..., xn)) ⊆ [Ln-1(x1, x2, ..., xn-1), Hn-1(x1, x2, ..., xn-1)) ⊆ ... ⊆ [L1(x1), H1(x1)).
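The probability-interval dependency at the end of the claim is the defining invariant of arithmetic coding: each encoded symbol replaces the current interval by a sub-interval of itself. A generic sketch with placeholder probabilities (the patent's actual model parameters are given only as formula images):

```python
def subinterval(low, high, cum, p):
    """Replace [low, high) by the sub-interval that covers the
    cumulative-probability slice [cum, cum + p) of it."""
    width = high - low
    return low + cum * width, low + (cum + p) * width

# encode a few symbols with placeholder probabilities p(0)=0.25, p(1)=0.75
low, high = 0.0, 1.0
history = [(low, high)]
for sym in "0110":
    cum, p = (0.0, 0.25) if sym == "0" else (0.25, 0.75)
    low, high = subinterval(low, high, cum, p)
    history.append((low, high))

# each interval is nested in its predecessor, as the claim states
for (l0, h0), (l1, h1) in zip(history, history[1:]):
    assert l0 <= l1 and h1 <= h0 and l1 < h1
```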
5. The channel coding method according to claim 4, wherein the probability model satisfies the following conditional expressions:
[formula reproduced as an image in the original]
p′(1)=rp(1)=1。
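A consequence of the condition p′(1) = r·p(1) = 1 is that encoding a symbol 1 leaves the interval length unchanged, so only the symbols 0 consume code space. A numeric sketch with an assumed p(1) (the actual value in the patent is given only as a formula image):

```python
from fractions import Fraction

p1 = Fraction(3, 4)        # assumed p(1) - not taken from the patent text
r = 1 / p1                 # first-order static coefficient with r*p(1) = 1
p0 = 1 - p1

p = Fraction(1)            # current probability-interval length
assert r * p1 * p == p     # coding a symbol 1 leaves the length unchanged
assert r * p0 * p < p      # coding a symbol 0 shrinks it (here to 1/3)
```

With these assumed values every symbol 0 multiplies the interval by 1/3, i.e. costs log2 3 ≈ 1.585 bits, which is consistent in spirit with the 1/1.5849625 code rate quoted in the description.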
6. a method of channel decoding, comprising:
step 1: parameter initialization: setting the probability p(0) of symbol 0, the probability p(1) of symbol 1, and the first-order static coefficient r [their values are given by formulas reproduced only as images in the original];
obtaining H0 = p0 = 1 and L0 = 0 according to the probability-model expression, where L0, H0 and p0 are respectively the initial lower limit, upper limit and length of the probability interval; obtaining the length Len of the random binary sequence; setting the loop variable i = 1, where i indexes the symbol currently being processed; setting a buffer Buff[n], where n is the buffer length, and a buffer counter lp = 0; setting a temporary variable H = 0; obtaining the V value; x is the decoded symbol; j indexes the jth binary bit of the V value;
step 2: obtaining, according to the probability-model expression, the possible probability intervals of the ith symbol xi:
the symbol 0 interval is: [formula reproduced as an image in the original];
the symbol 1 interval is: [formula reproduced as an image in the original];
go to step 3;
step 3: judge the interval to which V belongs according to the probability-model expression:
if V satisfies [condition reproduced as a formula image in the original] or [second condition, also an image], then xi = 0, and xi is stored in the buffer;
if V satisfies [condition reproduced as a formula image in the original] or [second condition, also an image], then xi = 1, and xi is stored in the buffer;
go to step 4;
step 4: error detection: judge whether the original feature strings are satisfied in the buffer; if so, the decoding is judged correct, and step 6 is entered; if not, the decoding is judged incorrect, and step 5 is entered for error correction;
step 5: flip the current V bit by bit from left to right, obtaining a new V each time 1 bit is flipped; compare j with the binary length L of the V value: if j ≤ L, set j = j + 1 and return to step 2; if j > L, output an identifier and end decoding;
step 6: judge, from left to right, which feature strings appear in the buffer: if the substring is 101, output symbol 0; if the substring is 01, output symbol 1; then add 1 to the loop variable i, i.e. i = i + 1, and enter step 7;
step 7: if i ≤ Len, return to step 2 to continue decoding; if i > Len, decoding ends.
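For illustration only, the flip-and-retest loop of step 5 can be mimicked at the string level; the patent flips bits of the code value V, whereas this sketch flips bits of the buffered feature string and re-runs the feature-string check:

```python
def valid(s: str) -> bool:
    """True if s parses as a concatenation of the feature strings 101 and 01."""
    i = 0
    while i < len(s):
        if s.startswith("101", i):
            i += 3
        elif s.startswith("01", i):
            i += 2
        else:
            return False
    return True

def correct_by_flipping(s):
    """Flip one bit at a time, left to right, until the check passes;
    return None if no single flip works (step 5's 'output an identifier')."""
    if valid(s):
        return s
    for j in range(len(s)):
        t = s[:j] + ("1" if s[j] == "0" else "0") + s[j + 1:]
        if valid(t):
            return t
    return None
```

With s = "10001" (the feature string "10101" with bit 2 flipped by the channel), the loop restores "10101".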
7. The channel decoding method according to claim 6, wherein judging in step 4 whether the original feature strings are satisfied in the buffer comprises two criteria:
criterion 1: the number of consecutive symbols 1 in the buffer's binary string cannot be larger than 2;
criterion 2: the number of consecutive symbols 0 in the buffer's binary string cannot be larger than 1;
if both criterion 1 and criterion 2 are satisfied, the decoding is judged correct, and step 6 is entered; if at least one criterion is not satisfied, the decoding is judged incorrect, and step 5 error correction is performed.
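Criteria 1 and 2 follow from the preprocessing: any concatenation of the feature strings 101 and 01 contains at most two consecutive 1s and never two consecutive 0s, so a violation signals a channel error. A sketch of the check (function names are ours):

```python
def satisfies_criteria(buff: str) -> bool:
    """Criterion 1: no run of more than two 1s; criterion 2: no run of
    more than one 0 - both checked as forbidden substrings."""
    return "111" not in buff and "00" not in buff

# every preprocessed sequence passes the check...
encode = lambda s: "".join({"0": "101", "1": "01"}[ch] for ch in s)
assert satisfies_criteria(encode("0110100"))

# ...while a single flipped bit is typically caught
good = encode("01")                      # "10101"
bad = good[:2] + "0" + good[3:]          # channel flips bit 2: "10001"
assert not satisfies_criteria(bad)
```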
8. The channel decoding method according to claim 6, wherein the probability model satisfies the following condition:
constructing a contractible or expansive probability model: define times tn, where n is a natural number greater than or equal to 1, and let ωn be the probability contraction or expansion coefficient of the symbols at time tn; a stochastic process in which the probabilities of all symbols vary at time tn according to the same coefficient ωn is defined as a generalized process, which has three basic probability models: if ωn ≡ 1 at every time tn, the model is defined as the standard model; if 0 < ωn ≤ 1 at every time and there exists some ωn < 1, the model is defined as the contraction model; if ωn ≥ 1 at every time and there exists some ωn > 1, the model is defined as the expansion model;
setting the fixed probabilities of the symbol 0 and the symbol 1 in the random binary sequence to p(0) and p(1), respectively; if [condition reproduced only as a formula image in the original], then the first-order static coefficient r of the expansion model satisfies: [formula reproduced as an image in the original];
if the number of consecutive 1s in the random sequence satisfies [condition reproduced as a formula image in the original], the distribution function of the expansion model preserves the mathematical properties of the random sequence, and the random sequence can be completely restored;
the probability function and the distribution function of the random binary sequence are as follows:
[formulas reproduced as images in the original]
i.e. Hn(x1, x2, ..., xn) = Ln(x1, x2, ..., xn) + pn(x1, x2, ..., xn), and the probability-interval dependency of lossless encoding and decoding is:
[Ln(x1, x2, ..., xn), Hn(x1, x2, ..., xn)) ⊆ [Ln-1(x1, x2, ..., xn-1), Hn-1(x1, x2, ..., xn-1)) ⊆ ... ⊆ [L1(x1), H1(x1)).
9. The channel decoding method according to claim 6, wherein in step 5 the V value to which bit flipping is applied may be of finite or infinite length.
CN201811154122.0A 2018-09-30 2018-09-30 Channel coding and decoding method Active CN109495211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811154122.0A CN109495211B (en) 2018-09-30 2018-09-30 Channel coding and decoding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811154122.0A CN109495211B (en) 2018-09-30 2018-09-30 Channel coding and decoding method

Publications (2)

Publication Number Publication Date
CN109495211A CN109495211A (en) 2019-03-19
CN109495211B true CN109495211B (en) 2020-12-29

Family

ID=65689387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811154122.0A Active CN109495211B (en) 2018-09-30 2018-09-30 Channel coding and decoding method

Country Status (1)

Country Link
CN (1) CN109495211B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110798224A (en) * 2019-11-13 2020-02-14 青岛芯海源信息科技有限公司 Compression coding, error detection and decoding method
CN113297591B (en) * 2021-05-07 2022-05-31 湖南遥昇通信技术有限公司 Webpage resource encryption method, equipment and storage medium
CN113746599B (en) * 2021-08-24 2024-03-22 湖南遥昇通信技术有限公司 Encoding method, decoding method, terminal, electronic device, and storage medium
CN116527904B (en) * 2023-07-03 2023-09-12 鹏城实验室 Entropy coding method, entropy decoding method and related devices

Citations (7)

Publication number Priority date Publication date Assignee Title
CN1444352A (en) * 2002-09-09 2003-09-24 西南交通大学 Method for parallelly-redundantly transmitting and parallelly-merging and receiving block data in mixed automatic retransmission request system
EP1359697A1 (en) * 2002-04-30 2003-11-05 Psytechnics Ltd Method and apparatus for transmission error characterisation
CN1981471A (en) * 2004-05-06 2007-06-13 高通股份有限公司 Method and apparatus for joint source-channel MAP decoding
CN102223533A (en) * 2011-04-14 2011-10-19 广东工业大学 Signal decoding and coding method and device
CN102349255A (en) * 2009-03-13 2012-02-08 夏普株式会社 Methods and devices for providing unequal error protection code design from probabilistically fixed composition codes
CN103138769A (en) * 2013-01-17 2013-06-05 中山大学 Encoding method provided with unequal error protection
CN107483154A (en) * 2017-08-17 2017-12-15 辽宁工业大学 A kind of degree distribution function design method of Internet fountain codes and channel combined coding

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7068729B2 (en) * 2001-12-21 2006-06-27 Digital Fountain, Inc. Multi-stage code generator and decoder for communication systems


Non-Patent Citations (2)

Title
"Impact of error-detecting/error-correcting codes on reliable data transmission over noisy channels in ATM systems";B.X. Weis;《IEEE Transactions on Communications》;19910430;588-593 *
"无线纠错模块";湖南瑞利德信息科技有限公司;《http://www.rilled.cn/index.php/content/13》;20180412;1 *

Also Published As

Publication number Publication date
CN109495211A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN109495211B (en) Channel coding and decoding method
JP3017379B2 (en) Encoding method, encoding device, decoding method, decoder, data compression device, and transition machine generation method
CN107040262B (en) Method for calculating L ist predicted value of polar code SC L + CRC decoding
CN108768403B (en) LZW-based lossless data compression and decompression method, LZW encoder and decoder
Rahmati A quick note on undiversified turbo coding
WO2021164064A1 (en) Method and device for channel coding and error correction decoding, and storage medium
US7990290B1 (en) Efficient rateless distributed compression of non-binary sources
CN112398484B (en) Coding method and related equipment
CN110291793B (en) Method and apparatus for range derivation in context adaptive binary arithmetic coding
Ahmed et al. Information and communication theory-source coding techniques-part II
CN114556791A (en) Iterative bit flipping decoding based on symbol reliability
DE60111974D1 (en) Abort criterion for a turbo decoder
CN107181567B (en) Low-complexity MPA algorithm based on threshold
US20200204299A1 (en) System and a method for error correction coding using a deep neural network
CN110798224A (en) Compression coding, error detection and decoding method
CN108270508B (en) Cyclic redundancy check CRC implementation method, device and network equipment
CN112165338A (en) Estimation method for interleaving relation of convolutional code random interleaving sequence
CN109412611B (en) Method for reducing LDPC error code flat layer
Abbe Universal source polarization and sparse recovery
CN101411071A (en) MAP decoder with bidirectional sliding window architecture
CN111835363B (en) LDPC code decoding method based on alternate direction multiplier method
Hameed et al. A new lossless method of Huffman coding for text data compression and decompression process with FPGA implementation
US6101281A (en) Method for improving data encoding and decoding efficiency
US20220006475A1 (en) Performance enhancement of polar codes for short frame lengths considering error propagation effects
Berezkin et al. Data compression methods based on Neural Networks

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant