CN113783659A - Data processing method, device and medium based on binary erasure channel - Google Patents
- Publication number
- CN113783659A (application CN202110973566.2A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/004—Arrangements for detecting or preventing errors in the information received by using forward error control
- H04L1/0056—Systems characterized by the type of code used
- H04L1/0057—Block codes
- H03—ELECTRICITY; ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/25—Error detection or forward error correction by signal space coding, i.e. adding redundancy in the signal constellation, e.g. Trellis Coded Modulation [TCM]
- H03M13/37—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
- H03M13/373—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35, with erasure correction and erasure determination, e.g. for packet loss recovery or setting of erasures for the decoding of Reed-Solomon codes
Abstract
The invention discloses a data processing method, device and medium based on a binary erasure channel. The method comprises the following steps: acquiring an output sequence from the output end of a binary erasure channel, wherein the input sequence corresponding to the input end of the binary erasure channel is obtained by performing weighted probability arithmetic coding on a target sequence, and the target sequence is obtained by performing source processing on a source sequence; performing weighted probability arithmetic decoding on the output sequence to obtain a decoded sequence; and comparing the decoded sequence with the target sequence, and restoring the source sequence or performing forward error correction on the decoded sequence according to the comparison result. The method enables the transmission rate to approach the channel capacity of a binary erasure channel while providing data check and error correction capabilities.
Description
Technical Field
The present invention relates to the field of information coding technologies, and in particular, to a data processing method, device, and medium based on a binary erasure channel.
Background
Researchers have made continuous efforts to construct coding methods that approach channel capacity. In 2009, Professor Arikan proposed a coding method based on the channel polarization phenomenon, called polar codes, which was proved to achieve capacity as the code length approaches infinity. LDPC (Low Density Parity Check) codes and Turbo codes can also approach the Shannon limit.
Most current schemes are coding technologies for the BSC (binary symmetric channel); coding technology for the BEC (binary erasure channel) is lacking, in particular a coding and decoding method whose transmission rate can approach the channel capacity of the BEC while providing data check and error correction capabilities.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art. Therefore, the invention provides a data processing method, system, device and medium based on a binary erasure channel. The method enables the transmission rate to approach the channel capacity while providing data check and error correction capabilities.
In a first aspect of the present invention, a data processing method based on a binary erasure channel is provided, which is applied to a receiving end, and the data processing method includes:
acquiring an output sequence from an output end of a binary erasure channel, wherein an input sequence corresponding to the input end of the binary erasure channel is a sequence obtained by performing weighted probability arithmetic coding on a target sequence, and the target sequence is a sequence obtained by performing information source processing on an information source sequence;
performing weighted probability arithmetic decoding on the output sequence to obtain a decoded sequence, wherein the weighted probability arithmetic decoding is the inverse process of the weighted probability arithmetic coding;
and comparing the decoding sequence with the target sequence, and restoring the source sequence or carrying out forward error correction on the decoding sequence according to a comparison result.
In a second aspect of the present invention, a data processing method based on a binary erasure channel is provided, which is applied to a sending end, and the data processing method includes:
acquiring an information source sequence;
carrying out information source processing on the information source sequence to obtain a target sequence;
performing weighted probability arithmetic coding on the target sequence to obtain an input sequence;
and transmitting the input sequence to a receiving end through a binary erasure channel so as to trigger the receiving end to carry out weighted probability arithmetic decoding on the output sequence output by the binary erasure channel, trigger the receiving end to compare a decoding result with the target sequence, and trigger the receiving end to restore the information source sequence or carry out forward error correction on the decoding sequence according to the comparison result, wherein the weighted probability arithmetic decoding is the inverse process of the weighted probability arithmetic coding.
In a third aspect of the present invention, there is provided an electronic apparatus comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor when executing the computer program implementing: the data processing method based on the binary erasure channel according to the first aspect of the present invention or the data processing method based on the binary erasure channel according to the second aspect of the present invention.
In a fourth aspect of the present invention, there is provided a computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions are configured to perform: the data processing method based on the binary erasure channel according to the first aspect of the present invention or the data processing method based on the binary erasure channel according to the second aspect of the present invention.
In the data processing method based on the binary erasure channel provided in the first aspect of the embodiment of the present application, it is possible to achieve that the transmission rate approaches the channel capacity of the binary erasure channel, and it has the data check and error correction capabilities.
It is to be understood that the advantageous effects of the second aspect to the fourth aspect compared to the related art are the same as the advantageous effects of the first aspect compared to the related art, and reference may be made to the related description of the first aspect, which is not repeated herein.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a block diagram of a data processing system based on a binary erasure channel according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a data processing method based on a binary erasure channel according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a data processing method based on a binary erasure channel according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a process of encoding a symbol 010 by a weighting model according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating a data processing method based on a binary erasure channel according to another embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
Most current schemes are coding technologies for the BSC (binary symmetric channel); coding technology for the BEC (binary erasure channel) is lacking, in particular a coding and decoding method whose transmission rate can approach the channel capacity of the BEC while providing data check and error correction capabilities.
In order to solve the above-mentioned drawbacks, referring to fig. 1, an embodiment of the present invention provides a data processing system based on a binary erasure channel, where the system includes a sending end and a receiving end, and it should be noted that the present invention does not limit the configuration of the sending end and the receiving end, and the sending end and the receiving end may be any terminals with data processing functions. The transmitting end mainly comprises the processes of executing information source processing, weighting probability arithmetic coding and transmitting a coding result; the receiving end mainly comprises the processes of receiving the output result of the binary erasure channel, carrying out weighted probability arithmetic decoding on the output result, carrying out forward error correction and restoring an information source sequence.
Referring to fig. 2, a data processing method based on a binary erasure channel is provided on the basis of the system embodiment, where the receiving end is the execution subject, and the method mainly includes the following steps:
step S101, a receiving end obtains an output sequence from an output end of a binary erasure channel, an input sequence corresponding to the input end of the binary erasure channel is a sequence obtained by performing weighted probability arithmetic coding on a target sequence, and the target sequence is a sequence obtained by performing information source processing on an information source sequence.
To facilitate understanding by those skilled in the art, the source sequence in step S101 is denoted X, the target sequence Y, the input sequence V, and the output sequence U.
The following three binary sequences are listed:
firstly, randomly generating a binary source sequence X with a length of n, wherein X is equal to (0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0), and the probability of symbols in the sequence X is equal;
second, source processing is performed on the sequence X: the symbol "1" in the sequence X is replaced with "10" to obtain a sequence Y, where Y is (0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0);
thirdly, another source processing is performed on the sequence X: the symbol 1 in the sequence X is replaced by "101" and the symbol 0 is replaced by "01", giving a sequence Z, where Z is (0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, ...). The possible source processings of the source sequence X are not exhaustively listed here.
Let the length of sequence Y or sequence Z be denoted l. The purpose of source processing (i.e. sequence conversion) is to make sequence Y and sequence Z satisfy a data check condition. Converting the sequence X into the sequence Z is more computationally intensive than converting it into the sequence Y. The data check condition for sequence Y is:
the number of consecutive symbols 1 in the sequence is at most 1 (1)
The data verification condition of the sequence Z is as follows:
the number of consecutive symbols 0 in the sequence is at most 1 and the number of consecutive symbols 1 is at most 2 (2)
The sequence Z has one more data verification condition than the sequence Y, and different conversion methods enable the sequence to have more data verification conditions. After the information source processing, a large amount of redundant information exists in the sequence Y and the sequence Z, and the sequence Y and the sequence Z can be subjected to lossless compression coding by adopting entropy coding.
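The two conversions above and their check conditions can be sketched as follows (a minimal illustration, not part of the patent text; the function names are ours):

```python
# Sketch of the two source-processing maps and their data-check predicates.

def to_Y(x):
    # Replace every symbol 1 in X with "10"; symbol 0 is kept as-is.
    return [b for s in x for b in ((1, 0) if s == 1 else (0,))]

def to_Z(x):
    # Replace symbol 1 with "101" and symbol 0 with "01".
    return [b for s in x for b in ((1, 0, 1) if s == 1 else (0, 1))]

def check_Y(y):
    # Condition (1): at most one consecutive symbol 1.
    return all(not (a == 1 and b == 1) for a, b in zip(y, y[1:]))

def check_Z(z):
    # Condition (2): at most one consecutive 0 and at most two consecutive 1s.
    run, ok = 1, True
    for a, b in zip(z, z[1:]):
        run = run + 1 if a == b else 1
        ok = ok and run <= (1 if b == 0 else 2)
    return ok

X = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0]
```

Both maps produce sequences that satisfy their check conditions by construction, which is what makes decoded sequences violating the conditions detectable as errors.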
Arithmetic coding is known to have poor error tolerance: if an erroneous binary sequence Y' or Z' is decoded from a sequence V containing bit errors, then when Y' does not satisfy formula (1) above it is certain that Y' ≠ Y, and when Z' does not satisfy formula (2) above it is certain that Z' ≠ Z. Thus, through source processing, data check can be achieved after encoding and decoding.
Since there is an obvious context relationship between the symbols in the sequence Y and the sequence Z, taking the sequence Y as an example, a Context-Adaptive Binary Arithmetic Coding (CABAC) method can be selected to encode the sequence Y = (y_1, y_2, ..., y_l). Let R_0 = 1 and L_0 = 0. For the i-th symbol y_i (i = 1, 2, 3, ...), the binary arithmetic coding recursion is:

R_i = R_{i-1}·p(y_i)

L_i = L_{i-1} + R_{i-1}·F(y_i − 1)

H_i = L_i + R_i (3)

It should be noted that formula (3) comprises three sub-formulas; formula (3) is common knowledge to those skilled in the art and is not described in detail here.
Each symbol x_i of the sequence X satisfies x_i ∈ {0, 1}, so when y_i = 0, p(−1) = 0 and F(−1) = 0 (F denotes the cumulative distribution function). Agree that y_0 = 0. P(y_i = 0) then involves the two conditional probabilities p(0|0) and p(0|1); when y_i = 1, F(0) = P(y_i = 0), and P(y_i = 1) involves the single conditional probability p(1|0). When y_{i−1} = 1 and y_i = 0, the probability of the i-th symbol being 0 is P(y_i = 0) = p(0|1) = 1, and formula (3) gives L_i = L_{i−1} and R_i = R_{i−1}, hence H_i = H_{i−1}: the current symbol 0 is encoded in null (i.e. skipped). L_l is the encoding result.
The information entropy of the sequence X is H(X) = −p(0) log₂ p(0) − p(1) log₂ p(1), where p(0) = p, p(1) = 1 − p, p is the probability of the symbol 0 and 0 ≤ p ≤ 1. The information entropy of the sequence Y is H(Y) = −p(0|0) log₂ p(0|0) − p(1|0) log₂ p(1|0) − p(0|1) log₂ p(0|1); because p(0|1) = 1, p(0|0) = p and p(1|0) = 1 − p, H(Y) = H(X). Thus L_l = L_n, i.e. the encoding result L_l of the sequence Y is equivalent to the encoding result L_n of the sequence X.
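The equivalence L_l = L_n can be checked numerically. Below is a sketch (assumed symbol probability p = 3/5, exact rational arithmetic; not part of the patent text) in which context coding of Y, where the forced 0 after each 1 has p(0|1) = 1 and so leaves the interval unchanged, reproduces the interval obtained by coding X directly:

```python
from fractions import Fraction

p = Fraction(3, 5)  # assumed probability of symbol 0 in X (illustrative)

def code_X(x):
    # Plain arithmetic coding of X with p(0) = p, p(1) = 1 - p (formula (3)).
    L, R = Fraction(0), Fraction(1)
    for s in x:
        if s == 0:
            R = R * p
        else:
            L, R = L + R * p, R * (1 - p)
    return L, R

def code_Y(y):
    # Context coding of Y: p(0|0) = p, p(1|0) = 1 - p, p(0|1) = 1, so the
    # forced 0 after each 1 is coded "in null" (interval unchanged).
    L, R, prev = Fraction(0), Fraction(1), 0
    for s in y:
        if prev == 1:
            pass  # p(0|1) = 1: L and R unchanged
        elif s == 0:
            R = R * p
        else:
            L, R = L + R * p, R * (1 - p)
        prev = s
    return L, R

x = [0, 1, 1, 0, 1, 0, 0, 1]
y = [b for s in x for b in ((1, 0) if s else (0,))]  # source map 1 -> "10"
```

Coding x and y yields identical (L, R) intervals, illustrating the claim that the encoding results are equivalent.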
The encoding result L_l of the sequence Y is converted into a binary sequence V = (v_1, v_2, ..., v_m) of length m. V is transmitted through a DMC (Discrete Memoryless Channel); the DMC family includes four basic channel models, BSC, BEC, AWGN (Additive White Gaussian Noise) and the Rayleigh channel, and this method mainly concerns the application of the binary erasure channel (BEC) among them. U = (u_1, u_2, ..., u_m) is the received binary sequence.
As shown in fig. 3, if U = V, the sequence U is decoded into a sequence Q from which the sequence X can be restored. If U ≠ V, the sequence U decodes into an erroneous sequence X', and since the sequence X' carries no data check condition, the error in the sequence U cannot be detected. Similarly, the sequence Z cannot implement data check under context-based binary arithmetic coding, and a data check coding method for the sequence Y or Z cannot be constructed from conditional probability (or a Markov chain) alone.
When y isi-1=1,yiProbability P (y) of 0 and 0 of i-th symboliWhen p (0|1) ═ r (0 < r < 1), U ═ V, then sequence U is decoded to obtain sequence Q ═ Y; since U ≠ V, since the sequence Q must satisfy the above equation (1), it can be effectively determined that U has an error.
P(y_i = 0) = p(0|1) = r can be understood as P(y_i = 0) = r·p(0|1) with p(0|1) = 1; r is called the weight coefficient of the probability, and accordingly the weight coefficients of p(0|0) and p(1|0) are 1. When r < 1 and r → 1, the encoding result L_l of the sequence Y approaches the encoding result L_n of the sequence X; this conclusion is easily proved from the information entropy.
Let p(y_i, r) = r·p(y_i) be defined as a weighted probability. The simplest weighted probability takes the same real number r as the weight coefficient of all symbol probabilities. Let y_i ∈ A = {0, 1, ..., s}; the weighted cumulative distribution function is defined as:

F(y_i, r) = r·F(y_i) (4)

If y_i ∈ {0, 1}: when y_i = 0, F(y_i − 1, r) = r·F(−1) = 0; when y_i = 1, F(y_i − 1, r) = r·F(0) = r·p(0). Let R_0 = 1 and L_0 = 0. For the i-th symbol y_i (i = 1, 2, 3, ...), the weighted probability arithmetic coding recursion is:

R_i = R_{i-1}·r·p(y_i)

L_i = L_{i-1} + R_{i-1}·r·F(y_i − 1)

H_i = L_i + R_i (5)

Note that formula (5) comprises three sub-formulas.
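A sketch of recursion (5) with exact rationals (assumed example values p = 3/5 and r = 11/10, not from the patent) applied to the pattern 0, 1, 0:

```python
from fractions import Fraction

def step(L, R, y, p, r):
    # One step of formula (5): R_i = R_{i-1} r p(y_i),
    # L_i = L_{i-1} + R_{i-1} r F(y_i - 1), with F(-1) = 0 and F(0) = p.
    if y == 0:
        return L, R * r * p
    return L + R * r * p, R * r * (1 - p)

p, r = Fraction(3, 5), Fraction(11, 10)  # assumed example values, r > 1
L, R = Fraction(0), Fraction(1)
for y in (0, 1, 0):                      # the pattern of symbols 0, 1, 0
    L, R = step(L, R, y, p, r)
# Starting from L_0 = 0, R_0 = 1 this yields L = r^2 p^2 and
# R = r^3 p^2 (1 - p), the closed forms used in the derivation below.
```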
The transmission rate of the DMC channel in fig. 3 is I(V; U) = H(V) − H(V|U). The BEC(ε) channel is one of the DMC channels, where ε denotes the erasure probability of a symbol. The BEC(ε) channel transmits symbols v ∈ {0, 1} and receives symbols u ∈ {0, 1, *}, where * is the erasure symbol; in the BEC(ε) channel a transmitted symbol is erased with probability ε. Then p(u = 0 | v = 0) = p(u = 1 | v = 1) = 1 − ε, the transmission rate of the BEC(ε) channel is R_BEC = H(V) − ε, and the channel capacity C_BEC is:

C_BEC = 1 − ε (6)
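The erasure behaviour and the capacity C_BEC = 1 − ε can be illustrated with a small simulation (a sketch, not from the patent; the sample size and seed are arbitrary):

```python
import random

def bec(v, eps, rng):
    # BEC(eps): each transmitted bit is erased (None) with probability eps,
    # and delivered unchanged otherwise.
    return [None if rng.random() < eps else b for b in v]

rng = random.Random(0)
eps = 0.2
v = [rng.randint(0, 1) for _ in range(100_000)]
u = bec(v, eps, rng)
erased = sum(b is None for b in u) / len(u)   # empirical erasure rate -> eps
capacity = 1 - eps                            # formula (6): C_BEC = 1 - eps
```

Non-erased symbols are always delivered correctly, which is what distinguishes the BEC from the BSC and makes the forward error correction of step S103 a finite enumeration.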
before introducing weighted probability arithmetic coding of a target sequence Y to obtain an input sequence V, the logic principle of weighted probability arithmetic coding is introduced:
Definition 1: let y be a discrete random variable, y ∈ A = {0, 1, ..., k}, with P{y = a} = p(a) (a ∈ A). The weighted probability mass function is p(a, r) = r·p(a), where p(a) is a probability mass function, 0 ≤ p(a) ≤ 1, r is a weight coefficient, and:

F(a) = Σ_{y ≤ a} p(y) (7)

If F(a, r) satisfies F(a, r) = r·F(a), then F(a, r) is called a weighted cumulative distribution function, abbreviated as a weighted distribution function. Clearly the weighted probability sum over all symbols is Σ_{a ∈ A} r·p(a) = r. There are three basic cases for the weight coefficient r: 0 < r < 1; r = 1; and r > 1, in which case lossless coding must have a maximum value r_max. Let r > 1 and A = {0, 1}, and let the probabilities of symbol 0 and symbol 1 in the sequence Y be p(0) = p and p(1) = 1 − p respectively. The process of encoding y_{i+1} = 0, y_{i+2} = 1 and y_{i+3} = 0 according to formula (5) above is shown in fig. 4.
According to fig. 4, L_{i+3} = L_i + R_i r²p², R_{i+3} = R_i r³p²(1 − p), and H_{i+3} = L_i + R_i r²p² + R_i r³p²(1 − p). Since y_{i+1} = 0 and F(−1) = 0, L_{i+1} = L_i, R_{i+1} = R_i·r·p and H_{i+1} = L_i + R_i·r·p. Letting H_{i+3} ≤ H_{i+1} and dividing both sides by R_i·r·p gives:

r·p + r²·p(1 − p) ≤ 1

Let the equation a·r² + b·r + c = 0 have a = p(1 − p), b = p, c = −1 and r > 0. The positive real root of the equation is:

r_max = (√(4p − 3p²) − p) / (2p(1 − p))

Binary sequences of all 0s or all 1s are not considered and need no error correction, so 0 < p < 1. In decoding, the symbol y_{i+1} is assigned its interval according to formula (5): when y_{i+1} = 0 the interval is [L_i, L_i + R_i·r·p), and when y_{i+1} = 1 the interval is [L_i + R_i·r·p, L_i + R_i·r). Then, when L_{i+3} < L_i + R_i·r·p, y_{i+1} = 0; when L_{i+3} ≥ L_i + R_i·r·p, y_{i+1} = 1. Since 1 − p > 0, 4(1 − p)² > 0, 4p² − 8p + 4 > 0, p² − 4p + 4 > 4p − 3p², (2 − p)² > 4p − 3p², so √(4p − 3p²) < 2 − p and hence r_max < 1/p, i.e. r_max·p < 1. Suppose L_{i+3} = L_i + R_i r²p² ≥ L_i + R_i·r·p; this requires r·p ≥ 1, which contradicts 0 < r ≤ r_max and r_max·p < 1. Hence L_{i+3} = L_i + R_i r²p² < L_i + R_i·r·p, and y_{i+1} = 0 is decoded.
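The positive root can be computed directly; a sketch (the quadratic p(1 − p)r² + pr − 1 = 0 is the reconstruction used above, and the example value of p is ours):

```python
import math

def r_max(p):
    # Positive root of p(1-p) r^2 + p r - 1 = 0, i.e. the largest r with
    # r p + r^2 p (1 - p) <= 1 (the condition H_{i+3} <= H_{i+1}).
    a, b = p * (1 - p), p
    return (-b + math.sqrt(b * b + 4 * a)) / (2 * a)

p = 0.6
r = r_max(p)
# r_max lies strictly between 1 and 1/p, so r_max * p < 1 as derived above.
```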
Since y_{i+1} = 0, L_{i+1} = L_i and R_{i+1} = R_i·r·p. When L_{i+3} < L_{i+1} + R_{i+1}·r·p, y_{i+2} = 0; when L_{i+3} ≥ L_{i+1} + R_{i+1}·r·p, y_{i+2} = 1. Since L_{i+1} + R_{i+1}·r·p = L_i + R_i r²p², we have L_{i+3} = L_{i+1} + R_{i+1}·r·p, and therefore y_{i+2} = 1.
Since y_{i+2} = 1, L_{i+2} = L_{i+1} + R_{i+1}·r·F(0) = L_i + R_i r²p² and R_{i+2} = R_{i+1}·r·(1 − p) = R_i r²p(1 − p). When L_{i+3} < L_{i+2} + R_{i+2}·r·p, y_{i+3} = 0; when L_{i+3} ≥ L_{i+2} + R_{i+2}·r·p, y_{i+3} = 1. Since L_{i+2} + R_{i+2}·r·p = L_i + R_i r²p² + R_i r³p²(1 − p) and R_i r³p²(1 − p) > 0, L_{i+3} = L_i + R_i r²p² < L_{i+2} + R_{i+2}·r·p, and therefore y_{i+3} = 0.
Proof: suppose L_i (i ≥ 1, i ∈ Z) losslessly decodes y_1, y_2, ..., y_i. Since the sequence Y satisfies formula (1) above, two cases are considered.
In the first case, y_i = 1, so y_{i+1} = 1 cannot occur. When y_{i+1} = 0 is encoded according to formula (5), L_{i+1} = L_i and R_{i+1} = R_i·r·p; because L_{i+1} = L_i, L_{i+1} losslessly encodes y_1, y_2, ..., y_i. Since R_i·r·p > 0, L_{i+1} < L_i + R_i·r·p, so y_{i+1} = 0 is decoded.
In the second case, y_i = 0 is encoded according to formula (5) to give L_i = L_{i−1} and R_i = R_{i−1}·r·p, and L_{i−1} losslessly encodes y_1, y_2, ..., y_i. When y_{i+1} = 0 is encoded according to formula (5), L_{i+1} = L_i and R_{i+1} = R_i·r·p; because L_{i+1} = L_i, L_{i+1} losslessly encodes y_1, y_2, ..., y_i. Since L_{i+1} < L_i + R_i·r·p, y_{i+1} = 0 is decoded.
When y_{i+1} = 1 is encoded according to formula (5), L_{i+1} = L_i + R_i·r·p = L_{i−1} + R_{i−1} r²p² and R_{i+1} = R_i·r·(1 − p) = R_{i−1} r²p(1 − p). Since r·p < 1, L_{i+1} < L_{i−1} + R_{i−1}·r·p, so y_i = 0 is decoded. Since y_i = 0 and L_{i+1} = L_i + R_i·r·p, y_{i+1} = 1 is decoded. When y_i = 0 and y_{i+1} = 1, if it can be proved that y_{i−1} can be correctly decoded, then by induction L_{i+1} losslessly encodes y_1, y_2, ..., y_i, y_{i+1}.
When y_{i−1} = 0, y_i = 0 and y_{i+1} = 1, encoding gives L_i = L_{i−1} = L_{i−2} and L_{i+1} = L_{i−2} + R_{i−2} r³p³. Since r·p < 1, L_{i+1} < L_{i−2} + R_{i−2}·r·p, so y_{i−1} = 0 is decoded. When y_{i−1} = 1, y_i = 0 and y_{i+1} = 1, encoding gives L_{i−1} = L_{i−2} + R_{i−2}·r·p, L_i = L_{i−1} and L_{i+1} = L_i + R_i·r·p = L_{i−2} + R_{i−2}·r·p + R_{i−2} r³p²(1 − p). Obviously L_{i+1} > L_{i−2} + R_{i−2}·r·p, so y_{i−1} = 1 is decoded. That is, y_{i−1} can be correctly decoded.
Let the t + 2 (t = 1, 2, 3, ...) symbols from the (i+1)-th position in the sequence Y be 0, 1, ..., 1, 0, where the number of consecutive symbols 1 is t. According to formula (5):

L_{i+t+2} = L_i + R_i r²p² · Σ_{j=0}^{t−1} (r(1 − p))^j, R_{i+t+2} = R_i r²p² · (r(1 − p))^t

From H_{i+t+2} ≤ H_{i+1} the following can be obtained:

r·p · Σ_{j=0}^{t} (r(1 − p))^j ≤ 1
Theorem 2: when the number of consecutive symbols 1 in the sequence Y is at most t, and 0 < r ≤ r_max with r_max the positive root of the above inequality taken with equality, weighted probability arithmetic coding can losslessly decode the sequence Y from V.
Proof: let d be the number of consecutive symbols 1 starting at the (i+2)-th position in the sequence Y, 0 ≤ d ≤ t. When d = 0, formula (5) gives L_{i+t+2} = L_{i+t+1} = ... = L_i, and since at every step the code value is below L + R·r·p, the symbols y_{i+1} = y_{i+2} = ... = y_{i+t+2} = 0 are decoded. When 1 ≤ d ≤ t, L_{i+d+2} < L_i + R_i·r·p, so L_{i+d+2} correctly decodes y_{i+1} = 0; since y_{i+1} = 0 and L_{i+d+2} ≥ L_{i+1} + R_{i+1}·r·p, L_{i+d+2} correctly decodes y_{i+2} = 1; when d ≥ 2, proceeding in the same way, L_{i+d+2} correctly decodes each y_{i+j+1} = 1 (2 ≤ j ≤ d); and since L_{i+d+2} = L_{i+d+1} and L_{i+d+2} < L_{i+d+1} + R_{i+d+1}·r·p, L_{i+d+2} correctly decodes y_{i+d+2} = 0. When d = t + 1, decoding yields y_{i+1} = 1, a decoding error; likewise when d = t + 1 + c (c ≥ 1). Hence V losslessly decodes the sequence Y when 0 ≤ d ≤ t. By the same proof steps, when 0 < r ≤ r_max, V can be losslessly decoded for all 0 ≤ d ≤ t.
In some embodiments, weighted probability arithmetic coding of the target sequence Y to obtain the input sequence V comprises the steps of:
In the encoding process, the source processing of X into Y and the weighted probability arithmetic coding of Y can be merged. According to fig. 3, the steps by which the sending end encodes the sequence X of length n are as follows:
(1) Initialization parameters: R_0 ← 1, L_0 ← 0, p ← 0, i ← 1, where R_0 is the initial value of the coding variable R_i, L_0 is the initial value of the coding variable L_i, p denotes the probability of symbol 0 in the target sequence Y, and i is a loop variable.
(2) The number of symbols 0 in the statistical sequence X is denoted c.
(3) p ← n / (2n − c), where p represents the probability of symbol 0 in the sequence Y and n represents the sequence length of the source sequence X.
(4) r_max ← (√(4p − 3p²) − p) / (2p(1 − p)), where r_max represents the maximum value of the weight coefficient of the weighted probability arithmetic coding.
(5) Obtaining the ith symbol X in the sequence Xi. It is noted that the process of source processing for the sequence X has been merged into the encoding step here.
(6) If x_i = 0, then R_i ← R_{i−1}·r_max·p.
(7) If x_i = 1, then R_i ← R_{i−1}·r_max²·p(1 − p) and L_i ← L_{i−1} + R_{i−1}·r_max·p.
(8)i←i+1。
(9) If i is less than or equal to n, repeating the steps (5) to (9).
(10) L is converted into a binary sequence V of length m. It should be noted that L here represents the sequence output after the last symbol in the sequence X is encoded.
(11) And sending the sequences V, c and n to a receiving end.
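The transmitter steps above can be sketched as follows (a sketch, not a definitive implementation: exact rationals stand in for fixed-point arithmetic, the binarization of L into V in step (10) is omitted, and rounding r_max down is our own safety choice):

```python
from fractions import Fraction
from math import sqrt

def encode(x):
    # Transmitter steps (1)-(11), with the source map 1 -> "10" merged in.
    n = len(x)
    c = x.count(0)                                  # step (2)
    p = Fraction(n, 2 * n - c)                      # step (3): prob. of 0 in Y
    pf = float(p)
    rmax = (sqrt(4 * pf - 3 * pf ** 2) - pf) / (2 * pf * (1 - pf))  # step (4)
    r = Fraction(int(rmax * 10 ** 9), 10 ** 9)      # rounded down, so r < r_max
    L, R = Fraction(0), Fraction(1)                 # step (1)
    for xi in x:                                    # steps (5)-(9)
        if xi == 0:                                 # step (6): Y gets "0"
            R = R * r * p
        else:                                       # step (7): Y gets "10"
            L, R = L + R * r * p, R * r * r * p * (1 - p)
    return L, R, c, n                               # (L, c, n) are transmitted

L, R, c, n = encode([0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0])
```

With r ≤ r_max the interval top L + R never exceeds 1, which is what makes L a valid code value in [0, 1).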
The receiving end will receive the output sequence U from the output of the binary erasure channel.
And S102, the receiving end carries out weighted probability arithmetic decoding on the output sequence to obtain a decoded sequence, wherein the weighted probability arithmetic decoding is the inverse process of weighted probability arithmetic coding.
In some embodiments, weighted probability arithmetic coding of the output sequence U (in which case the output sequence U is not necessarily identical to the input sequence V, and verification is required) to obtain the decoded sequence Q comprises the steps of:
The receiving end receives the sequences U, c and n and converts U into a real number u. According to fig. 3, let q_i be the i-th symbol in the sequence Q and l = 2n − c. The decoding steps at the receiving end are as follows:
(1) Initialization parameters: R_0 ← 1, L_0 ← 0, s ← 0, i ← 1, where s represents the previously decoded symbol.
(2) p ← n / (2n − c).
(3) r_max ← (√(4p − 3p²) − p) / (2p(1 − p)).
(4) H ← L_{i−1} + R_{i−1}·r_max·p.
(5) When u ≥ H and s = 1: Q ← null, and decoding ends.
(6) When u ≥ H and s = 0: q_i ← 1, s ← 1, R_i ← R_{i−1}·r_max·(1 − p), L_i ← L_{i−1} + R_{i−1}·r_max·p.
(7) When u < H and s = 1: q_i ← 0, s ← 0, R_i ← R_{i−1}·r_max·p.
(8) When u < H and s = 0: q_i ← 0, s ← 0, R_i ← R_{i−1}·r_max·p.
(9)i←i+1。
(10) If i is less than or equal to l, repeating the steps (4) to (10).
(11) Returning to the sequence Q.
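The receiver steps, together with a condensed copy of the transmitter recursion for a round-trip check, can be sketched as follows (again a sketch with exact rationals; u is taken to be the exact code value rather than a length-m binarization):

```python
from fractions import Fraction
from math import sqrt

def params(c, n):
    # Shared parameters: p = n/(2n - c) and a weight coefficient r < r_max.
    p = Fraction(n, 2 * n - c)
    pf = float(p)
    rmax = (sqrt(4 * pf - 3 * pf ** 2) - pf) / (2 * pf * (1 - pf))
    return p, Fraction(int(rmax * 10 ** 9), 10 ** 9)

def encode(x):
    # Condensed copy of the transmitter recursion, for the round trip.
    p, r = params(x.count(0), len(x))
    L, R = Fraction(0), Fraction(1)
    for xi in x:
        if xi == 0:
            R = R * r * p
        else:
            L, R = L + R * r * p, R * r * r * p * (1 - p)
    return L, x.count(0), len(x)

def decode(u, c, n):
    # Receiver steps (1)-(11): returns Q, or None when "11" is detected
    # (u >= H while s == 1), i.e. Q <- null in step (5).
    p, r = params(c, n)
    L, R, s, Q = Fraction(0), Fraction(1), 0, []
    for _ in range(2 * n - c):          # l = 2n - c symbols
        H = L + R * r * p               # step (4)
        if u >= H:
            if s == 1:
                return None             # step (5): data error
            Q.append(1); s = 1          # step (6)
            L, R = H, R * r * (1 - p)
        else:
            Q.append(0); s = 0          # steps (7)-(8)
            R = R * r * p
    return Q

def y_to_x(q):
    # Invert the source map 1 -> "10", 0 -> "0".
    x, i = [], 0
    while i < len(q):
        x.append(q[i])
        i += 2 if q[i] == 1 else 1
    return x

X = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0]
u, c, n = encode(X)
Q = decode(u, c, n)
```

When U = V the round trip recovers Q = Y and hence X; corrupting u generally produces either Q = None or a sequence violating formula (1).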
Since the sequence Y satisfies formula (1) above, u ≥ H with s = 1 in step (5) means that "11" would be decoded, so U ≠ V and the data is erroneous. The receiving end obtains a binary sequence Q = (q_1, q_2, ..., q_l) through weighted probability arithmetic decoding. When the sequence Q satisfies formula (1), there is still a probability P_err that Q ≠ Y; P_err is the average decoding error probability of the method and is analyzed later. When Q does not satisfy formula (1), Q ≠ Y is certain, and for this case the embodiment constructs a forward error correction method based on the binary erasure channel model in step S103.
Step S103, the receiving end compares the decoding sequence with the target sequence, and restores the information source sequence or carries out forward error correction on the decoding sequence according to the comparison result.
In some embodiments, step S103 specifically includes: comparing whether the decoding sequence Q is consistent with the target sequence Y or not, and when the decoding sequence Q is consistent with the target sequence Y, restoring an information source sequence X according to the decoding sequence Q; and when the decoding sequence Q is inconsistent with the target sequence Y, carrying out forward error correction on the decoding sequence Q until the decoding sequence Q is consistent with the target sequence Y. Because the content sent to the channel by the sending end does not contain the target sequence Y, the standard that the receiving end judges whether the decoding sequence Q is consistent with the target sequence Y is to judge whether the decoding sequence Q meets the above formula (1).
In some embodiments, the forward error correction process for BEC(ε) is as follows:
Let there be e erasure symbols in the sequence U, each taking a value in {0, 1}, and mark the e erasure symbols sequentially. Enumerating the assignment modes i = 1, 2, ..., 2^e yields Table 1.
TABLE 1
As shown in Table 1, the i-th mode assigns values to the e erasure symbols in the sequence U. The assigned sequence U is then substituted into the weighted probability arithmetic decoding and the decoding process is run again; if Q ≠ null, the forward error correction decoding is complete. The procedure comprises the following steps:
(1) Initialize parameters: i ← 1.
(2) Assign the e erasure symbols in the sequence U according to the i-th mode of Table 1.
(3) Substitute U into the weighted probability arithmetic decoding.
(4) If i ≤ 2^e and Q ≠ null, decoding is complete, and the process ends.
(5) If i < 2^e and Q = null, then i ← i + 1 and steps (2) to (5) are repeated.
Because the 2^e modes cover all possibilities, at least one of them yields a sequence Q satisfying formula (1) above.
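The exhaustive assignment of steps (1) to (5) can be sketched as follows. The decoder itself is not reproduced here; `wpa_decode` is a placeholder for the weighted probability arithmetic decoding, and the toy stand-in below merely applies the formula-(1) check so that the sketch is self-contained:

```python
from itertools import product

def forward_error_correct(u, erasure_positions, wpa_decode):
    """Try the 2**e assignments of the erased symbols (the modes of
    Table 1) until the decoder returns a non-null sequence Q."""
    u = list(u)
    for bits in product("01", repeat=len(erasure_positions)):
        for pos, b in zip(erasure_positions, bits):  # apply the i-th mode
            u[pos] = b
        q = wpa_decode("".join(u))
        if q is not None:  # Q != null: decoding is complete
            return q
    return None  # unreachable: the 2**e modes cover all possibilities

# Stand-in for the real decoder: accept a candidate only if it
# satisfies formula (1), i.e. contains no two consecutive 1s.
def toy_decode(s):
    return s if "11" not in s else None

# Received "1e10e" with erasures ('e') at positions 1 and 4:
print(forward_error_correct("1e10e", [1, 4], toy_decode))  # -> 10100
```

With e erasures the worst case tries 2^e decodings, so this correction step is exponential in the number of erased symbols.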
The information entropy, coding rate, decoding error probability and simulation experiment results of the above method embodiments are given below.
First, the information entropy, coding rate and average decoding error probability of the weighted probability model.
(1) Information entropy of the weighted probability model.
Let the binary discrete memoryless source sequence be Y = (y_1, y_2, ..., y_l), y_i ∈ A = {0, 1}. When r = 1, by the definition of Shannon information entropy, the entropy of Y is:
H(Y) = -p(0) log2 p(0) - p(1) log2 p(1) (12)
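As a numerical illustration of formula (12) (the helper function below is ours):

```python
import math

def binary_entropy(p0: float) -> float:
    """Formula (12): H = -p(0) log2 p(0) - p(1) log2 p(1)."""
    if p0 in (0.0, 1.0):
        return 0.0  # a deterministic symbol carries no information
    p1 = 1.0 - p0
    return -p0 * math.log2(p0) - p1 * math.log2(p1)

print(binary_entropy(0.5))  # equiprobable symbols: 1.0 bit/symbol
print(binary_entropy(0.9))  # biased source: about 0.469 bit/symbol
```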
When r ≠ 1, define the self-information of the random variable y_i with probability p(y_i) as:
I(y_i) = -log2 p(y_i) (13)
In the set {y_i} (i = 1, 2, ..., l; a ∈ A), there are c_a symbols a. When r is known, the total information content of the source sequence Y is:
-c_0 log2 p(0) - c_1 log2 p(1)
The average amount of information per symbol is then:
(-c_0 log2 p(0) - c_1 log2 p(1)) / l
Definition 2: let H(Y, r) be:
After r is determined according to Definition 2, the length of V after weighted probability arithmetic coding is lH(Y, r) (bit). The minimum limit of weighted probability arithmetic lossless coding is then:
and (3) proving that: according to theory 1, rmaxIs the maximum value of weighted probability arithmetic lossless coding, and rmaxIs greater than 1. When r > rmaxV cannot reduce the sequence Y, so H (Y, r)max) Is a weighted probability arithmetic lossless coding minimum limit.
(2) Coding rate;
The amount of information carried by each bit in the sequence Y is on average H(Y, r_max) (bit/symbol), and the total information content is lH(Y, r_max) (bit). The total information content of the source sequence X is nH(X) (bit), which gives the coding rate of weighted probability arithmetic coding:
Let the probability of symbol 0 in the binary source sequence X of length n be q (0 ≤ q ≤ 1). According to formula (12), nH(X) = -qn log2 q - (1-q)n log2(1-q). The length of the sequence Y is l = (2-q)n, then:
Proof: according to formula (17), since 0 ≤ q ≤ 1, 4(1-q)^2 ≥ 0, and hence 4 - 8q + 4q^2 ≥ 0. Since 4 - 8q + 4q^2 = (3-2q)^2 - (5-4q) ≥ 0, it follows that (3-2q)^2 ≥ 5 - 4q. Because 2 - 2q ≥ 0, it can further be obtained that lH(Y, r_max) - nH(X) ≥ 0, so the weighted probability arithmetic coding rate can reach 1.
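The relation l = (2-q)n used above holds because the source processing appends one extra 0 behind each of the (1-q)n symbols 1 in X. A quick numerical check (the helper name is ours):

```python
def target_length(x: str) -> int:
    """Length l of the target sequence Y obtained from X by appending
    a 0 after every symbol 1, so l = n + (number of 1s in X)."""
    return len(x) + x.count("1")

x = "1101"                            # n = 4, q = 1/4
n, q = len(x), x.count("0") / len(x)
print(target_length(x), (2 - q) * n)  # both equal 7: l = (2 - q) n
```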
(3) Average decoding error probability;
Let event E represent the set of sequences Q satisfying formula (1) above, and let event E contain f(l) sequences. When l = 1, E = (0, 1) and f(1) = 2; when l = 2, E = (00, 01, 10) and f(2) = 3; when l = 3, E = (000, 001, 010, 100, 101) and f(3) = 5. When l ≥ 3:
f(l)=f(l-1)+f(l-2) (18)
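Recurrence (18) with f(1) = 2 and f(2) = 3 counts the binary sequences of length l that contain no two consecutive 1s; a brute-force cross-check (illustrative sketch):

```python
from itertools import product

def f_brute(l: int) -> int:
    """Count length-l binary strings with no '11' (the event E)."""
    return sum("11" not in "".join(s) for s in product("01", repeat=l))

def f_rec(l: int) -> int:
    """Recurrence (18): f(l) = f(l-1) + f(l-2), with f(1) = 2, f(2) = 3."""
    a, b = 2, 3
    if l == 1:
        return a
    for _ in range(l - 2):
        a, b = b, a + b
    return b

assert all(f_brute(l) == f_rec(l) for l in range(1, 12))
print([f_rec(l) for l in range(1, 7)])  # -> [2, 3, 5, 8, 13, 21]
```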
The probability of event E can then be obtained:
Assume that the f(l) sequences Q in event E obey a uniform distribution; then:
Thus, the probability that Q ∈ E and Q ≠ Y is:
P(Q ≠ Y | Q ∈ E) is the average decoding error probability, thus P_err = P(Q ≠ Y | Q ∈ E).
Theorem 5: lim_{l→∞} P_err = lim_{l→∞} P(Q ≠ Y | Q ∈ E) = 0.
Proof: as l → ∞, P(Q ≠ Y) → 1, and P(Q ≠ Y | Q ∈ E) → P(E). According to the Fibonacci sequence, F(0) = 0, F(1) = 1, and for l ≥ 2, l ∈ N*, F(l) = F(l-1) + F(l-2). Hence for l ≥ 1, l ∈ N*, f(l) = F(l) + F(l+1). From the Fibonacci number formula it can be derived that:
the following can be obtained:
Let event F denote the set of sequences Q satisfying formula (2) above, and let event F contain g(l) sequences. When l = 1, F = (0, 1) and g(1) = 2; when l = 2, F = (01, 10, 11) and g(2) = 3; when l = 3, F = (010, 101, 011, 110) and g(3) = 4. When l ≥ 4:
g(l) = g(l-2) + g(l-3) (22)
f(l) and g(l) are both monotonically increasing and g(l) ≤ f(l). According to Theorem 5, as l → ∞, with P_err given as P(Q ≠ Z | Q ∈ F), lim_{l→∞} P_err = 0. P_err can be calculated from formulas (21) and (23) above, as shown in Table 2.
l (bit) | P(Q ≠ Y | Q ∈ E) | P(Q ≠ Z | Q ∈ F)
---|---|---
20 | 0.016890526 | 0.000443459
64 | 1.50584 × 10^-6 | 5.9561 × 10^-12
112 | 5.75104 × 10^-11 | 1.53974 × 10^-20
256 | 3.20367 × 10^-24 | 2.66011 × 10^-46
TABLE 2
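Formula (21) is not legible in this copy, but the first column of Table 2 is numerically consistent with P(Q ≠ Y | Q ∈ E) = f(l)/2^l, where f(l) follows recurrence (18); this reading is our inference from the tabulated values, not an explicit statement of the source:

```python
def f(l: int) -> int:
    """Recurrence (18): f(l) = f(l-1) + f(l-2), with f(1) = 2, f(2) = 3."""
    a, b = 2, 3
    if l == 1:
        return a
    for _ in range(l - 2):
        a, b = b, a + b
    return b

# Reproduce the first column of Table 2 (values agree after rounding).
for l in (20, 64, 112, 256):
    print(l, f(l) / 2**l)
```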
According to the weighted probability arithmetic coding described above, V, c and n are known to the receiving end. By Theorem 4, lH(Y, r_max) = m, where m is the bit length of the sequence V. The probability of symbol 0 in the sequence Y is known, so H(Y, r_max) is known, and then:
Substituting l into formulas (21) and (23) above yields P(Q ≠ Y | Q ∈ E) and P(Q ≠ Z | Q ∈ F); once the value of l is determined, these probabilities are determined, independently of the value of n.
Second, the channel capacity can be reached.
according to fig. 3, the sequence Y is encoded with weighted probability arithmetic into a binary sequence V, and the transmission rate I (V; U) of the DMC channel is H (V) -H (V | U), so RBEC=H(V)-ε。
According to Theorem 3, the coding rate R can reach 1. Since the symbol probabilities in the sequence X are equal, H(X) = 1 and m = n. By Theorem 2 the sequence V can losslessly encode the sequence Y, and replacing "10" with "1" in the sequence Y losslessly restores the sequence X, so the sequence V can losslessly restore the sequence X. Since 0 ≤ H(V) ≤ 1, suppose H(V) < 1; then mH(V) < nH(X), which does not conform to the lossless source coding theorem, because lossless decoding is possible only when mH(V) ≥ nH(X). Therefore H(V) = 1, i.e., the symbol probabilities in V are equal. Thus R_BEC = 1 - ε = C_BEC.
According to Theorems 4 and 2, since R ≤ 1, lH(Y, r_max) = m ≥ nH(X), i.e., m ≥ n. When m > n, redundant information is present in the sequence V. In that case the sequence V is losslessly encoded into a sequence V', and V' is transmitted over the DMC channel, with U' the received binary sequence. The receiving end first decodes U' into U and then decodes U into the sequence Q; if Q does not satisfy formula (1), then Q ≠ Y, and the receiving end can perform forward error correction decoding on U' by the method above. Since V' can losslessly encode X, H(V') = H(X) = 1, i.e., the symbol probabilities in V' are equal. The transmission rate of the DMC channel is then I(V'; U') = H(V') - H(V'|U'), giving R_BSC = 1 - H(ξ) = C_BSC and R_BEC = 1 - ε = C_BEC.
Through the above analysis, the data processing method based on the binary erasure channel provided by the invention differs from coding methods such as BCH, Hamming, RS, CRC, LDPC, Turbo and Polar codes, and a direct basis for comparison with them is difficult to find. Experiments show that the coding rate can reach 1 and that, as the code length approaches infinity, the transmission rate can reach the channel capacity; at a code rate of 1/2, the method shows good performance in simulation experiments on binary-input additive white Gaussian noise (BIAWGN) channels.
Referring to fig. 5, an embodiment of the present invention provides a data processing method based on a binary erasure channel, where the executing entity of the method is a transmitting end, and the method mainly includes the following steps:
step S201, the transmitting end obtains the information source sequence.
And S202, the transmitting end performs information source processing on the information source sequence to obtain a target sequence.
And S203, the sending end carries out weighted probability arithmetic coding on the target sequence to obtain an input sequence.
Step S204, the sending end transmits the input sequence to the receiving end through the binary erasure channel to trigger the receiving end to carry out weighted probability arithmetic decoding on the output sequence output by the binary erasure channel, trigger the receiving end to compare the decoding result with the target sequence, and trigger the receiving end to restore the information source sequence or carry out forward error correction on the decoding sequence according to the comparison result, wherein the weighted probability arithmetic decoding is the inverse process of weighted probability arithmetic coding.
It should be noted that this method embodiment and the foregoing method embodiment are based on the same inventive concept: the executing entity of the foregoing embodiment is the receiving end, while the executing entity of this embodiment is the transmitting end. The relevant contents of the foregoing embodiments therefore also apply to this embodiment and are not repeated here.
An embodiment of the present invention provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor.
The processor and memory may be connected by a bus or other means.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
It should be noted that the electronic device in this embodiment may be applied to, for example, a transmitting end or a receiving end in the embodiment shown in fig. 1, the device in this embodiment can form a part of a system architecture in the embodiment shown in fig. 1, and these embodiments all belong to the same inventive concept, so these embodiments have the same implementation principle and technical effect, and are not described in detail here.
The non-transitory software programs and instructions required to implement the binary erasure channel based data processing method of the above described embodiment are stored in a memory and, when executed by a processor, perform the above described embodiment method, e.g. performing the above described method steps S101 to S103 in fig. 2 and S201 to S204 in fig. 5.
The above-described terminal embodiments are merely illustrative, and the units described as separate components may or may not be physically separate, i.e., they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, an embodiment of the present invention provides a computer-readable storage medium, which stores computer-executable instructions, which are executed by a processor or a controller, for example, by a processor in the terminal embodiment, and can make the processor execute the binary erasure channel-based data processing method in the above-described embodiment, for example, execute the above-described method steps S101 to S103 in fig. 2 and method steps S201 to S204 in fig. 5.
One of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media as known to those skilled in the art.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (10)
1. A data processing method based on a binary erasure channel is applied to a receiving end, and the data processing method comprises the following steps:
acquiring an output sequence from an output end of a binary erasure channel, wherein an input sequence corresponding to the input end of the binary erasure channel is a sequence obtained by performing weighted probability arithmetic coding on a target sequence, and the target sequence is a sequence obtained by performing information source processing on an information source sequence;
performing weighted probability arithmetic decoding on the output sequence to obtain a decoded sequence, wherein the weighted probability arithmetic decoding is the inverse process of the weighted probability arithmetic coding;
and comparing the decoding sequence with the target sequence, and restoring the source sequence or carrying out forward error correction on the decoding sequence according to a comparison result.
2. The binary erasure channel-based data processing method of claim 1, wherein the comparing the decoded sequence with the target sequence, and recovering the source sequence or performing forward error correction on the decoded sequence according to the comparison result comprises:
when the decoding sequence is consistent with the target sequence, restoring the source sequence according to the decoding sequence; and when the decoded sequence is inconsistent with the target sequence, carrying out forward error correction on the decoded sequence until the decoded sequence is consistent with the target sequence.
3. The binary erasure channel-based data processing method of claim 2, wherein said forward error correcting said decoded sequence comprises:
assigning values to erasure symbols in the output sequence, and performing the weighted probability arithmetic decoding on the output sequence assigned with the erasure symbols.
4. The binary erasure channel-based data processing method of claim 3, wherein said assigning erasure symbols in said output sequence comprises:
obtaining the 2^e arrangement modes of the erasure symbols, wherein e represents the number of erasure symbols in the output sequence;
and selecting one arrangement mode from the 2^e arrangement modes to assign values to the erasure symbols in the output sequence.
5. The binary erasure channel-based data processing method of claim 1, wherein the source sequence is subjected to source processing, comprising:
each symbol 1 in the source sequence is followed by a symbol 0.
6. The binary erasure channel-based data processing method of claim 5, wherein said comparing said decoded sequence with said target sequence comprises:
when two consecutive symbols 1 appear in the decoded sequence, the decoded sequence is inconsistent with the target sequence; when two consecutive symbols 1 do not appear in the decoded sequence, the decoded sequence is consistent with the target sequence.
7. The binary erasure channel-based data processing method of claim 6, wherein the target sequence is weighted probability arithmetic coded, comprising:
denoting the source sequence as X;
encoding the i-th bit symbol x_i in X until the last bit symbol of X is encoded, wherein the encoding comprises the following steps:
when x_i = 0, then R_i = R_{i-1} r_max p; when x_i = 1, then R_i = R_{i-1} r_max^2 p(1-p) and L_i = L_{i-1} + R_{i-1} r_max p, wherein r_max represents the weight coefficient of the weighted probability arithmetic coding, p represents the probability of the symbol 0 in the target sequence, n represents the sequence length of X, c represents the number of symbols 0 in the source sequence, and R_i and L_i represent coding variables of the weighted probability arithmetic coding.
8. A data processing method based on a binary erasure channel is applied to a transmitting end, and the data processing method comprises the following steps:
acquiring an information source sequence;
carrying out information source processing on the information source sequence to obtain a target sequence;
performing weighted probability arithmetic coding on the target sequence to obtain an input sequence;
and transmitting the input sequence to a receiving end through a binary erasure channel so as to trigger the receiving end to carry out weighted probability arithmetic decoding on the output sequence output by the binary erasure channel, trigger the receiving end to compare a decoding result with the target sequence, and trigger the receiving end to restore the information source sequence or carry out forward error correction on the decoding sequence according to the comparison result, wherein the weighted probability arithmetic decoding is the inverse process of the weighted probability arithmetic coding.
9. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor when executing the computer program implements:
the binary erasure channel-based data processing method of any one of claims 1 through 7 or the binary erasure channel-based data processing method of claim 8.
10. A computer-readable storage medium having stored thereon computer-executable instructions for performing:
the binary erasure channel-based data processing method of any one of claims 1 through 7 or the binary erasure channel-based data processing method of claim 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110973566.2A CN113783659A (en) | 2021-08-24 | 2021-08-24 | Data processing method, device and medium based on binary erasure channel |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113783659A true CN113783659A (en) | 2021-12-10 |
Family
ID=78838892
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110973566.2A Pending CN113783659A (en) | 2021-08-24 | 2021-08-24 | Data processing method, device and medium based on binary erasure channel |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113783659A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115567165A (en) * | 2022-10-18 | 2023-01-03 | 天津津航计算技术研究所 | Coding error correction method, system, terminal equipment and readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106254030A (en) * | 2016-07-29 | 2016-12-21 | 西安电子科技大学 | The two-way coding and decoding method of the code of Spinal without speed |
CN107508656A (en) * | 2017-07-24 | 2017-12-22 | 同济大学 | A kind of Spinal joint source-channel decoding methods on BEC channels |
CN111294058A (en) * | 2020-02-20 | 2020-06-16 | 湖南遥昇通信技术有限公司 | Channel coding and error correction decoding method, equipment and storage medium |
CN112865961A (en) * | 2021-01-06 | 2021-05-28 | 湖南遥昇通信技术有限公司 | Symmetric encryption method, system and equipment based on weighted probability model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||