CN110460340B - Self-adaptive construction and decoding method based on random convolutional network error correcting code - Google Patents

Self-adaptive construction and decoding method based on random convolutional network error correcting code

Info

Publication number
CN110460340B
CN110460340B (application CN201910626457.6A)
Authority
CN
China
Prior art keywords
error
network
decoding
coding
random
Prior art date
Legal status
Active
Application number
CN201910626457.6A
Other languages
Chinese (zh)
Other versions
CN110460340A (en)
Inventor
郭网媚
刘明叶
高晶亮
田敏涵
李永康
张泽阳
姚璐阳
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN201910626457.6A
Publication of CN110460340A
Application granted
Publication of CN110460340B
Legal status: Active

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2906Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes using block codes
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2939Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes using convolutional codes
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/35Unequal or adaptive error protection, e.g. by providing a different level of protection according to significance of source information or by adapting the coding according to the change of transmission channel characteristics

Abstract

The invention discloses a self-adaptive construction and decoding method based on random convolutional network error correcting codes, which solves the prior-art problem of high complexity of error-correcting coding and decoding algorithms in networks with unknown topology and transmission delay. The implementation steps are: adaptively constructing a random convolutional network code; constructing a convolutional error correcting code as the input code of the random convolutional network error-correcting coding; decoding at the receiving end with a q-ary random convolutional network error correction decoding algorithm; and optimizing the algorithm. The invention provides adaptive random convolutional network coding in which different nodes select local encoding kernels of different lengths according to their own conditions; error information is gathered with all-zero test data, the collected combined errors are made equivalent to errors at the source end, and an error correcting code capable of correcting them is designed; an error correction decoding algorithm based on the minimum network error weight of the combined errors is provided. A coding and decoding algorithm with low complexity, low delay and strong error correction capability is thus realized, for use in actual networks with unknown topology and transmission delay.

Description

Self-adaptive construction and decoding method based on random convolutional network error correcting code
Technical Field
The invention belongs to the technical field of network coding, mainly relates to a random convolutional network coding and decoding technology, and particularly relates to a self-adaptive construction and decoding method based on a random convolutional network error correcting code, which is used in an actual network with unknown network topology and time delay.
Background
Network coding was originally proposed by Ahlswede in 2000; its essence is to allow intermediate nodes to process received information before forwarding it. Much work has shown that network coding has potential advantages in throughput, load balancing, security, etc., and it has attracted widespread attention. Random Network Coding (RNC), which allows intermediate nodes to randomly select coding coefficients within a finite field, is feasible for unknown topologies, and T. Ho et al. proved that when the size of the coding field is much larger than the number of receivers, the probability that random coding succeeds approaches 1. Convolutional Network Coding (CNC) allows nodes to combine information received from different input channels and different time slots, which is more feasible for actual communications. Errors always exist in a communication system, and because of the mixing process at intermediate nodes even a single error can affect many symbols at a receiving node, so error correction is indispensable to RNC and CNC and is an important guarantee of correct decoding.
Convolutional Network Error Correction Coding (CNECC) was originally proposed by K. Prasad et al., who presented construction and decoding algorithms of CNECC to correct a set of network errors in coherent networks. For error correction in random network coding, Koetter and Kschischang proposed representing codewords by subspaces; Silva and Kschischang showed that lifted rank-metric codes are nearly optimal subspace codes for random network coding; A. Wachter-Zeh et al. provided rank-metric convolutional codes, i.e., convolutional codes used at the source end to correct errors. However, subspace codes and rank-distance codes have high complexity and are difficult to realize. Yang et al. proposed a Viterbi-like decoding algorithm based on minimum error weight for the generalized CNECC constructed by K. Prasad's method and gave sufficient conditions for the algorithm's feasibility, but considered only a deterministic network and decoding over the binary finite field $\mathbb{F}_2$.
In summary, the prior-art coding and decoding schemes are based on coherent networks, i.e., networks whose topology and coding are determined, and they are not well suited to networks with unknown topology and delay.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the defects of existing random convolutional network coding and decoding algorithms, a self-adaptive construction and decoding method based on random convolutional network error correcting codes is provided, with strong error correction capability and low complexity.
In order to realize the purpose of the invention, the invention adopts the following technical scheme:
step 1, self-adaptively constructing random convolutional network coding:
the source node sends all-zero data packets into the network to construct a random convolutional network code adaptively; starting from a small coding field, the length of the Local Encoding Kernel (LEK) is increased until all relevant receiving nodes can decode, then construction stops and the coding has succeeded; the receiving node receives a combined error vector formed by the mixing and superposition of the network errors through the intermediate nodes;
step 2, constructing a random convolution network error correcting code:
collecting network errors by using all-zero test data; the Global Encoding Kernel (GEK) is sent to the receiving nodes slot by slot, so that the information of the transmission matrix is obtained distributively at the receiving nodes, and at the same time the network errors are mixed according to the transmission coding coefficients, so that the combined errors are obtained distributively; when the intermediate nodes use general polynomials as coefficients, the adjugate matrix of the transmission matrix is used in place of its inverse, and the combined error is made equivalent to an error at the source end; according to the maximum weight $T_s$ of the equivalent error at the source end, an error correcting code with free distance $\ge 2T_s + 1$, capable of correcting the network errors, is selected at the source end as the input code of the random convolutional network;
step 3, improving to obtain a q-element random convolutional network error correction decoding algorithm:
extending the Viterbi-like decoding algorithm based on minimum network error weight to random convolutional networks and to the $\mathbb{F}_q$ field, forming a q-ary random convolutional network error correction decoding algorithm; the q-ary algorithm works on the combined error vectors, defines the weight of each combined error vector as the weight of the minimum network error vector producing it, and searches for the decoding path with the minimum accumulated error weight; the algorithm decodes directly at the receiving node and can correct any error within the error correction range of the random convolutional network error correction code;
step 4, optimizing the error correction decoding algorithm of the q-element random convolution network:
in the operation of optimizing the received sequence, updating the decoded sequence by subtracting the influence of the error-correcting decoded code word of the q-element random convolutional network and the estimated network error; the situation in the subsequent window is equivalently transformed into the first window by removing the influence caused by the input sequence before the window, and distributed decoding with low complexity and low decoding delay is realized.
The invention provides the whole technical scheme for realizing the self-adaptive construction and decoding method of the random convolutional network error correcting code through the four steps, has strong error correcting capability and low complexity, and is suitable for a network with unknown topology and time delay.
Advantageous effects
Compared with the prior art, the invention has the beneficial effects that:
the requirements of an actual network are met: the invention constructs random convolution network coding, combines the advantages of random network and convolution network, is suitable for network with unknown topology and time delay, and is more suitable for actual network.
The error correction capability is strong. Encoding operation: for a set of network errors that may occur, the invention estimates the maximum weight of the equivalent errors at the source end and designs, before transmission into the network, an error correcting code that can correct that set of network errors. Decoding operation: the q-ary random convolutional network error correction decoding algorithm provided by the invention, based on accumulated-minimum-weight decoding of combined errors, can correct any error within the RCNECC error correction range.
The complexity is low. Encoding operation: the invention constructs random convolutional network coding adaptively, combining the advantages of RNC and CNC; different nodes select local encoding kernels of different lengths according to their own conditions, so the equivalent encoding kernel length is shortest and the coding complexity is low. Decoding operation: the q-ary random convolutional network error correction decoding algorithm decodes directly at the receiving node, and the received sequence is updated by subtracting the influence of the decoded words and the estimated network errors, reducing complexity and decoding delay and realizing distributed decoding.
Drawings
FIG. 1 is a schematic flow chart of the present invention, based on a random convolutional network code with an input convolutional code;
FIG. 2 is a random convolutional network code diagram of transmission rate 2 in the present invention;
FIG. 3 is a random convolutional network coding diagram at time t = 0 in the present invention;
FIG. 4 is a random convolutional network coding diagram at time t = 1 in the present invention;
FIG. 5 is a diagram of the correct input and output of the decoding window [0,2] in the present invention;
FIG. 6 is a diagram of the error screening process in the decoding window [0,2] in the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
Example 1:
the essence of network coding is to allow intermediate nodes to perform forwarding operations after processing received information, with potential advantages in terms of throughput, load balancing, security, etc., and to attract a wide range of attention. Random Network Coding (RNC) allows the intermediate nodes to randomly select coding coefficients within a limited domain, suitable for networks with unknown topology. Convolutional Network Coding (CNC) allows nodes to combine received information from different input channels and different time slots, feasible for a time-delayed network.
The prior art provides an algorithm for constructing convolutional network error correcting codes; it has a certain error correction capability but is only suitable for coherent networks. Subspace codes and rank-distance codes have been applied to error correction in random networks, but their complexity is high and they are difficult to realize. The mixing process at the intermediate nodes of the network makes the classic Hamming-weight-based Viterbi algorithm no longer suitable for error correction decoding in the network. To solve these problems, the invention provides a self-adaptive construction and decoding method based on random convolutional network error correcting codes, in which error correction is added to the encoding and decoding processes; it has strong error correction capability and low complexity and is suitable for actual networks.
The invention relates to a self-adaptive construction and decoding method based on a random convolution network error correcting code, which is shown in figure 1 and comprises the following steps:
step 1, self-adaptively constructing random convolutional network coding:
the source side sends omega packet all zero data to the network, so that the network starts adaptive construction. The convolutional network coding is physically realizable if and only if the local coding kernel constant term coefficient matrix K0Is zero-power, the following operation ensures K0Is a power of zero: the edges in the network are directional and numbered eiI is more than or equal to 1 and less than or equal to | E |, and when the time is 0, the local coding core k with the inflection point E' > Ee',e,0If not, uniformly and randomly selecting from the small domain; after the initialization step, ke',e,tCan be randomly and uniformly selected in a limited field. The Local Encoding Kernel (LEK) length is increased until the global encoding matrix at all relevant receiving nodes is full rank, at which time the encoding is successful and the lengths of the different LEKs may be different. The receiving node receives a combined error vector obtained by the mixed superposition of the network errors through the intermediate nodes.
Step 2, constructing a random convolution network error correcting code:
the self-adaptive random coding of the convolutional network, the information of the global coding core and the transmitted characters are transmitted to the receiving node together according to the time slot, therefore, the information M of the transmission matrix at each moment is acquired in a distributed mode at the receiving noder,0,Mr,1z,Mr,τzτ…, distributed computing combination errors
Figure BDA0002127269710000041
Using transmission matrices Mr(z)=Mr,0+Mr,1z+…+Mr,τzτThe adjoint matrix of + … replaces the inverse matrix thereof, avoiding the error of mapping the finite weight of the receiving end to the infinite weight of the source end; the combination error is multiplied by the adjoint matrix of the transmission matrix, so that the error generated by the network is equivalent to the source end; maximum weight T according to equivalent error of information source terminalsSelecting the free distance more than or equal to 2T at the information source ends+1 Random Convolutional Network Error Correction Code (RCNECC) capable of correcting network errors.
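To see why multiplying by the adjugate moves the error to the source end, one can write out the identity $M_r(z)\,\mathrm{adj}(M_r(z)) = |M_r(z)|\,I$ (a sketch consistent with the notation above, not a formula quoted from the patent):

$$y_r(z)\,\mathrm{adj}(M_r(z)) = x(z)\,M_r(z)\,\mathrm{adj}(M_r(z)) + e(z)\,F_r(z)\,\mathrm{adj}(M_r(z)) = x(z)\,|M_r(z)| + E(z),$$

so $E(z) = e(z)F_r(z)\,\mathrm{adj}(M_r(z))$ acts as an error added directly to the scaled source sequence, and its maximum Hamming weight $T_s$ is what dictates the free-distance requirement $\ge 2T_s + 1$.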
Step 3, improving to obtain a q-element random convolutional network error correction decoding algorithm:
by y (z) ═ x (z) GO,t(z)+e(z)Ft(z) obtaining: x (z) GO,t(z)=y(z)-e(z)Ft(z) the error sequences e (z) are superposed in a convolution-like manner in a staggered manner to form e (z) Ft(z) which characterizes the effect of any error vector in the network and is ultimately output at the receiving node. Due to the mixing process at the intermediate nodes, even a single error will affect more symbols at the receiving node where decoding is no longer possible based only on the minimum hamming distance between the output sequence and the input sequence, so decoding is based on combining error vectors, finding a new metric: defining the weight of each combined error vector as the weight of the minimum network error vector, and finding the decoding path with the accumulated minimum error weight. Extending a Viterbi-like decoding algorithm with minimal network error weight to a random convolutional network code sum
Figure BDA0002127269710000051
A domain, which forms a q-element random convolution network error correction decoding algorithm; the q-element random convolutional network error correction decoding algorithm can be directly decoded at a receiving node and can correct any error in an RCNECC error correction range.
Step 4, optimizing the error correction decoding algorithm of the q-element random convolution network:
in the operation of optimizing the received sequence, when only a unique combined error vector remains in the decoding window, the error needs to be removed from the decoded trellis diagram. Since a 1-bit message of the input sequence will affect the omega-bit message of the error-free output sequence, after removing the effect of the q-ary random convolutional network error correction decoding to decode the codeword, the first omega-bits of the received sequence are all zeros. The sequence is divided by the time delay factor z to be regarded as a newly received sequence, and the situation in the subsequent window can be equivalently transformed into the first window by removing the influence caused by the input sequence before the window, so that the distributed decoding with low complexity and decoding time delay is realized.
The idea of the invention is as follows: the source end sends all-zero data packets into the network to construct the random convolutional network coding adaptively and to collect combined error vectors for the error-correcting-code design, because whether or not construction succeeds, the combined error vectors can be used to estimate the errors and the equivalent errors at the source; the combined error vectors are made equivalent to the source end, and a random convolutional network error correcting code capable of correcting these errors is designed at the source end according to the maximum weight of the equivalent error; error correction decoding is performed at the receiving node with the random convolutional network error correction decoding algorithm.
The invention constructs random convolutional network coding, combining the advantages of random networks and convolutional networks, and is suitable for networks with unknown topology and with delay. Errors occurring in the network are made equivalent to the source end, a codeword with error correction capability is designed before the data is sent, and error correction decoding is carried out at the receiving node; the error correction capability is strong, and at the same time the optimization algorithm keeps the complexity of the whole coding and decoding process low.
Example 2
The self-adaptive construction and decoding method based on the random convolutional network error correcting code is the same as that of the embodiment 1, and the self-adaptive construction random convolutional network coding in the step 1 specifically comprises the following steps:
ω packets of all-zero data are sent into the network so that the network starts adaptive construction. The convolutional network coding is physically realizable if and only if the coding topology corresponding to the constant-term coefficient matrix $K_0$ of the local encoding kernels is acyclic. The edges in the network are directed and numbered $e_i$, $1 \le i \le |E|$; a pair of adjacent edges is marked as a turning point and ordered $e' > e$, i.e., the number of edge e' exceeds that of e. All local encoding kernels and global encoding kernels are initialized to 0; at time 0, for $e' \ge e$ the local encoding coefficient $k_{e',e,0}$ is set to 0, and otherwise it is selected uniformly at random from the small field. This initialization step ensures that the coding topology of $K_0$ is acyclic ($K_0$ nilpotent). After initialization, $k_{e',e,t}$ can be selected at random.
$K_v(z) = (k_{e',e}(z))_{e' \in In(v),\, e \in Out(v)} = K_{v,0} + K_{v,1}z + K_{v,2}z^2 + \cdots$ is the local encoding kernel matrix of an intermediate node v at time t, where each element $k_{e',e}(z) = k_{e',e,0} + k_{e',e,1}z + k_{e',e,2}z^2 + \cdots$ is a polynomial. Node v stores the symbols $y_{e',t}$ received on all incoming edges $e' \in In(v)$ and randomly selects the (t+1)-th local encoding coefficient $k_{e',e,t}$ for each outgoing edge e of v. It then constructs, for each outgoing edge e, the transmission packet $y_{e,t} = \sum_{e' \in In(v)} \sum_{i=0}^{t} k_{e',e,i}\, y_{e',t-i}$, puts the global encoding kernel $f_e(z) = \sum_{e' \in In(v)} k_{e',e}(z)\, f_{e'}(z)$ in the packet header, and sends it out.
At each time t, the receiving node r determines whether the global encoding kernel it has received is full rank. If it is full rank, the node sets its own success status to 1 and sends an acknowledgement signal ACK to its parent nodes. An intermediate node v stops randomly selecting local encoding coefficients when it has received ACKs from all its child nodes, and sends an ACK to its own parent nodes. Subsequent encoding uses the determined and stored $k_{e',e}(z)$. The length of the LEK is increased at each time instant until the transmission matrix at every receiving node is full rank.
The invention thus adaptively constructs random convolutional network coding: coding starts from a small coding field, and the length of the local encoding kernels is increased until all relevant receiving nodes can decode. Different nodes are allowed to select local encoding kernels of different lengths according to their own conditions, so the equivalent encoding kernel length is as short as possible, reducing decoding delay and storage requirements. The construction can be applied when the topology is unknown and stops in finite time; after successful construction, the obtained output is the combined error.
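A simplified sketch of the stopping test that drives this construction — each receiver checks, slot by slot, whether its accumulated global encoding matrix has full rank ω over GF(q); the matrix layout and function names are assumptions for illustration:

```python
def gf_rank(rows, q):
    """Rank of an integer matrix over GF(q), q prime, by Gaussian elimination."""
    m = [[v % q for v in r] for r in rows]
    rank = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rank, len(m)) if m[r][c]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][c], q - 2, q)  # inverse via Fermat's little theorem
        m[rank] = [v * inv % q for v in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][c]:
                f = m[r][c]
                m[r] = [(a - f * b) % q for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def ready_to_ack(global_kernel_rows, omega, q):
    """A receiver sets its success status and ACKs once its matrix is full rank."""
    return gf_rank(global_kernel_rows, q) >= omega

print(ready_to_ack([[1, 0], [1, 1]], omega=2, q=5))  # True: decodable
```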
Example 3
The adaptive construction and decoding method based on the random convolutional network error correcting code is the same as that of the embodiment 1-2, the step 1 is the adaptive construction random convolutional network coding, and the mathematical model of the random convolutional network coding is as follows:
the convolutional network coding adopts a self-adaptive random coding mode, FrThe information of (z) is transmitted to the receiving node in time slots together with the transmitted characters. The source generated data can be represented as:
x(z)=x0+x1·z+…+xt-1·zt-1+…,
the transmission matrix of the sink node r can be represented as:
Mr(z)=Mr,0+Mr,1z+…+Mr,τzτ+…,
the corresponding error-free output at each moment when passing through the random convolutional network should be:
Figure BDA0002127269710000071
the corresponding error output at each moment is as follows:
Figure BDA0002127269710000072
where z is a delay factor, x (z) is an input sequence, xtIs the coefficient of x (z) at time t, Mr,tIs Mr(z) coefficients at time t, e (z) is the error sequence, etIs the coefficient at time t of e (z), Fr(z) removing the influence in the network caused by the mapping of the source side for the transmission matrix.
The invention thus constructs a random convolutional network coding model: the transmitted data forms convolutions across different time slots and is finally output to the receiving node in time order. Expressing this process by the formulas above and analyzing the output sequence after network errors are added is helpful for the subsequent error correction coding and decoding operations.
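The time-domain model above can be exercised with a small sketch; scalar coefficients stand in for the row vectors and matrices of the patent, with the same convolution structure (all names here are illustrative):

```python
q = 3  # assumed field size

def conv_output(x_coeffs, M_coeffs, e_coeffs, F_coeffs, t_max):
    """Coefficients y_0..y_{t_max} of y_r(z) = x(z)M_r(z) + e(z)F_r(z) over GF(q)."""
    def term(a, B, t):
        return sum(a[i] * B[t - i] for i in range(t + 1)
                   if i < len(a) and t - i < len(B))
    return [(term(x_coeffs, M_coeffs, t) + term(e_coeffs, F_coeffs, t)) % q
            for t in range(t_max + 1)]

# x(z) = 1 + z, M_r(z) = 1 + 2z, no errors: y(z) = 1 + 3z + 2z^2 = 1 + 2z^2 mod 3
print(conv_output([1, 1], [1, 2], [0], [0], 2))  # -> [1, 0, 2]
```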
Example 4:
the self-adaptive construction and decoding method based on the random convolutional network error correcting code is the same as that of the embodiment 1-3, and the construction of the random convolutional network error correcting code in the step 2 specifically comprises the following steps:
2.1) In the adaptive construction process, random convolutional network coding collects the combined error vector at each moment by using the all-zero test data: $\{(eF_{r,0}, eF_{r,1}, \dots, eF_{r,l_t})\}$, where $l_t$ is the highest power of the matrix $F_r(z)$.
2.2) The transmission matrix $M_r(z)$ is acquired distributively. Its determinant $|M_r(z)|$ is a non-zero polynomial, and the inverse $M_r(z)^{-1} = \mathrm{adj}(M_r(z)) / |M_r(z)|$ easily maps finite-weight errors at the receiving end into infinite-weight errors at the source end; removing the factor $1/|M_r(z)|$ does not affect the correspondence of the errors, so the adjugate matrix $\mathrm{adj}(M_r(z))$ of $M_r(z)$ is used instead of the inverse.
2.3) The equivalent error at the source end is computed as $E(z) = e(z) F_r(z)\, \mathrm{adj}(M_r(z))$.
2.4) Let $W_H(y)$ denote the Hamming weight of a given vector $y \in \mathbb{F}_q$, i.e., the number of non-zero coefficients of y; estimate the maximum weight of the equivalent error at the source node, $T_s = \max\{W_H(E)\}$.
2.5) Choose at the source node an error correcting code with free distance $\ge 2T_s + 1$ capable of correcting the equivalent errors.
According to the above steps, for a set of network errors that may occur, when the intermediate nodes use general polynomials as coefficients, the adjugate matrix of the transmission matrix is used in place of its inverse, so that the errors in the network are made equivalent to the source end; an error correcting code capable of correcting that set of network errors is designed, according to the maximum weight of the equivalent error at the source end, before transmission into the network, so that the decoding algorithm can correct any codeword within its error correction range.
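Steps 2.4) and 2.5) amount to the following small computation (the list `equivalent_errors` stands for the source-equivalent error vectors from step 2.3); a sketch, not patent text):

```python
def hamming_weight(vec):
    return sum(1 for c in vec if c != 0)  # number of non-zero coefficients

def required_free_distance(equivalent_errors):
    T_s = max(hamming_weight(E) for E in equivalent_errors)
    return 2 * T_s + 1  # choose an input code with free distance >= this value

print(required_free_distance([(0, 1, 0), (2, 0, 1)]))  # T_s = 2 -> d_free >= 5
```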
Example 5:
the adaptive construction and decoding method based on the random convolutional network error correcting code is the same as that of the embodiments 1-4, and the q-element random convolutional network error correcting and decoding algorithm in the step 3 specifically comprises the following steps:
3.1) From $y_r(z) = x(z) G_{O,r}(z) + e(z) F_r(z)$ one obtains $x(z) G_{O,r}(z) = y_r(z) - e(z) F_r(z)$, where $G_{O,r}(z)$ is the generator matrix of the output convolutional code. Processing e(z) at the receiving node is equivalent to processing the individual combined error vectors, which are superposed, staggered in a convolution manner, to form the final $e(z) F_r(z)$. During decoding, all the combined error vectors are corrected, and all network errors are thereby corrected.
3.2) A judgment is added before decoding starts: judge whether the intersection of the output message subspace Φ(t, l) and the error subspace Δ(t, l) contains only the zero space. If so, the message sequence and the error sequence can be separated, and decoding proceeds in a sliding window at that moment to search for the combined error vector; if not, the judgment is repeated at the next moment until $\Phi(t,l) \cap \Delta(t,l) = \{0\}$.
Because of the mixing process at the intermediate nodes of random convolutional network coding, network errors diffuse and affect more symbols at the receiving node, so minimum-Hamming-distance decoding between input and output sequences at the receiving node is no longer feasible. Inspired by the classic Viterbi algorithm, the decoding process is implemented analogously: in the comparison operation, the Hamming weight of the network error vector corresponding to each combined error is defined in advance; when the intersection of Φ(t, l) and Δ(t, l) contains only the zero space, the window sliding length is determined, decoding starts within the sliding window, the combined error vector is found, and the decoding path with the minimum accumulated network error weight is thereby found on the output trellis diagram.
Example 6:
the adaptive construction and decoding method based on the random convolutional network error correcting code is the same as that in the embodiments 1-5 and the step 4, wherein the optimized q-element random convolutional network error correction decoding algorithm for realizing the distributed decoding with low complexity and decoding delay specifically comprises the following steps:
4.1) When an error occurs in the network and is uniquely determined in the decoding window, the effect of the combined error vector must be subtracted: $\tilde{y}_r(z) = y_r(z) - e(z) F_r(z)$, where $y_r(z)$ is the received sequence with errors at the receiving node $r \in R$ and $\tilde{y}_r(z)$ is the error-free received sequence.
4.2) Assume that $x_0$ can be uniquely determined. By the nature of convolutional network coding, the received sequence $\tilde{y}_r(z)$ is a linear combination of the transmitted messages $x_0$ and $x_1$; subtracting the influence of the first message from the output sequence gives $y_{\mathrm{eff}}(z) = \tilde{y}_r(z) - x_0 M_r(z)$. After deleting the influence of $x_0$, the first ω bits of the received sequence are all zeros, and $y_{\mathrm{eff}}(z)/z$ is regarded as the newly received sequence; the situation in a subsequent window is equivalently transformed into the first window by removing the effect of the input sequence preceding the window.
4.3) Assume the delay of sequence decoding is L, i.e., $x_0$ can be uniquely determined at the receiving node r from the character stream $x_0M_{r,0},\ x_0M_{r,1}+x_1M_{r,0},\ \dots,\ x_0M_{r,L}+\cdots+x_LM_{r,0}$ received from time 0 to L. For the sink node r with global encoding kernel $F_r(z) = F_0 + F_1 z + \cdots$, the necessary condition for the decoding delay to be L is $\mathrm{rank}(F_0, F_1, \dots, F_L) = \omega$; combining the decoding condition $\Phi(t, L) \cap \Delta(t, L) = \{0\}$, the decoding delay of the q-ary random convolutional network error correction decoding algorithm is $L_{\mathrm{delay}} = \max\{l, L\}$.
The invention optimizes the received sequence, updates the decoding sequence by subtracting the influence of decoded code words and estimated network errors, reduces the complexity of the algorithm, and obtains the decoding delay of the algorithm by combining the decoding delay of sequence decoding.
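The window update of steps 4.1)-4.2) can be sketched as follows for scalar coefficient lists over GF(q); `x0_influence` stands for the coefficients of $x_0 M_r(z)$ and is an assumed argument layout:

```python
q = 3  # assumed field size

def advance_window(y_coeffs, combined_error_coeffs, x0_influence):
    """Subtract the estimated combined error and the decoded word's influence,
    then divide by z (drop the now-zero leading symbol) so that the next
    window looks exactly like the first one."""
    y_eff = [(y - e - m) % q for y, e, m in
             zip(y_coeffs, combined_error_coeffs, x0_influence)]
    assert y_eff[0] == 0, "leading symbols must vanish after the subtraction"
    return y_eff[1:]  # y_eff(z)/z becomes the newly received sequence

print(advance_window([2, 1, 0], [1, 0, 0], [1, 1, 0]))  # -> [0, 0]
```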
Example 7:
the adaptive construction and decoding method based on the random convolutional network error correcting code is the same as that of the embodiments 1 to 6, and the specific implementation method is as follows.
The edges in the network are directed and numbered $e_i$, $1 \le i \le |E|$. At time 0, all local encoding kernels and global encoding kernels have initial value 0, and an all-zero sequence is sent so that the network starts adaptive random coding. For a turning point with edge e' greater than e, the local encoding coefficient $k_{e',e,0}$ is set to 0; otherwise it is selected uniformly at random from the small field. After this initialization step, $k_{e',e,t}$ can be selected at random. At time t, the intermediate node v stores the symbols $y_{e',t}$ received on all incoming edges $e' \in In(v)$ and randomly selects the (t+1)-th local encoding coefficient $k_{e',e,t}$ for each outgoing edge e of v. Then an output data packet is constructed for each outgoing edge e, with the global encoding kernel placed in the packet header, and sent out.
At each time t, the receiving node r determines whether the global encoding kernel it has received is full rank; if so, it sends an ACK to its parent nodes. An intermediate node v stops randomly selecting local encoding coefficients when it has received ACKs from all of its child nodes, and sends an ACK to its own parent nodes. Later encoding is performed using the determined and stored $k_{e',e}(z)$; the length of the LEK is increased at each time until the transmission matrix at every receiving node is full rank.
The mathematical model for constructing the successfully obtained random convolutional network code is as follows:
the convolutional network coding adopts a self-adaptive random coding mode, FrThe information of (z) is transmitted to the receiving node in time slots together with the transmitted characters. The transmission matrix of the sink node r can be represented as:
Mr(z)=Mr,0+Mr,1z+…+Mr,τzτ+…,
the corresponding error output at each moment is as follows:
Figure BDA0002127269710000111
if the transmitted zero data is wrong in network transmission, the output sequence is a combined error; constructing a convolution error correcting code at the information source to correct the error according to the maximum weight of the equivalent error of the source node; inputting the code word into a network, a group of network errors of the random convolutional network can be corrected, and the method specifically comprises the following steps:
(1) In the adaptive construction process, random convolutional network coding gathers the combined error vector at each time by using the all-zero test data: $\{(eF_{r,0}, eF_{r,1}, \dots, eF_{r,l_t})\}$, where $l_t$ is the highest power of $F_r(z)$.
(2) The transmission matrix $M_r(z)$ is acquired distributively. Its determinant $|M_r(z)|$ is a non-zero polynomial, and the inverse $M_r(z)^{-1} = \mathrm{adj}(M_r(z))/|M_r(z)|$ easily maps finite-weight errors at the receiving end into infinite-weight errors at the source end; removing the factor $1/|M_r(z)|$ does not affect the correspondence of the errors, so the adjugate matrix $\mathrm{adj}(M_r(z))$ of $M_r(z)$ is used instead of the inverse.
(3) The equivalent error at the source end is computed as $E(z) = e(z)F_r(z)\,\mathrm{adj}(M_r(z))$.
(4) Let $W_H(y)$ denote the Hamming weight of a given vector $y \in \mathbb{F}_q$, i.e., the number of non-zero coefficients of y; calculate the maximum weight of the equivalent error at the source node, $T_s = \max\{W_H(E)\}$.
(5) Select at the source node an error correcting code with free distance $\ge 2T_s + 1$ capable of correcting the equivalent errors.
The decoding algorithm used at the sink end is as follows. From $y(z) = x(z)G_{O,r}(z) + e(z)F_r(z)$ one obtains $x(z)G_{O,r}(z) = y(z) - e(z)F_r(z)$; processing the error sequence e(z) at the receiving node is equivalent to processing the combined error vector $e(z)F_r(z)$, which characterizes the effect of any error vector e in the network. Although the staggered, convolution-style superposition in $e(z)F_r(z)$ creates great difficulty for error correction, one can still start from the simplest case, in which the combined errors do not overlap; since $F_r(z)$ tends to be very simple, the number of combined-error categories is much smaller than the number of possible error vectors. In the comparison operation, the weight of each combined error vector is defined as the weight of the smallest network error vector producing it. Decoding means finding the decoding path with the smallest accumulated error weight on the output trellis diagram. If all the combined error vectors can be corrected, the network error is corrected, yielding the correct input sequence. The algorithm is described as follows:
distributed decoding algorithm for convolutional network coding
Figure BDA0002127269710000121
Figure BDA0002127269710000131
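Since the algorithm table itself survives only as an image, the following is a brute-force sketch of its core idea in one window — enumerate trellis paths, prune residuals that no admissible combined error can explain, and keep the path of minimum network error weight; the field size, data layout and the `combined_errors` table are assumptions:

```python
from itertools import product

q = 3  # assumed field size

def window_decode(y_window, G_cols, combined_errors, msg_len):
    """Return (input symbols, min accumulated error weight) for one window.

    G_cols: columns of the output-code generator restricted to the window;
    combined_errors: dict mapping each feasible residual tuple y - xG to the
    weight of the minimum network error vector producing it."""
    best = None
    for x in product(range(q), repeat=msg_len):  # all trellis paths
        out = tuple(sum(a * b for a, b in zip(x, col)) % q for col in G_cols)
        residual = tuple((a - b) % q for a, b in zip(y_window, out))
        if residual not in combined_errors:      # infeasible residual: prune branch
            continue
        w = combined_errors[residual]
        if best is None or w < best[1]:
            best = (x, w)
    return best

# toy usage: one input symbol, two output symbols, errors of weight 0 or 1
G_cols = [(1,), (2,)]
errs = {(0, 0): 0, (1, 0): 1, (0, 1): 1, (2, 0): 1, (0, 2): 1}
print(window_decode((1, 2), G_cols, errs, msg_len=1))  # -> ((1,), 0)
```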
For the received sequence y(z), the minimum-weight decoding algorithm selects at each instant the error vector with the smallest weight, $\min\{w_H(e_i)\}$; since the individual combined error vectors do not overlap, this is equivalent to selecting the network error vector of smallest weight $w_H(e(z))$. The smaller $w_H(e(z))$ is, the more probable the error pattern under bit error rate p, i.e., the chosen x(z) is the input sequence most likely to have produced y(z). The whole decoding process is thus equivalent to searching for the most similar codeword x, so that the probability $P(\hat{x} \neq x)$ that the decoded codeword $\hat{x}$ differs from the transmitted codeword x is minimal and $P(\hat{x} = x \mid y)$ is maximal; the decoding result is a MAP path.
For the sink node r with global encoding kernel $F_r(z) = F_0 + F_1 z + \cdots$, the necessary condition for the sequence decoding delay to be L is $\mathrm{rank}(F_0, F_1, \dots, F_L) = \omega$. Combining the minimum-weight decoding condition $\Phi(t,l) \cap \Delta(t,l) = \{0\}$, the minimum decoding delay of the minimum network weight decoding algorithm of the random convolutional network is $L_{\mathrm{delay}} = \max\{l, L\}$.
First, the random convolutional network code is constructed adaptively; once coding succeeds, a random convolutional network error correcting code capable of correcting a set of network errors is designed at the source end; the data is sent into the network and decoded at the receiving node using the random convolutional network error correction decoding algorithm. The invention constructs random convolutional network coding adaptively, and the coding and decoding algorithms have error correction capability with reduced complexity and can be applied to actual networks.
The following mainly describes how the present invention uses theorem 1 to determine the decoding window length and the combined error vector.
Example 8:
the adaptive construction and decoding method based on random convolution network error correcting code is the same as the embodiments 1-7, phi (t, l) is output volumeThe product-code spanned message subspace, i.e., all elements in Φ (t, l), are [0, l]X in the windowlGO,r(z),xl(z) all possible input sequences from time 0 to time l. Δ (t, l) denotes the subspace spanned by the error vectors, Fr(z) can be written as a power series
Figure BDA0002127269710000136
For any error vector e, a corresponding combined error vector at that time is obtained (eF)r,0,eFr,1,...,eFr,l) Wherein eFr,i(0. ltoreq. i.ltoreq.l) is a 1 x ω subvector, from which it is known that the combined error vector at each time instant is a 1 x (1+ l) ω line vector. The following theorem is given below:
theorem 1: for any different y, y 'e phi (t, l), y and y' are divisible on the receiving node and only if
Figure BDA0002127269710000141
For a linear code, y and y' are divisible at the receiving node and only if Φ (t, l) # Δ (t, l) {0 }.
Proof: For source s and receiver r, convolutional network coding can be viewed as linear network coding with encoding kernel $F_r(z)$.
Sufficiency: if $(y + \Delta(t,l)) \cap (y' + \Delta(t,l)) = \emptyset$, i.e., y' + g ≠ y + g', then y' plus any error g caused by an e with $w_H(e) \le t$ will never equal y plus any error g' caused by an e' with $w_H(e') \le t$, so y and y' are separable at the receiving node.
Necessity: if y and y' are separable at the receiving node for any distinct y, y' ∈ Φ(t, l), then for any g, g' we have y' + g ≠ y + g', i.e., y' ≠ y + (g' − g), i.e., $(y + \Delta(t,l)) \cap (y' + \Delta(t,l)) = \emptyset$.
For linear coding, y and y' are separable at the receiving node if and only if $\Phi(t,l) \cap \Delta(t,l) = \{0\}$, since for any y, y' ∈ Φ(t, l) we have y − y' ∈ Φ(t, l). This completes the proof.
From theorem 1, a judgment is needed before decoding: for t and l ($l \ge l_t$), judge whether the intersection of the corresponding Φ(t, l) and Δ(t, l) contains only the zero space. Only then can the message sequence and the error sequence be separated; this yields the length of the sliding window. The combined error vector is computed in each window, and the combined error vector of the next window is obtained by extending the previous one. Determining the length of the decoding window and the combined error vectors within the window is a key step of the decoding algorithm.
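The theorem-1 test reduces to a rank computation: Φ(t, l) and Δ(t, l) intersect only in {0} exactly when stacking bases of the two subspaces loses no rank. A sketch over GF(q), with row bases as integer tuples (an assumed representation):

```python
q = 3  # assumed field size, prime

def gf_rank(rows, q):
    """Rank over GF(q) by Gaussian elimination."""
    m = [[v % q for v in r] for r in rows]
    rank = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rank, len(m)) if m[r][c]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][c], q - 2, q)
        m[rank] = [v * inv % q for v in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][c]:
                f = m[r][c]
                m[r] = [(a - f * b) % q for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def separable(phi_basis, delta_basis):
    """True iff Phi and Delta intersect only in the zero space."""
    joint = gf_rank(list(phi_basis) + list(delta_basis), q)
    return joint == gf_rank(phi_basis, q) + gf_rank(delta_basis, q)

print(separable([(1, 0, 1)], [(0, 1, 2)]))  # True: window length acceptable
```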
The following describes the whole encoding and decoding process of the present invention with reference to specific examples.
Example 9:
The adaptive construction and decoding method based on the random convolutional network error correcting code is the same as in embodiments 1-8. The random convolutional network coding over $\mathbb{F}_3$ is taken as an example; one possible network topology is shown in FIG. 2.
The global encoding kernels of the two outgoing edges of the source s are $f_{e_1} = (1, 0)^T$ and $f_{e_2} = (0, 1)^T$ respectively, i.e., $y_{e_1,t} = x_{1,t}$ and $y_{e_2,t} = x_{2,t}$. It is assumed that network errors occur on each edge with the same probability and are separated by at least a time interval of l + 1. The ω packets of all-zero data are sent into the network, and the random convolutional network coding starts to construct the encoding kernels adaptively.
At time 0, one possible allocation is shown in FIG. 3. At this time the global encoding kernel of the receiving node (given as a matrix image in the original) is not full rank, and the random coding construction is unsuccessful.
At time 1, the local encoding coefficients $k_{e',e,1}$ can be selected uniformly at random from $\mathbb{F}_3$ and the length of the local encoding kernel is increased by 1; one possible allocation is shown in FIG. 4. At this time the global encoding kernel of the receiving node is full rank, and the random coding construction succeeds.
$eF_{r,0} \in \{00, 01, 02, 10, 11, 12, 20, 21, 22\}$ and $eF_{r,1} \in \{00, 01, 02\}$ are the possible output combined error vectors. Calculating the equivalent error at the source end gives $T_s = 2$, so a convolutional code with free distance $d_{free} \ge 2T_s + 1 = 5$ is selected as the input convolutional code, with generator matrix $G_I(z) = [1+z \quad 2 \quad 1+z+z^2]$.
Assume the input sequence is $x(z) = 1 + z^2 + z^5$, i.e., $|x(z)| = 6$. Receiving node $r_2$ is taken as an example to illustrate the q-ary random convolutional network error correction decoding of the RCNECC and its distributed decoding process.
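As a worked sketch of this example's encoding step, the codeword streams are the polynomial products $x(z) \cdot G_I(z)$ over GF(3) (the coefficient-list layout is an assumption for illustration):

```python
q = 3

def poly_mul(a, b):
    """Product of two polynomials given as coefficient lists, over GF(q)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

x = [1, 0, 1, 0, 0, 1]          # x(z) = 1 + z^2 + z^5, so |x(z)| = 6
G_I = [[1, 1], [2], [1, 1, 1]]  # 1+z, 2, 1+z+z^2
print([poly_mul(x, g) for g in G_I])
# -> [[1, 1, 1, 1, 0, 1, 1], [2, 0, 2, 0, 0, 2], [1, 1, 2, 1, 1, 1, 1, 1]]
```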
From the adaptive encoding process described above, $M_{r_2}(z)$ and $F_{r_2}(z)$ are obtained distributively (the matrices are given as images in the original).
after this, the length of the sliding window is determined. By judging phi (r)2,l)∩Δ(r2And l) {0} indicates that l ═ 2 satisfies the condition.
Let $y_i$ denote the sequence received at the i-th time instant (the concrete received values are given as images in the original). When the sequences of the window [0, 2] have been received, decoding starts.
As shown in fig. 5, the first, second and third branches on the tree represent input information 0, 1 and 2, respectively. The combined error screening procedure in [0,2] window is as follows:
According to the algorithm described in Example 7, if $E_0 = 00$ and $E_1 = 00$, the accumulated error weight of $\{00, 00, E_2\}$ is 0. If $E_0$ is not in the reference table of feasible combined errors, the branch $E_0$ is removed from the tree, since theoretically no error vector produces such a combined error. Among the remaining paths, combined errors $E_1$ and $E_2$ that do not belong to the feasible sets are deleted likewise. A unique combined error vector is obtained in the [0, 2] window, with weight 1. The whole process is shown in FIG. 6.
In the window [0, 2] the input symbol $x_0 = 1$ can be determined; the received sequence, minus the influence of the error vector (the unique combined error vector in the sliding window), yields the correct output path.
the sequence received at time 3 is
Figure BDA0002127269710000167
The decoding window is slid forward by one instant to [1,3 ]]Window, received sequence minus decoded word x01 affects in the network:
Figure BDA0002127269710000168
Figure BDA0002127269710000169
can be seen as the output sequence in the first window, the decoding operation and 0,2]The operation in the window is the same. Similarly, when the output sequence is
Figure BDA00021272697100001610
Window [2,4 ]]Only a unique combined error vector remains, which means that the network error is distinguishable in both windows, so that the input sequence (0 → 1) can be uniquely determined. When the output sequence is
Figure BDA00021272697100001611
Figure BDA00021272697100001612
Then (0 → 0), (1), (0) are in the sliding window [4,6 ]]、[5,7]、[6,8]And (4) medium output. The input sequence (1 → 0 → 1 → 0 → 0 → 1) is obtained in a distributed manner. In fact, all combination errors are
Figure BDA00021272697100001613
This is the main reason for achieving distributed decoding, within the error correction capability of the generated convolutional code.
In short, the invention discloses a self-adaptive construction and decoding method for random convolutional network error correcting codes, which solves the prior-art problem of high error-correction coding and decoding complexity in actual networks with unknown topology and transmission delay, through the following steps: adaptively constructing the random convolutional network code; constructing the random convolutional network error correcting code as the input convolutional code; decoding at the receiving end with the q-ary random convolutional network error correction decoding algorithm; and optimizing that algorithm. Based on the transmission process of data packets in the random convolutional network, the invention provides a coding algorithm that sends all-zero data to construct the random convolutional network adaptively, allowing different nodes to select local encoding kernels of different lengths according to their own conditions; errors in the network are made equivalent to the source end, and an error correcting code capable of correcting them is designed before the message is sent into the network; the decoding algorithm for minimum network error weight is extended to random codes and to the $\mathbb{F}_q$ field, forming a q-ary random convolutional network error correction decoding algorithm based on the minimum network error vectors corresponding to the combined errors; the decoded sequence is updated by subtracting the influence of the decoded words and the estimated network errors. A decoding algorithm with low complexity, low delay and strong error correction capability is thus realized, for use in actual networks with unknown topology and transmission delay.

Claims (2)

1. A self-adaptive construction and decoding method based on random convolutional network error correcting codes is characterized by comprising the following steps:
step 1, self-adaptively constructing random convolutional network coding:
the source node sends all-zero data packets to the network to construct a random convolutional network code in a self-adaptive manner, starting from a small code domain, the length of a local code core is increased until all relevant receiving nodes can decode, at the moment, the coding is successful, and the receiving nodes receive a combined error vector formed by mixing and overlapping network errors through intermediate nodes;
the random convolutional network coding is constructed in a self-adaptive mode, and the mathematical model of the random convolutional network is as follows:
the convolutional network coding adopts an adaptive random coding mode, and the information of the transmission matrix $M_r(z)$ is transmitted to the receiving node, in time slots, together with the transmitted symbols; the transmission matrix of the sink node r can be distributively denoted $M_r(z) = M_{r,0} + M_{r,1}z + \cdots + M_{r,t}z^{t} + \cdots$, and the output with errors at each moment is $y_r(z) = x(z)M_r(z) + e(z)F_r(z)$,
where z is a delay factor, x(z) is the input sequence, $x_t$ is the coefficient of x(z) at time t, $M_{r,t}$ is the coefficient of $M_r(z)$ at time t, e(z) is the error sequence, $e_t$ is the coefficient of e(z) at time t, $F_r(z)$ is the transfer matrix at sink node r characterizing the influence caused in the network after the source-end mapping is removed, and $F_{r,t}$ is the coefficient of $F_r(z)$ at time t;
step 2, constructing a random convolution network error correcting code:
collecting network errors by using all-zero test data; the global encoding kernel is sent to the receiving node slot by slot, so that the information of the transmission matrix is obtained distributively at the receiving node, and at the same time the network errors are mixed according to the transmission coding coefficients, so that the combined errors are obtained distributively; when the network intermediate nodes use general polynomials as coefficients, the adjugate matrix of the transmission matrix is used in place of its inverse, and the combined error is made equivalent to the source end; according to the maximum weight $T_s$ of the equivalent error at the source end, an error correcting code with free distance $\ge 2T_s + 1$, capable of correcting the network errors, is selected at the source end as the input code of the random convolutional network;
the method for constructing the random convolutional network error correcting code specifically comprises the following steps:
2.1) in the adaptive construction process, random convolutional network coding collects the combined error vector at each moment by using the all-zero test data: $\{(eF_{r,0}, eF_{r,1}, \dots, eF_{r,l_t})\}$, where $l_t$ is the highest power of the matrix $F_r(z)$;
2.2) the transmission matrix $M_r(z)$ is acquired distributively; its determinant $|M_r(z)|$ is a non-zero polynomial, and the inverse $M_r(z)^{-1} = \mathrm{adj}(M_r(z))/|M_r(z)|$ easily maps finite-weight errors at the receiving end into infinite-weight errors at the source end; removing the factor $1/|M_r(z)|$ does not affect the correspondence of the errors, so the adjugate matrix $\mathrm{adj}(M_r(z))$ of $M_r(z)$ is used instead of the inverse;
2.3) the equivalent error at the source end is computed as $E(z) = e(z)F_r(z)\,\mathrm{adj}(M_r(z))$;
2.4) let $W_H(y)$ denote the Hamming weight of a given vector $y \in \mathbb{F}_q$, i.e., the number of non-zero coefficients of y; calculate the maximum weight of the equivalent error at the source end, $T_s = \max\{W_H(E)\}$;
2.5) an error correcting code with free distance $\ge 2T_s + 1$, capable of correcting this network error, is selected at the source end;
step 3, improving to obtain a q-element random convolutional network error correction decoding algorithm:
extending the Viterbi-like decoding algorithm based on minimum network error weight to random convolutional networks and to the $\mathbb{F}_q$ field, forming the q-ary random convolutional network error correction decoding algorithm; the q-ary random convolutional network error correction decoding algorithm works on the combined error vectors, defines the weight of each combined error vector as the weight of the minimum network error vector, and searches for the decoding path with the minimum accumulated error weight; the algorithm decodes directly at the receiving node and can correct any error within the error correction range of the random convolutional network error correction code;
the improved q-ary random convolutional network error correction decoding algorithm specifically comprises the following steps:
3.1) from $y_r(z) = x(z)G_{O,r}(z) + e(z)F_r(z)$, obtain $x(z)G_{O,r}(z) = y_r(z) - e(z)F_r(z)$, where $G_{O,r}(z)$ is the generator matrix of the output convolutional code; e(z) is processed at the receiving node; the network errors are superposed, staggered in a convolution manner, to form the final $e(z)F_r(z)$, so this is equivalent to processing the individual combined error vectors; the weight of each combined error vector is defined as the weight of the minimum network error vector, and the decoding path with the minimum accumulated error weight is sought; during decoding, all the combined error vectors are corrected, and all the network errors are corrected, yielding the correct input sequence;
3.2) a judgment is added before decoding: judge whether the intersection of the output message subspace Φ(r, l) and the error subspace Δ(r, l) contains only the zero space; if so, the message sequence and the error sequence can be separated, the length of the sliding window is determined at that moment, decoding is carried out in the sliding window, the combined error vector is searched for, and the sliding window is moved; if not, the judgment is made again at the next moment until $\Phi(r,l) \cap \Delta(r,l) = \{0\}$, and then the decoding process in 3.1) is executed;
step 4, optimizing the error correction decoding algorithm of the q-element random convolution network:
in the operation of optimizing the received sequence, updating the decoded sequence by subtracting the influence of the error-correcting decoded code word of the q-element random convolutional network and the estimated network error; the situation in the subsequent window can be equivalently transformed into the first window by removing the influence caused by the input sequence before the window, so that the distributed decoding with low complexity and low decoding time delay is realized;
the method comprises the following steps of optimizing a q-element random convolution network error correction decoding algorithm, wherein distributed decoding with low complexity and low decoding time delay is realized:
4.1) when an error occurs in the network and the combined error vector is uniquely determined within the decoding window, its effect must be subtracted: ỹ_r(z) = y_r(z) − e(z)·F_r(z), where y_r(z) denotes the erroneous output and ỹ_r(z) denotes the updated, error-free output;
4.2) suppose the transmitted message symbol x_0 can be uniquely determined; by the nature of convolutional network coding, the known received sequence is a linear combination of x_0 and x_1; subtract the influence of x_0 from the output sequence: y_eff(z) = y_r(z) − x_0·G_{O,r}(z); after the influence of x_0 is removed, the first ω positions of the received sequence are all zero, and y_eff(z)/z is taken as the newly received sequence; the situation in each subsequent window is thus transformed equivalently into the first window by removing the influence of the input sequence preceding that window;
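A scalar toy example of the updates in 4.1) and 4.2) above (all coefficient values and the field size q = 5 are assumed; ω = 1 here, so removing x_0 zeroes exactly the first position):

```python
# Minimal scalar sketch of the sequence updates in 4.1)-4.2), with assumed
# coefficients over GF(5) and omega = 1 source symbol per time instant.
Q = 5

def conv(a, b, n):
    """First n coefficients of a(z) * b(z), reduced mod Q."""
    return [sum(a[i] * b[t - i] for i in range(t + 1)
                if i < len(a) and t - i < len(b)) % Q
            for t in range(n)]

def add(a, b): return [(u + v) % Q for u, v in zip(a, b)]
def sub(a, b): return [(u - v) % Q for u, v in zip(a, b)]

G = [1, 2, 0, 3]      # assumed response of G_{O,r}(z)
F = [1, 0, 4]         # assumed response of F_r(z)
x = [2, 3, 1, 0]      # source sequence x(z)
e = [0, 1]            # network-error sequence e(z)

y = add(conv(x, G, 4), conv(e, F, 4))     # received: y_r(z) = x(z)G(z) + e(z)F(z)

y_clean = sub(y, conv(e, F, 4))           # 4.1) subtract the combined error e(z)F_r(z)
y_eff = sub(y_clean, conv([x[0]], G, 4))  # 4.2) subtract the decoded x_0's influence
assert y_eff[0] == 0                      # first omega positions are now zero
y_next = y_eff[1:]                        # y_eff(z)/z becomes the new received sequence
print(y_next)
```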
4.3) assume the sequence-decoding delay is L, i.e. at receiving node r, x_0 can be uniquely determined from the character stream received from time 0 to L: x_0·M_{r,0}, x_0·M_{r,1} + x_1·M_{r,0}, …, x_0·M_{r,L} + … + x_L·M_{r,0}; for a sink node r with global encoding kernel M_r(z), a necessary condition for the decoding delay to be L is rank(F_0, F_1, …, F_L) = ω; combining this with the decoding condition Φ(r, L) ∩ Δ(r, L) = {0}, the decoding delay of the q-ary random convolutional network error-correction decoding algorithm is L_delay = max{L, l}.
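The delay condition in 4.3) amounts to finding the smallest L at which the stacked matrices reach rank ω; a sketch with assumed GF(2) matrices and an assumed window length l:

```python
# Minimal sketch of the delay condition in 4.3): find the smallest L with
# rank([F_0 F_1 ... F_L]) = omega, then combine with the window length l
# from 3.2) to give L_delay = max(L, l). F_t and l are assumed examples.
import numpy as np

def gf2_rank(m):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    m = np.array(m, dtype=np.uint8) % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]   # move pivot row up
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]               # clear the column
        rank += 1
    return rank

omega = 2
F_t = [np.array([[1, 0], [0, 0]], dtype=np.uint8),   # F_0 alone: rank 1
       np.array([[0, 0], [1, 1]], dtype=np.uint8)]   # with F_1: rank 2 = omega

stacked, L = F_t[0], 0
while gf2_rank(stacked) < omega:
    L += 1
    stacked = np.hstack([stacked, F_t[L]])

l = 3                                         # window length from 3.2) (assumed)
print("L =", L, ", L_delay =", max(L, l))     # L_delay = max{L, l}
```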
2. The adaptive construction and decoding method of random convolutional network error-correcting codes according to claim 1, wherein the adaptive construction of the random convolutional network coding in step 1 specifically comprises the following steps:
1.1) send ω packets of all-zero data into the network, where ω is the transmission rate of the source message, so that the network begins its adaptive construction;
1.2) convolutional network coding is physically realizable if and only if the constant-term coefficient matrix K_0 of the local encoding kernels is nilpotent; for edge pairs with e′ > e, the constant-term coefficient k_{e′,e,0} of the local encoding kernel is chosen uniformly at random from a small field and is set to zero otherwise; after initialization, the coefficients k_{e′,e,t} are chosen uniformly at random from the small field (see the sketch after this claim);
1.3) the computed global encoding kernels, together with the local encoding kernels, are placed in the packet header; the length of the local encoding kernel increases at each time instant, and different nodes select local encoding kernels of different lengths according to their own conditions, until every receiving node has a full-rank transfer matrix.
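For step 1.2), nilpotency of K_0 can be verified directly: K_0 is nilpotent iff K_0^n = 0, where n is the number of edges. A minimal sketch with an assumed 3-edge topology over GF(2):

```python
# Minimal sketch of the realizability test in 1.2): convolutional network
# coding is physically realizable iff the constant-term local-kernel matrix
# K_0 is nilpotent (K_0^n = 0 for n edges). The 3-edge K_0 below is assumed.
import numpy as np

def is_nilpotent_gf2(K):
    n = K.shape[0]
    P = K.copy()
    for _ in range(n - 1):
        P = (P @ K) % 2          # raise K_0 to the n-th power over GF(2)
    return not P.any()           # K_0^n == 0 ?

# Edges ordered so that k_{e',e,0} = 0 unless e' > e: K_0 is strictly
# upper triangular, hence nilpotent, and the code is physically realizable.
K0 = np.array([[0, 1, 1],
               [0, 0, 1],
               [0, 0, 0]], dtype=np.uint8)
print(is_nilpotent_gf2(K0))      # True
```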
CN201910626457.6A 2019-07-11 2019-07-11 Self-adaptive construction and decoding method based on random convolutional network error correcting code Active CN110460340B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910626457.6A CN110460340B (en) 2019-07-11 2019-07-11 Self-adaptive construction and decoding method based on random convolutional network error correcting code

Publications (2)

Publication Number Publication Date
CN110460340A (en) 2019-11-15
CN110460340B (en) 2022-04-05

Family

ID=68482691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910626457.6A Active CN110460340B (en) 2019-07-11 2019-07-11 Self-adaptive construction and decoding method based on random convolutional network error correcting code

Country Status (1)

Country Link
CN (1) CN110460340B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112217820B (en) * 2020-09-27 2022-08-09 伍仁勇 Network transmission method and system, and local coding core generation method and system
CN112600647B (en) * 2020-12-08 2021-11-02 西安电子科技大学 Multi-hop wireless network transmission method based on network coding endurance

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1479976A (en) * 2000-12-06 2004-03-03 Motorola Inc. Apparatus and method for providing optimal self-adaptive forward error correction in communication system
CN106603196A (en) * 2016-11-22 2017-04-26 西安电子科技大学 Convolutional network error-correcting code coding and decoding method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9166624B2 (en) * 2010-05-11 2015-10-20 Osaka University Error-correcting code processing method and device
US8924831B2 (en) * 2011-08-26 2014-12-30 Texas Instruments Incorporated Systems and methods for network coding using convolutional codes

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1479976A (en) * 2000-12-06 2004-03-03 Motorola Inc. Apparatus and method for providing optimal self-adaptive forward error correction in communication system
CN106603196A (en) * 2016-11-22 2017-04-26 西安电子科技大学 Convolutional network error-correcting code coding and decoding method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Convolutional Codes for Network-Error Correction";K.Prasad.;《GLOBECOM 2009 - 2009 IEEE Global Telecommunications Conference》;20100304;全文 *
"卷积网络编码";郭网媚,蔡宁;《中国电子科学研究院学报》;20120228;全文 *
"卷积网络编码及其应用";郭网媚;《中国博士学位论文全文数据库》;20140131;全文 *

Also Published As

Publication number Publication date
CN110460340A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
JP5474582B2 (en) Network re-encoding method and apparatus for re-encoding encoded symbols transmitted to a communication device
Hashemi et al. Partitioned successive-cancellation list decoding of polar codes
JP4627317B2 (en) Communication apparatus and decoding method
KR101751497B1 (en) Apparatus and method using matrix network coding
Yang et al. BATS Codes: Theory and practice
CN109347604B (en) Multi-hop network communication method and system based on batched sparse codes
CN107565978B (en) BP decoding method based on Tanner graph edge scheduling strategy
CN107565984B (en) Raptor code optimized coding method with precoding as irregular code
CN110326342A (en) A kind of device and method of the ordered sequence for prescribed coding subchannel
CN110113131B (en) Network communication method and system based on batch coding
CN110460340B (en) Self-adaptive construction and decoding method based on random convolutional network error correcting code
CN106998242B (en) Unequal protection erasure coding method for space communication distributed dynamic network topology
CN104052499B (en) Erasure correcting decoding method and system of LDPC code
JP2023547596A (en) Method and apparatus for encoding and decoding data using concatenated polarity adjusted convolutional codes
Papadopoulou et al. Short codes with near-ML universal decoding: Are random codes good enough?
CN109831281B (en) Multi-user detection method and device for low-complexity sparse code multiple access system
CN110430011B (en) BATS code coding method based on regular variable node degree distribution
KR101991447B1 (en) The Method of Protograph LDPC codes Construction Robust to Block Interference and Fading
CN111865488B (en) Code selection method for multi-hop short packet communication
US7814392B2 (en) System, apparatus and methods of dynamically determined error correction codes in communication systems
Lyu et al. Reliability-oriented decoding strategy for ldpc codes-based d-jscc system
CN109728900B (en) LDPC error correction code rate self-adaption method and system in discrete variable quantum key distribution
Liu et al. Adaptive Construction and Decoding of Random Convolutional Network Error-correction Coding
Qin et al. Reinforcement-learning-based Overhead Reduction for Online Fountain Codes with Limited Feedback
Wu et al. Interpolation-based low-complexity Chase decoding algorithms for Hermitian codes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant